text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Effect of Thermal Treatment on Crystallinity of Poly(ethylene oxide) Electrospun Fibers
Post-process thermal treatment of electrospun fibers obtained from poly(ethylene oxide) (PEO) water and methanol solutions was examined. PEO fibers from methanol solution showed larger diameters as observed by scanning electron microscopy. Fibers both from water and methanol solutions exhibited a significant dimensional stability and surface cracking during the specific exposure time after thermal treatments at 40, 50, and 60 °C. Changes in crystallinity after the thermal treatment were studied by wide-angle X-ray diffraction. The kinetics of secondary crystallization were positively influenced by the as-processed level of the amorphous phase and temperature of thermal treatment. Samples treated at 60 °C were degraded by thermooxidation within the time.
Introduction
Poly(ethylene oxide) (PEO) is a nontoxic highly crystalline polymer with a glass transition temperature of approximately −50 • C, promising extensive chain flexibility [1]. It can be dissolved in various organic solvents, such as methanol or ethanol, and water, which classifies it as an easily spinnable material. Electrospinning is one of the processing methods enabling the production of micro-, submicro-, and nanofibers from polymer solutions and melts using electric forces [2][3][4]. Electrospinning of PEO solutions at room temperature has been thoroughly investigated, providing many research studies describing PEO nonwoven webs with various fiber sizes and molecular orientations depending on the chosen processing parameters [5,6], solvents [7], and solution properties [8].
PEO often serves as a carrier liquid facilitating the electrospinning process of some polymers, such as chitosan or cellulose [9]. These nanofibrous membranes perform many beneficial properties, including a high pore density and surface area. Therefore, they may be employed in diverse applications, such as tissue scaffold, wound dressing, or antibacterial membranes [9,10]. However, nanofibrous membranes lack mechanical strength, which could be enhanced by introducing interfiber bonding after specific treatments. Generally, a thermal treatment is used to bond the electrospun nanofibers. Thermal treatment of nanofibers to improve the crystallinity and mechanical properties of various polymers, such as poly(vinylidene fluoride), poly(4-methyl-1-pentene), polyester, and PEO, has recently been reported [11][12][13][14]. For instance, Bao et al [14] studied the morphology of PEO nanofibrous membranes either with or without multiwall carbon nanotubes thermally bonded at temperatures ranging from 60 to 64 • C. To the authors' knowledge, the thermal-induced morphology of pure PEO nanofibers has not been examined yet.
The crystallinity and molecular orientation of electrospun fibers are normally significantly lower if compared with samples prepared by common processing technologies (molding or film casting) [15]. Macroscopic alignment of fibers was gained either by a rotating mandrel collector [1] or by counter-electrode plates' integration [16,17]. Nonetheless, macroscopic alignment may not always be connected with the orientation on the microscopic level and the degree of crystallinity remains almost the same as without the alignment. The structure's formation during the electrospinning process is indeed a complex issue. The most critical parameters affecting the crystallization process include the degree of chain orientation/relaxation and the rate of solvent evaporation [18].
Post-processed thermal treatment seems to be the proper technique, leading both to evaporation of the residual solvent and an increase of the crystalline phase [19]. A reduction in the interlamellar distance, also known as lamellar thickening, is typical for this phenomenon. Nevertheless, amorphous and crystalline phase rearrangement can be accompanied with the thermooxidation process in the solid state [20]. If compared to pure hydrocarbon polymers, such as poly(ethylene) and poly(propylene), PEO is more sensitive to thermal oxidation [21]. In this context, scission of the chains during this process also induces morphological changes that can be reflected in the overall degree of crystallinity. Thus, the proper temperature-time condition for post-process thermal treatment should be established and was examined in this paper.
Materials
Polyethylene oxide (PEO) of M w = 300,000 g/mol was purchased from Sigma Aldrich (St. Louis, MO, USA). It was dissolved in distilled water and methanol (9 wt.%). PEO solutions were stirred with a mixing rate of 250 rpm for 48 h using a magnetic stirrer (Heidolph MR Hei-Tec, Schwabach, Germany) with a teflon-coated magnetic cross at 25 • C for water solutions and 35 • C for methanol solutions.
Characterization of the Solution
A rotational rheometer Physica MCR 501 (Anton Paar, Graz, Austria) with the concentric cylinder arrangement with inner and outer diameters of 26.6 and 28.9 mm, respectively, was used to determine the rheological properties of PEO solutions at 25 • C. Rheological measurements of the polymer solutions were performed in steady (a range of shear rates from 0.01 to 300 s −1 ) and oscillatory (at a constant strain of 1% and with a frequency ranging from 0.1 to 100 s −1 ) modes.
Electrospinning
Nanofibrous webs were created by the laboratory device [22], which consisted of a high-voltage power supply (Spellman SL70PN150, Hauppauge, NY, USA), carbon-steel stick of a diameter of 10 mm, and motionless flat-metal collector. The electrospinning process was performed from a drop of polymer solution (0.2 mL) placed on the tip of a steel stick at a voltage of 25 kV with the tip-to-collector distance fixed at 200 mm at 20 ± 1 • C with a relative humidity of 38 ± 3%.
Thermal Treatment of Nanofibrous Web
In order to avoid any structural transformations of the prepared samples, electrospun webs were cooled to -75 • C (a significantly lower temperature than PEO T g of -54 • C) immediately after the electrospinning process and stored at this temperature for further experiments. Afterwards, PEO nanofibrous webs were exposed to thermal treatments at various temperatures of 40, 50, and 60 • C for a specified period of time.
Characterization of Nanofibrous Web
Characteristics of nanofibrous webs were evaluated using a Vega 3 high resolution scanning electron microscope (Tescan, Brno, Czech Republic). A conductive coating layer was applied in advance. To determine the mean fiber diameter, Adobe Creative Suite software was used. In total, 300 measurements were executed in three different images.
Crystallinity of the prepared samples was characterized by wide-angle X-ray diffraction (WAXD). Diffractograms were recorded by a Philips diffractometer (XPertPRO, Almelo, The Netherlands) equipped with a hot stage. Measurements were completed in a reflection mode in the 2θ range of 10 to 35 • with a nominal resolution of 0.03 • at selected temperatures and a quasi-logarithmic sampling frequency. Every test was performed in duplicate to enhance the repeatability.
Results and Discussion
The diameter of nanofibers depends on various factors: (i) Polymer characteristics (molecular weight and topology of polymer macromolecules), (ii) solvent characteristics (vapor pressure, surface tension, viscosity), (iii) solution properties (concentration, viscosity and elasticity of solution), and (iv) electrospinning parameters (electric field strength, tip-to-collector distance, temperature, and humidity). The basic characteristics of the solvents are summarized in Table 1. Identical PEO was used for water and methanol solutions. Table 2 shows the rheological behavior of the solutions and resulting fiber diameters. As can be seen, the viscosity of methanol/PEO solutions was lower compared with water/PEO solutions, although the concentration was maintained on the same level. However, electrospun fibers obtained from methanol solutions exhibited significantly larger diameters. Thus, in this case, the main parameter controlling the fiber thickness is the evaporation kinetics of a specific solvent represented by the vapor pressure; ejected jets from the Taylor cone can be drawn in the electric field until evaporation of the solvent and fiber solidification. In addition, solvent quality can be derived using Hansen solubility parameters, indicating the degree of interaction between a polymer and a solvent [23]. The solubility parameter of PEO is 10.5 (cal cm −3 ) 1/2 , methanol is 29.6 (cal cm −3 ) 1/2 , and water is 47.8 (cal cm −3 ) 1/2 [23,24]. Due to smaller differences in the total solubility parameters, methanol is more suitable as a solvent for PEO than water. Figure 1 shows the micrographs of electrospun fibers after four-hour treatments at selected temperatures. As can be seen, the initial smooth surface and diameter of the fibers are almost intact after the thermal treatment in a provided interval. However, the surface of the fibers obtained from the water solution tends to roughen, break, and re-join at a temperature of 60 • C as visible in the upper-right corner of the micrograph. This indicates thermodynamic instability of the fibers. Surface roughening can be ascribed to the chemi-crystallization process [25]. This phenomenon has been described and explained for several polymers and relates to a chain scission on the surface layer. Released macromolecular segments can be incorporated into the already existing crystallites, resulting in surface shrinkage and cracking, particularly on the lamellae borders. The structure of thermal-treated nanofibrous webs consists of well-identified separated fibers, which enables a clear image analysis and calculation of the fiber diameters. Figure 2 depicts the mean fiber diameter and corresponding standard deviation as a function of the temperature of the thermal treatment. As no extensive shrinkage of the oriented amorphous portion occurred, it can be concluded that the diameter variation is statistically insignificant. Although electrospun fibers generally exhibit a high amount of the amorphous phase [26], the level of crystallinity was sufficiently high enough to dimensionally stabilize the nanofibrous webs in this experiment. Crystallinity development in nanofibers after thermal treatment can be examined nondestructively by wide-angle X-ray diffraction. Typical X-ray patterns of PEO are illustrated in Figure 3. Sharp reflections in diffractograms correspond to PEO crystallites. Furthermore, a wide diffused halo is evidence of the amorphous phase. To determine crystallinity precisely, diffractograms have to be separated into individual crystalline peaks and an amorphous halo. In this case, each crystalline peak and amorphous halo was fitted by an individual Gauss function. Finally, crystallinity of the samples was calculated as a ratio between the sum of crystalline peak areas and the area of the whole X-ray pattern. The structure of thermal-treated nanofibrous webs consists of well-identified separated fibers, which enables a clear image analysis and calculation of the fiber diameters. Figure 2 depicts the mean fiber diameter and corresponding standard deviation as a function of the temperature of the thermal treatment. As no extensive shrinkage of the oriented amorphous portion occurred, it can be concluded that the diameter variation is statistically insignificant. Although electrospun fibers generally exhibit a high amount of the amorphous phase [26], the level of crystallinity was sufficiently high enough to dimensionally stabilize the nanofibrous webs in this experiment. The structure of thermal-treated nanofibrous webs consists of well-identified separated fibers, which enables a clear image analysis and calculation of the fiber diameters. Figure 2 depicts the mean fiber diameter and corresponding standard deviation as a function of the temperature of the thermal treatment. As no extensive shrinkage of the oriented amorphous portion occurred, it can be concluded that the diameter variation is statistically insignificant. Although electrospun fibers generally exhibit a high amount of the amorphous phase [26], the level of crystallinity was sufficiently high enough to dimensionally stabilize the nanofibrous webs in this experiment. Crystallinity development in nanofibers after thermal treatment can be examined nondestructively by wide-angle X-ray diffraction. Typical X-ray patterns of PEO are illustrated in Figure 3. Sharp reflections in diffractograms correspond to PEO crystallites. Furthermore, a wide diffused halo is evidence of the amorphous phase. To determine crystallinity precisely, diffractograms have to be separated into individual crystalline peaks and an amorphous halo. In this case, each crystalline peak and amorphous halo was fitted by an individual Gauss function. Finally, crystallinity of the samples was calculated as a ratio between the sum of crystalline peak areas and the area of the whole X-ray pattern. Crystallinity development in nanofibers after thermal treatment can be examined non-destructively by wide-angle X-ray diffraction. Typical X-ray patterns of PEO are illustrated in Figure 3. Sharp reflections in diffractograms correspond to PEO crystallites. Furthermore, a wide diffused halo is evidence of the amorphous phase. To determine crystallinity precisely, diffractograms have to be separated into individual crystalline peaks and an amorphous halo. In this case, each crystalline peak and amorphous halo was fitted by an individual Gauss function. Finally, crystallinity of the samples was calculated as a ratio between the sum of crystalline peak areas and the area of the whole X-ray pattern. While crystalline peaks can be unambiguously identified, the maximum of the amorphous halo varies with the temperature and must be evaluated experimentally. For these purposes, X-ray patterns were measured in the melt state at various temperatures and the maximum of the individual amorphous halo at a specific temperature was assessed [27,28]. In Figure 4, the position of the maximum amorphous halo as a function of temperature is shown. It can be stated that under specific experimental conditions, the maximum of the amorphous halo is a linear function of the temperature. Then, the maximum position can be extrapolated to the solid state, i.e., at temperatures below 60 °C, and the diffractogram can be separated into individual crystalline peaks and an amorphous halo. Figure 6 show the evolution of crystallinity of the nanofibrous webs after the thermal treatment at various temperatures. As can be seen, the initial crystallinity of nanofibers is While crystalline peaks can be unambiguously identified, the maximum of the amorphous halo varies with the temperature and must be evaluated experimentally. For these purposes, X-ray patterns were measured in the melt state at various temperatures and the maximum of the individual amorphous halo at a specific temperature was assessed [27,28]. In Figure 4, the position of the maximum amorphous halo as a function of temperature is shown. It can be stated that under specific experimental conditions, the maximum of the amorphous halo is a linear function of the temperature. Then, the maximum position can be extrapolated to the solid state, i.e., at temperatures below 60 • C, and the diffractogram can be separated into individual crystalline peaks and an amorphous halo. While crystalline peaks can be unambiguously identified, the maximum of the amorphous halo varies with the temperature and must be evaluated experimentally. For these purposes, X-ray patterns were measured in the melt state at various temperatures and the maximum of the individual amorphous halo at a specific temperature was assessed [27,28]. In Figure 4, the position of the maximum amorphous halo as a function of temperature is shown. It can be stated that under specific experimental conditions, the maximum of the amorphous halo is a linear function of the temperature. Then, the maximum position can be extrapolated to the solid state, i.e., at temperatures below 60 °C, and the diffractogram can be separated into individual crystalline peaks and an amorphous halo. Figure 6 show the evolution of crystallinity of the nanofibrous webs after the thermal treatment at various temperatures. As can be seen, the initial crystallinity of nanofibers is Figures 5 and 6 show the evolution of crystallinity of the nanofibrous webs after the thermal treatment at various temperatures. As can be seen, the initial crystallinity of nanofibers is defined by the type of electrospun solution. Fibers prepared from methanol solution have a crystallinity of 5%, higher than water solution fibers. This may be on account of the surface irregularity and solvent quality. It is expected that the fibers' surface solidifies extremely fast, which leads to a higher level of structural defects if compared with the core of the fiber [29]. Consequently, the overall crystallinity of the fibers depends on the surface/volume ratio. Thinner fibers from the water solution are more amorphous. Furthermore, the solvent quality also plays an important role in the overall crystallinity of the fibers obtained from solution. It has been reported that the solvent considerably affects the mobility of dissolved macromolecular segments, manifesting itself in improved crystallization kinetics and crystallite development [30].
Polymers 2019, X, x FOR PEER REVIEW 6 of 9 defined by the type of electrospun solution. Fibers prepared from methanol solution have a crystallinity of 5%, higher than water solution fibers. This may be on account of the surface irregularity and solvent quality. It is expected that the fibers' surface solidifies extremely fast, which leads to a higher level of structural defects if compared with the core of the fiber [29]. Consequently, the overall crystallinity of the fibers depends on the surface/volume ratio. Thinner fibers from the water solution are more amorphous. Furthermore, the solvent quality also plays an important role in the overall crystallinity of the fibers obtained from solution. It has been reported that the solvent considerably affects the mobility of dissolved macromolecular segments, manifesting itself in improved crystallization kinetics and crystallite development [30]. defined by the type of electrospun solution. Fibers prepared from methanol solution have a crystallinity of 5%, higher than water solution fibers. This may be on account of the surface irregularity and solvent quality. It is expected that the fibers' surface solidifies extremely fast, which leads to a higher level of structural defects if compared with the core of the fiber [29]. Consequently, the overall crystallinity of the fibers depends on the surface/volume ratio. Thinner fibers from the water solution are more amorphous. Furthermore, the solvent quality also plays an important role in the overall crystallinity of the fibers obtained from solution. It has been reported that the solvent considerably affects the mobility of dissolved macromolecular segments, manifesting itself in improved crystallization kinetics and crystallite development [30]. What is common for fibers prepared from water and methanol solutions is the monotonic growth of crystallinity in the initial phase of the process of thermal annealing. The kinetics of the process depends on the temperature and as-processed crystallinity. At the beginning, the crystallinity of fibers prepared from water solutions grows significantly faster than that of fibers prepared from methanol and the process is positively influenced by the annealing temperature. Consequently, the rate of secondary crystallization in PEO is controlled by the mobility of the polymeric chains, which is generally higher in the amorphous phase.
After a certain time, crystallinity changes induced by the thermal treatment should reach the maximum. However, in the experiment, the maximum was reached in the samples prepared from both methanol and water solutions only at the temperature of 60 • C. Regarding the temperatures of 40 and 50 • C, crystallinity maxima can be expected after longer periods of thermal treatment. It is interesting to note that the crystalline portion of the samples thermally treated at 60 • C is not stable and with a prolonged process, its level decreases even below the as-processed values. This indicates a competing process of degradation proceeding mutually during the thermal treatment. It has been reported that PEO is sensitive to thermooxidation even at relatively low temperatures. Morlat et al. states that thermooxidative changes can be observed after thermal treatment of PEO at 50 • C [21]. The overall kinetics of the degradation are controlled by oxygen diffusion. From the morphology point of view, oxygen diffusion dominates in amorphous regions. As a consequence, massive thermooxidative changes manifested by a decrease of crystallinity are faster in the fibers prepared from water solution.
Conclusions
The experimental results of this study have shown the following conclusions for PEO electrospun fibers: 1. Electrospun PEO fibers processed from methanol solution possess higher diameters compared with fibers obtained from water. It respects the evaporation kinetics of a solvent.
2. PEO fibers prepared from methanol and water solutions by electrospinning are dimensionally stable even during thermal treatment close to their melting temperature.
3. As a consequence of chemi-crystallization, the fiber surface is roughened after a specific period of thermal treatment.
4. Solvent quality positively influences the as-processed crystallinity of PEO electrospun fibers, which is manifested by the higher crystalline portion in fibers prepared from methanol solution.
5. The crystalline phase increment during secondary crystallization is positively impacted by the as-processed crystalline level of PEO electrospun fibers. Similarly, the temperature of the thermal treatment accelerates secondary crystallization in the observed temperature interval. 6. At a prolonged time of thermal treatment at 60 • C, massive thermooxidative degradation of both types of electrospun PEO fibers is manifested by a decrease in crystallinity. | 4,620 | 2019-08-23T00:00:00.000 | [
"Materials Science"
] |
Potential repurposing of four FDA approved compounds with antiplasmodial activity identified through proteome scale computational drug discovery and in vitro assay
Malaria elimination can benefit from time and cost-efficient approaches for antimalarials such as drug repurposing. In this work, 796 DrugBank compounds were screened against 36 Plasmodium falciparum targets using QuickVina-W. Hits were selected after rescoring using GRaph Interaction Matching (GRIM) and ligand efficiency metrics: surface efficiency index (SEI), binding efficiency index (BEI) and lipophilic efficiency (LipE). They were further evaluated in Molecular dynamics (MD). Twenty-five protein–ligand complexes were finally retained from the 28,656 (36 × 796) dockings. Hit GRIM scores (0.58 to 0.78) showed their molecular interaction similarity to co-crystallized ligands. Minimum LipE (3), SEI (23) and BEI (7) were in at least acceptable thresholds for hits. Binding energies ranged from −6 to −11 kcal/mol. Ligands showed stability in MD simulation with good hydrogen bonding and favorable protein–ligand interactions energy (the poorest being −140.12 kcal/mol). In vitro testing showed 4 active compounds with two having IC50 values in the single-digit μM range.
Scientific Reports
| (2021) 11:1413 | https://doi.org/10.1038/s41598-020-80722-2 www.nature.com/scientificreports/ is known to undergo a significant conformational change upon binding of XMP 17,18 . An induced-fit docking approach or an approach with flexible residues may be a better alternative. Hence, despite the overall success in redocking with the majority of the structures, these three cases may be a limitation. On binding energies, standardization of scores alleviated protein and ligand related biases. The difference of the mean of binding energies of ligands screened on 1nhg and 4qt3 was 2.61 kcal/mol (-9.11 kcal/mol and −6.50 kcal/mol respectively) (Fig. 1B,C). This may be explained by the buried active site in 1nhg, as compared to the more solvent-exposed active site of 4qt3. The standardization process reduced this inter protein noise by centering the mean of binding energies on each protein at zero. A similar phenomenon is observed when viewing and comparing the binding of pairs of ligands across the whole dataset. Co-crystallized ligands in 3fi8 and 1rl4, 2-aminoethyl dihydrogen phosphate and (2R)-2-[(hydroxy-amino)methyl]hexanoic acid have low molecular weights 141 Da and 189 Da respectively which may explain their low scores across all proteins (Fig. 1B). Indeed, vina scoring function has a ligand size-related bias 20 . By contrast, the co-crystallized ligand from 4j56, flavinadenine dinucleotide, has a high molecular weight (785 Da) which may explain its high promiscuity (Fig. 1B).
As key findings from the workflow evaluation, 77% of co-crystallized ligands poses were reproduced. On the binding energies, scores standardization improved protein and ligand related biases.
The above workflow was extended to include ligand efficiency metrics and GRIM. The overall screening workflow including docking, scoring, ranking and molecular dynamics is represented in Fig. 2 and further detailed in the methods Sect. 796 drugs from DrugBank 21 were docked against 36 Plasmodium falciparum protein structures. LipE, BEI and SEI were calculated for each ligand most energetically favorable pose. All poses were scored using the GRIM tool to obtain the Grscore and the pose with the best Grscore was maintained in the GRIM ranking scheme. LipE, SEI, BEI scores were standardized through the z-score across the proteins and then the ligands. All scores were then transformed to their ranks and a complex rank (sum of the ligand and protein ranks) is www.nature.com/scientificreports/ attributed to each complex as described above. A protein rank is its rank respective to a ligand when compared to other proteins. Complex ranks are finally summed and ranked. From the ranked list, 36 complexes were selected (each protein and ligand being unique) for further analysis of their docked poses and structure stability in MD. Finally, 25 stable complexes were maintained.
Screening the DrugBank subset against P. falciparum structures. Top 23 . SEI values were consistently low. The minimum value was 7 while the highest value was 33. The average SEI value was 12, well within the lower limit threshold for SEI (15) defined as a good starting point by Kumar et al. 24 . Abiraterone, a hydrophobic compound with a cLogP of 5.3 and PSA of 33 Å 2 had the highest SEI value. Dianhydrosorbitol 2,5-dinitrate, gemifloxacin, temozolomide, triamcinolone, pirbuterol, terazosin, tenoxicam, abacavir and sitaxentan all showed SEI < 10.
On the other hand, with respect to their molecular interactions, hits and co-crystallized ligands showed similar interaction profiles supported by their high Grscore. All the selected complexes ligands showed a Grscore greater or equal to 0.58. Hit structures (Fig. 3) were compared with those of known inhibitors and current antimalarials. Most hits presented new scaffolds with good potential for scaffold modification in their respective binding sites. In the comparison of the current hits to the set of FDA-approved antimalarials, only terazosin (the hit for 2pml) and the FDA-approved primaquine had a similarity above 0.5 (0.52). However, this value is still too low to infer significant structural analogy. Further, hit scaffolds were also compared with those of the corresponding target known inhibitors in ChEMBL 25 (if present) and to those of known antimalarials. The targeted protein FASTA sequence for known antimalarials was searched in ChEMBL 25 using BLAST to find the corresponding protein. In most cases, the known inhibitors showed low similarity to the hit compound.
These differences between the hits and the co-crystallized ligands in terms of structure is in contrast with the high Grscores of the hits. However, Grscores do not imply ligand similarity, but rather similarity in molecular interactions. Of course, the GRIM score is not ligand centric nor target centric but focuses on the similarity of molecular interactions 26 . Nevertheless, some moieties and subgroups from hits had comparable binding modes with the co-crystallized ligands. For example, abacavir and sinefungin both have a purine moiety similarly interacting with the target 3uj8 (see Supplementary Fig. S15).
In addition, many compounds have appropriate molecular properties for scaffold modification (low molecular weight and logP). Furthermore, in their binding modes, some compounds showed good potential for extension within the binding site. Table 1 presents the four hits confirmed through in vitro experiments, their IC 50 , binding energies, Grscore, and predicted targets. The section below describes the confirmed hits through in vitro assays. All other complexes selected after docking are described in the Supplementary Data (Sect. 1.4). A detailed analysis was performed on the four complexes: not only binding poses and molecular interactions but also their potential action on the parasite life cycle. All drug indication information was obtained from DrugBank 21. Plasmepsin 2 (PDB ID: 2igx)-Fingolimod (DB08868) Plasmepsin 2 belongs to a family of aspartic proteinases involved in haemoglobin degradation for the parasite, and the crystal structure 2igx is co-crystallized with a potent achiral inhibitor (HET CODE: A1T) 27 . The compound fingolimod is known to modulate sphingosine 1-phosphate receptor activity and is indicated for the treatment of relapsing-remitting multiple sclerosis. This compound binds to Plasmepsin 2 at the A1T binding site, buried in a hydrophobic pocket, interacting with ASP121 though one of its hydroxyl groups ( Fig. 3A and Supplementary Figure S38a) www.nature.com/scientificreports/ deep within the pocket. Fitting within that pocket has been observed to be crucial for high-affinity inhibitors 27 . Fingolimod has its polar groups (hydroxyls and amino) fitting in the depth of the hydrophobic pocket which may have associated energetic cost 28 . By contrast, A1T has a characteristic long aliphatic chain. Through its aromatic ring, fingolimod makes hydrophobic contacts with TYR77, PHE111, ILE123 and VAL82, and has Pi-Pi interactions with TYR77 and PHE111 (Fig. 3A). The carbon chain engages in alkyl contacts with PHE120 and PHE111 (Fig. 3A). Unlike A1T, fingolimod which presents a different scaffold (Tanimoto similarity 0.22 with A1T), and further does not show a large expansion in the vast trench area outside the pocket, though this region may offer an ideal possibility for scaffold growing. The phenanthrene halofantrine is a common antimalarial drug that targets Plasmepsin 2 29 . Protein kinase 7 (PDB ID: 2pml)-Terazosin (DB01162) PfPK7 is an "orphan" kinase with the advantage that it presents different features to those of mammalian homologs 30 . Hence, selectivity may be achieved when targeting PfPK7 30,31 especially in this absence of a human homolog. In the structure 2pml it is complexed with adenosine 5′-(beta,gamma-Imino)triphosphate, an ATP analog. On the other hand, terazosin is a quinazoline, a selective alpha1-antagonist used for the treatment of symptoms of benign prostatic hyperplasia (BPH). It binds to 2pml interacting with ASN35, ASP123, ASP190, ILE42, LEU101, LEU179, LEU34, LYS55, SER189, TYR117 ( Fig. 3B and Supplementary Figure S38b). Interactions are dominated by hydrophobic contacts with only one hydrogen bond observed to ARG32. Terazosin presents similar interactions to known inhibitors that bind to this binding site 30 . However, the two compounds are different in their binding modes. Indeed, terazosin extends toward a more superficial area of the pocket while having a moiety fitting more deeply in the binding site 30 . The ATP analog lies in the pocket, though not in the deepest part, crossing terazosin in an overlay of the two. Also, the two structures are significantly different (Tanimoto similarity 0.23) and terazosin does not show significant similarity with available inhibitors (Target ID: CHEMBL6169) the highest being 0.50 for CHEMBL602580. The two molecules share a common long chain connected to a benzene ring. www.nature.com/scientificreports/ Thioredoxin reductase (PfTrxR, PDB ID: 4j56)-Prazosin (DB00457) 4j56 is a thioredoxin reductase-thioredoxin structure complexed with its substrate and the prosthetic group flavin adenine dinucleotide (FAD). The enzyme is essential for Plasmodium falciparum and is involved in redox homeostasis 32 . It was identified as a putative liver stage target 33 . The compound prazosin is a selective α-1-adrenergic receptor antagonist used to treat hypertension. In this case it binds to the FAD binding site in a completely buried pocket. It interacts with some water molecules, ALA191, ALA369, ASP357, CYS88 CYS93, GLY52, LYS96, PRO51, SER212, THR87, VAL233 ( Fig. 3C and Supplementary Figure S38c). CYS93, CYS88 form the redox centers for the protein's function 32 . Prazosin also forms hydrogen bonds with LYS96 and ASP357. The FAD binding pocket is large with a mitigated polar character, in which prazosin occupies a large portion. Its scaffold is characterized by an aromatic quinazoline similar to the quinoxaline found in the Plasmodium thioredoxin reductase (CHEMBL4547) inhibitor (CHEMBL380953). The two rings present numerous hydrophobic interactions. Prazosin has a high lipophilic efficiency (9), a molecular weight of 383 and BEI of 29. Further it has been found to be inactive againstagainst a mammalian thioredoxin reductase (PubChem 34 Bioassay ID 488,772, 488,773, 588,453), a good indication of its selectivity profile.
Calcium-dependent protein kinase 2 (PfCDPK2, PDB ID: 4mvf)-Abiraterone (DB05812) 4mvf is the Structure of Plasmodium falciparum CDPK2 complexed with an inhibitor, staurosporine. Absent from vertebrates, PfCDPK2 is an interesting target for antimalarial therapy 35 . The compound abiraterone is a steroid and an innovative drug that offers clinical benefit to patients with hormone refractory prostate cancer. Abiraterone binds to 4mvf in a trench-like hydrophobic site and interacts with ALA99, ASP213, CYS149, ILE212, LEU199, LYS101, MET146, VAL130, VAL86 ( Fig. 3D and Supplementary Figure S38d). The compound presents a hydrophobic character (low PSA: 33.12 Å 2 with high logP: 5.39) with interactions dominated by alkyl contacts. A polar contact does present with THR82 on a more exposed area of the binding site. It does not share significant similarity with the enzyme (Target ID CHEMBL1908387) inhibitors, the highest being 0.5 for CHEMBL602580.
In conclusion, identified hits in docking interacted with their respective targets similarly to co-crystallized ligand. This is also supported by their high Grscores (> 0.58). Regarding their structures, most hits presented different scaffolds compared to the respective known inhibitors of the targets. Except for SEI, all ligand efficiencies were in good ranges for drug discovery.
Complex stability during MD simulations. All-atom MD simulations for the 25 different protein-ligand systems were performed for 20 ns to provide a profile of their dynamic behavior. MD is an effective means to assess ligand binding mode stability. To investigate both the stability of the apo proteins and complexes, the time evolution of the root mean square deviation (RMSD) and the radius of gyration (Rg) were monitored.
Rg is a measure related to the overall compactness of the protein which can thus assess structure instability, especially when unfolding 36 . An increasing Rg during simulation indicates that a structure is getting less compact and vice-versa. www.nature.com/scientificreports/ The small values of the standard deviations indicate the small variation of the Rg values during simulation. Indeed, the highest values of the standard deviation were 0.05 nm for the complexes (2yoh_DB01001) and 0.02 nm for the apo proteins (4mvf_DB05812). Given that these extreme values are still low, we can conclude that both apo and complex structures maintained a constant level of compactness during simulations, indicating their stability.
Further, we compared Rg value for each system in its complex and apo forms. Because small molecules binding to a protein may induce conformation change resulting in a significant change in their Rg 37,38 . The absolute values of the differences (apo -complex) of the mean values of Rg ranged from 0.1 nm (1nhg_DB01171) to 0.04 nm (2pmn_DB00555). A Z-test of the difference of the Rg means (apo-complex) was conducted to assess the differences in the Rg distributions (Supplementary Table S1). All systems showed a p-value < 0.05 except for the complex 4mvf_DB05812, providing evidence that the mean is not equal in the two Rg distributions (apo and complex) for this system. Most systems, 19 out of 25, showed a small increase in their Rg upon ligand binding. However, the highest value in the mean difference is 0.1 nm (1 Å) for the complex 1nhg_DB01171. Hence, ligand binding did not cause significant change in the protein structures. This is also supported by the RMSD analysis. Figure 5 shows the means of RMSD values for apo and complex proteins. RMSD is a similarity metric used to assess structure variation during MD simulation. A value of 3 Ǻ is often put forward as the similarity threshold 39 . RMSD values were in the range of 0.11 nm to 0.47 nm for the apo protein and in the range of 0.20 nm to 0.50 nm for the complexes. The standard deviations ranged from 0.01 nm to 0.15 nm for the apo proteins and from 0.01 nm to 0.06 nm for the complexes. The highest RMSD values were 0.38 nm (2yoh) and 0.5 nm (2yoh_DB01001) for apo proteins and complexes respectively. Hence, the system 2yoh_DB01001 in both apo and complex forms has RMSD values greater than 3 Ǻ. These values are beyond the acceptable threshold for structural similarity despite the low standard deviation values. Indeed, the standard deviations of the RMSD values were 0.035 nm and 0.066 nm for apo and complex forms of 2yoh.
More, we looked at the effects of ligand binding on protein structures variation. In eight out of the 25 systems, the complexes showed decreased structural variation over the apo protein, showing reduced RMSD values for the complex than in the apo case. In the rest, the ligand binding induces structural variation. The highest absolute value of RMSD difference were 0.11 nm for the complex 2yoh_DB01001 followed by 0.05 nm for the complex 3llt_DB06268. These sole differences in protein RMSD may not be significant to affect proteins' functions.
To assess ligand stability, the time evolution of the number of hydrogen bond in each complex during simulation was analysed (see Fig. 6). In most systems, ligands maintained the initial number of hydrogen bonds. Darifenacin (DB00496) showed the lowest number of hydrogen bonds (0.22) on average. Its binding to the histo-aspartic protease (PDB ID: 3fnu) is driven by hydrophobic contacts, and as a result, the number of hydrogen bonds is reduced when compared to other systems. Similarly, the complex plasmepsin 2 (2igx)-fingolimod (DB08868) showed an average of 0.39 hydrogen bonds. Fingolimod forms hydrophobic contacts with only one hydrogen bond to ASP121 in the binding site in the docked pose. However, in this system (2igx), it was observed that there was an increase in the number of hydrogen bonds at around 10 ns. Abacavir (DB01048) showed the highest number of hydrogen bonds to 3uj8 compared to any other system. In the docked pose for this compound, a network of five hydrogen bonds was observed. This also correlates with the observed interaction energy for this complex which was the 2nd most favourable of the protein-ligand interaction energies (-261.05 kcal/mol) of all systems (see Supplementary Fig. S3 and Table S2). www.nature.com/scientificreports/ Ligand stability was further assessed using ligand RMSD (see Supplementary Fig. S1), by visualization, with the protein-ligand interaction energy (see Supplementary Fig. S3) and the center of mass (COM) distance between the ligand and the protein (see Supplementary Fig. S2). Ligand RMSD fitted to the protein backbone was more sensitive to ligand movements than the ligand RMSD fitted to itself. Indeed, they showed higher values than the ligand RMSD fitted to itself (see Supplementary Fig. S1). However, the high values of ligand RMSD fitted to the protein backbone do not correlate with ligand dissociation as visualized in some systems. Ligand RMSD fitted to the protein captures the movement of the ligand relative to the protein 40 . Thus, the RMSD fitted to the protein backbone and the COM distances were therefore used. COM distances varied from 0.61 nm (2pt6_DB00489) to 2.33 nm (4j56_DB00457) (see Supplementary Fig. S2). Differences in the values are related to the size of the systems. The COM distances are better assessed using their standard deviations, a complex with dissociating ligand will tend to show an increasing COM distance, resulting in a high standard deviation. The highest observed standard deviation was 0.10 nm for the complex 1rl4_DB08877, the lowest value being 0.02 nm for the complex 3o8a_DB01203. Hence, ligands' stability is also supported by the low COM distances variations. Regarding interaction energy, all systems showed a negative total protein-ligand interaction energy in a range of −140.28 kcal/mol to −331.42 kcal/mol. The system with prazosin (4j56_DB00457) showed the most favourable interaction energy profile with an average value of −331.42 kcal/mol (see Supplementary Table 1 and Fig. S3). The total protein-ligand interaction energy also supports ligand non-dissociation from the protein. The negative energy values indicate favorable interaction with protein residues.
Overall, MD simulations showed stable complexes supported by structures' RMSD and radius of gyration. Ligand binding did not significantly change the apo proteins. Moreover, hydrogen bonds and interaction energies showed favorable protein-ligand interactions.
The approach here combines binding affinity, molecular interactions, statistical transformations (binding energies standardization, rank transformation and complex ranking) and compound properties through ligand efficiency metrics. Starting with a set of FDA approved compounds, it may be tempting to simply ignore their pharmacological profiles, as approved drugs may imply at least an acceptable drug likeness profile. However, integrating pharmacological properties certainly directs the prioritization of hits and contributes towards fulfilling the multiple objectives-optimization nature of drug discovery. More efficiency indices are highly recommended for evaluation of high-quality hits in medicinal chemistry 23,[41][42][43] . Compounds with low potency may compensate with better molecular and pharmacokinetic properties for improved bioavailability.
Complementary to binding energy, the knowledge-based and topological scoring function GRIM may be used. However, it is noteworthy that it is biased toward the molecular interaction pattern of the co-crystallized ligand. Yet, different inhibitors may present different interaction patterns. This is certainly the case for allosteric inhibitors. However, protein-ligand molecular recognition is often governed by key conserved molecular interactions 44 as with enzyme substrates. Co-crystallized ligands used as reference ligands in this study to compute the Grscore may still present these conserved interactions. www.nature.com/scientificreports/ Antiplasmodial and human cytotoxicity assays. Fingolimod and abiraterone produced IC 50 values in the single-digit μM range, 2.21 μM and 3.37 μM respectively, against cultured P. falciparum (see Fig. 7). Hence these may be promising for further optimization. A total of four compounds showed antiplasmodial activity (see Fig. 7 and Supplementary Figure S37). Prazosin and terazosin showed IC 50 www.nature.com/scientificreports/ cium-dependent protein kinase 2 for fingolimod, terazosin, prazosin and abiraterone respectively (see Table 1). Thioredoxin reductase 2 is a putative drug target for liver-stage malaria hits 33 . In a pre-screen at a fixed concentration of 20 μM, salbutamol, lamotrigine and moclobemide decreased cell viability to 71.83%, 72.23%, 61.24% respectively, which was not considered as sufficient to warrant their inclusion as active compounds (see Supplementary Fig. S33). Moclobemide 45,46 and salbutamol 47 showed antiplasmodial activity in previous assays [45][46][47] . Prazosin and terazosin are two close analogs (Tanimoto similarity 0.7), differing by a furan ring on prazosin while terazosin has a tetrahydrofuran (see Table 1). The two compounds were predicted on two different targets. Terazosin was predicted to bind on PfPK7 while prazosin was on thioredoxin reductase 2 (see Table 1). This may be related to the hit selection process with the complex ranking which selects a hit for each target in the array of protein. Given their structural analogy, both compounds may be binding on the two targets. Hence they may be potential multitarget compounds for these targets. It also possible that one of the targets is the correct one. Despite their structural similarity, prazosin was about two times more active than terazosin. Indeed, prazosin showed an IC50 of 16.67 uM while terazosin had an IC50 of 34.72 uM (see Table 1). A similar difference in the cell viability assay was also observed (see Supplementary Figure S36).
In the human cytotoxicity assays using human cervix adenocarcinoma (HeLa) cells, only fingolimod had a significant cytotoxic effect, reducing HeLa cells viability to below 50% (1.98%) at 20 µM (see Supplementary Figs. S39 and S40). It further showed an IC 50 of 1.63 µM. Fingolimod is an immunosuppressant targeting the sphingosine-1-phosphate receptor on T cell membranes 48 , which may explain its activity on HeLa cells. Interestingly, the compound is also being investigated for COVID-19 treatment 49 .
Regarding the screening pipeline accuracy, the hit rate was 25%. Indeed, 4 hits of the 16 tested hits were active. A 5% hit rate may be considered as successful 50 with a range between 1 and 40% in prospective screening 51 . It is difficult to measure the contribution of factor to the pipeline GRIM, paired ranking, ligand efficiency metrics. The hit rate is promising given the screening context: protein array, use of only cost-effective methods (ligand efficiency, paired ranking system, rescoring with GRIM, short MD simulation). Future improvements to the workflow may be to use a larger screening library, longer simulation and or binding free energy methods.
Materials and methods
Target and ligand preparation. The set of Plasmodium falciparum structures in the Screening Protein Data Bank (sc-PDB) (release v.2017 frozen PDB data on 2016-11) was downloaded and only one representative of duplicates (protein having same UniProt ID) from this set was retained. For MD simulation purposes, sc-PDB structures having the least number of missing residues were prioritized. The list of selected structures (PDB ID and co-crystallized ligands) is available in Supplementary Table S3. Sc-PDB structures are designed to suit molecular modelling purposes 52 . Their pdbqt formats were generated using docking using AutoDock Tools 53 . Cofactors were retained in their respective structures. The subset of orally active, Food and Drug Administration (FDA) approved small molecules, compliant with Lipinski's rule of five, and not affecting and not targeting Plasmodium spp was downloaded from DrugBank 21 (version 5.1.2, released 2018- [12][13][14][15][16][17][18][19][20]. This selection was made using DrugBank advanced search menu. For ligand analogs (having a Tanimoto coefficient of similarity greater or equal to 0.8), a single representative was used, chosen according to the best Quantitative estimate of druggability (QED) 54 . Ligands having non-valid AutoDock atom types were removed, leaving a final set containing 796 compounds. These ligands were prepared for docking using AutoDock Tools 53 and their molecular properties were calculated using RDKit (version 2018.09.1) 55 . cLogP values were calculated using Crippen's approach available in the RDKit package 55,56 . Docking parameters. An alternate version of Autodock Vina 57 , QuickVina-W 19 was used in a blind docking experiment. QuickVina-W has been reported to achieve an average acceleration factor of 3.60 compared to Vina, without losing accuracy in terms of pose and affinity predictions. The scoring function is identical to that of AutoDock Vina 58 but the search algorithms are more efficient making QuickVina-W ideal for blind docking 19 . Exhaustiveness was scaled to each target box size using a reference value of 24 for a box size of 303 angstroms (24 is three times the default Autodock Vina exhaustiveness) and ten poses were predicted in each docking. A parallelization scheme with 3 CPU (central processing unit) per docking calculation (internal parallelization) and 8 jobs per computer node (each node having 24 cores) (external parallelization) was used for optimal screening performance as reported in the literature 59 . Co-crystallized ligands were used to validate the docking process. An RMSD value between docked and co-crystallized ligands up to 2.00 Å was considered as good poses as is often reported in posing assessments 11,60 . RMSD values were computed using GROMACS 61 version 2016 without least-squares fitting of the structures.
Scoring functions. QuickVina-W and Graph Interaction Matching (GRIM) (belonging to two classes of scoring functions: topological and energetical scoring functions) were used. GRIM was used to re-score the poses generated by QuickVina-W. GRIM is a topological, knowledge-based scoring function that scores interactions pattern similarity between the docked ligand and the co-crystallized one (reference ligand used for comparison). The tool provides a Grscore, ranging from 0.0 to 1.5 and a pose with a Grscore greater than 0.59 has a significant similarity of interaction pattern with the reference ligand 26 . From the D3R Grand Challenge 2, on a dataset of 102 FXR agonists, posing with HYDE 62 and scoring with GRIM-1 gave a Kendall's τ ranking coefficient of 0.442, the third most accurate ranking 63 . Standardization. Binding affinities may differ significantly across different proteins due to peculiarities of their active sites (volume, depth, hydrophobic character etc.). The so-called interprotein scoring noise has been illustrated in different studies and score normalization strategies have been proposed to eliminate the scoring www.nature.com/scientificreports/ bias in reverse docking [64][65][66][67] . A similar phenomenon has been observed also for ligands which tend to show higher affinity with the increase of molecular weight, leading to false positives in docking. Normalizing this bias has been shown to improve ligand affinity ranking in virtual screening 20,66,68 . Here we used the z-score for standardization, subtracting the mean and dividing by the standard deviation (Eq. 1). For the resulting scores from screening, this was applied to each column (series of all ligand docking scores on each particular protein) and then on each row (series of a single ligand's score across the set of proteins) to obtain the z-score as implemented in the python package SciPy 68 .
The standardization scheme was independently applied to the set of scores on each protein, centring the docking scores around a mean of 0 with a standard deviation of 1. This reduces the noise caused by the differences in binding pockets and scores for different ligands on different proteins can thus be compared. It is noteworthy that of two distribution coefficients available, LogD is more accurate for charged compounds than the calculated partition coefficients (log P) 69 . However, LogP was used for simplicity since logD requires determining pKa (dissociation constant), and this calculation was not available in RDKit.
A combined metric of SEI and BEI which can be derived from the 2D efficiency plane 42 is simply the radial coordinate which corresponds to √ SEI 2 + BEI 2 . Binding energy is implicitly included in LipE, SEI and BEI through the pIC 50 .
Rank transformation and complex ranking. LipE, SEI, BEI and the Grscore were combined using the rank transformation after score standardization, transforming each standardized score to its rank. For equal scores, a rank that is the average of the ranks of those scores is taken. Complexes (protein-ligand) were ranked using the complex ranking scheme 50 . A complex rank is defined as the sum of the protein rank and ligand rank and calculated using Eq. (7).
The ligand rank relative to a protein, is its rank compared to all other ligands. In a similar way, the protein rank relative to the ligand is the rank of the protein compared to all other proteins. Thus, a complex rank is simply the sum of the ligand and protein ranks. This allows revealing protein-ligand having high specificity for each other (top-ranked complexes) contributing to filter out false positives. Top-ranked complexes will have low rank values (the lowest rank value being 2 and the highest 832) 50 .
Molecular Dynamics. Molecular dynamics (MD) simulation on protein-ligand complexes was conducted as the final screening step to assess our complexes stability and to filter out some false positives. Proteins having missing residues in structures were first modelled using Prime version 5.4 (r012) (Schrodinger2018-4) 70 . Metal ions (MG in 1d5c, MG 1p9b, MN in 2pml and MG in 3fi8) were not included in the simulations. All systems were simulated during 20 ns in a dodecahedron box with a distance between the solute and the box set to 1.0 nm. The tip3p water model was used with a concentration of 0.15 M (Na + (sodium) and Cl-(chloride) ions). Systems' energies were minimized using the steepest descent method with a maximum force set at < 1,000.0 kJ/ www.nature.com/scientificreports/ mol/nm with a maximum number of steps of 50,000 followed by equilibration at 300 K and 1 atm with 50 ps of MD simulation in the isothermal-isobaric ensemble and subsequently in the canonical one. For the Lennard-Jones and the short-range electrostatic interactions, a cutoff of 10 Å was used. For the long-range electrostatic interactions, the smooth particle mesh Ewald method and a fourth-order interpolation scheme were used. Simulations were done using the leap-frog algorithm for integration. Hydrogen mass repartitioning (HMR) was applied to the proteins and ligands: masses of hydrogens bound to heavy atoms were repartitioned allowing an accelerated 4-fs time step. HMR has been shown to accelerate MD simulations without loss of accuracy [71][72][73][74] . Ligand topologies were generated using ACPYPE 75 with their charges obtained from Discovery Studio Visualizer V1.7, also used to analyse protein-ligand interactions. HMR was also applied to the ligand: increasing the hydrogen-mass by a factor of four and subtracting the added mass from the bonded heavy atom as described in GROMACS 76 documentation, thus, conserving the total mass of the system. Simulations were conducted on a remote machine at CHPC (Center for High-Performance Computing) with GROMACS 76 version 2018.2 using the Amber ff99SB-ILDN 77 force field. After simulation, the GROMACS module trjconv was used to correct for periodicity. Analysis consisted of first assessing the stability of the proteins' structures through their root mean square deviation (RMSD) and the radius of gyration (Rg) calculated using the GROMACS 76 package. The stability of the ligand binding pose was measured using the RMSD of the ligand heavy atoms after least-squares fitted to the protein Backbone and also by the interactions between the ligands and the protein through the number of hydrogen bonds and the total protein-ligand interaction energy. Protein rotation and translation were first removed by fitting it to the starting structure. Trajectories were visualized in the Jupyter Notebook 78 using Nglview 79 and the Pytraj package 80 .
Antiplasmodial assay. Antiplasmodial assessment of the compounds against the 3D7 strain of Plasmodium falciparum was carried out as described previously 81 . Briefly, compounds were incubated with cultured parasites at a final concentration of 20 μM for 48 h and percentage parasite viability relative to untreated control parasites determined using the plasmodium lactate dehydrogenase (pLDH) assay 82 . Compounds that decreased parasite viability > 50% in this initial screen were subjected to dose-response assays to determine their IC 50 values. The 48-h incubation followed by the pLDH assay was repeated with threefold serial dilutions of the test compounds and IC 50 values determined by non-linear regression analysis of % parasite viability vs. log[compound] plots.
Human cytotoxicity assay. To assess the overt cytotoxicity of the test compounds, they were incubated at a single concentration of 20 µM or three-fold serial dilutions (100 to 0.0457 μM) in 96-well plates containing HeLa cells (human cervix adenocarcinoma, maintained in a culture medium made of Dulbecco's Modified Eagle's Medium (DMEM) with 5 mM L-glutamine (Lonza), supplemented with 10% fetal bovine serum (FBS) and antibiotics (penicillin/streptomycin/ amphotericin B) at 37 °C in a 5% CO 2 incubator for 24 h. The numbers of cells surviving drug exposure were counted using the resazurin based reagent and resorufin fluorescence quantified (Excitation560/Emission590) in a SpectraMax M3 plate reader (Molecular Devices). Fluorescence readings obtained for the individual wells were converted to % cell viability relative to the average readings obtained from untreated control wells (HeLa cells without test compounds), after subtracting background readings obtained from wells without cells. Plots of % cell viability vs. Log(compound concentration) were used to determine IC 50 values by non-linear regression using GraphPad Prism (v. 5.02) 83,84 .
Conclusions
For malaria elimination, an accelerated drug discovery pipeline within the context of drug resistance is required. In this context, we propose an in-silico strategy using drug repurposing through the screening of a set of Drug-Bank compounds against a set of P. falciparum structures. This workflow combines ligand efficiency indices and molecular interaction similarity and statistical transformations. Hits showed good ligand efficiency indices and have shown stability in MD simulations. Fingolimod, abiraterone, prazosin and terazosin showed IC 50 values against cultured P. falciparum of 2.21 μM, 3.37 μM, 16.67 μM and 34.72 μM respectively. Further investigation of these hits could lead to new antimalarials for prophylaxis, transmission-blocking, efficient cure and disease eradication. | 7,840 | 2021-01-14T00:00:00.000 | [
"Chemistry",
"Biology"
] |
Multinomial Regression with Elastic Net Penalty and Its Grouping Effect in Gene Selection
and Applied Analysis 3 Hence,
Introduction
Support vector machine [1], lasso [2], and their expansions, such as the hybrid huberized support vector machine [3], the doubly regularized support vector machine [4], the 1-norm support vector machine [5], the sparse logistic regression [6], the elastic net [7], and the improved elastic net [8], have been successfully applied to the binary classification problems of microarray data.However, the aforementioned binary classification methods cannot be applied to the multiclass classification easily.Hence, the multiclass classification problems are the difficult issues in microarray classification [9][10][11].
Besides improving the accuracy, another challenge for the multiclass classification problem of microarray data is how to select the key genes [9][10][11][12][13][14][15].By solving an optimization formula, a new multicategory support vector machine was proposed in [9].It can be successfully used to microarray classification [9].However, this optimization model needs to select genes using the additional methods.To automatically select genes during performing the multiclass classification, new optimization models [12][13][14], such as the 1 norm multiclass support vector machine in [12], the multicategory support vector machine with sup norm regularization in [13], and the huberized multiclass support vector machine in [14], were developed.
Note that the logistic loss function not only has good statistical significance but also is second order differentiable.Hence, the regularized logistic regression optimization models have been successfully applied to binary classification problem [15][16][17][18][19]. Multinomial regression can be obtained when applying the logistic regression to the multiclass classification problem.The emergence of the sparse multinomial regression provides a reasonable application to the multiclass classification of microarray data that featured with identifying important genes [20][21][22].By using Bayesian 1 regularization, the sparse multinomial regression model was proposed in [20].By adopting a data augmentation strategy with Gaussian latent variables, the variational Bayesian multinomial probit model which can reduce the prediction error was presented in [21].By using the elastic net penalty, the regularized multinomial regression model was developed in [22].It can be applied to the multiple sequence alignment of protein related to mutation.Although the above sparse multinomial models achieved good prediction results on the real data, all of them failed to select genes (or variables) in groups.
For the multiclass classification of the microarray data, this paper combined the multinomial likelihood loss function having explicit probability meanings [23] with multiclass elastic net penalty selecting genes in groups [14], proposed a multinomial regression with elastic net penalty, and proved
Multinomial Regression with the Multiclass Elastic Net
Penalty.Following the idea of sparse multinomial regression [20][21][22], we fit the above class-conditional probability model by the regularized multinomial likelihood.Let ( ) = Pr( = | ).It is easily obtained that Abstract and Applied Analysis 3 Hence, Let Then ( 13) can be rewritten as Note that Hence, the multinomial likelihood loss function can be defined as In order to improve the performance of gene selection, the following elastic net penalty for the multiclass classification problem was proposed in [14] By combing the multiclass elastic net penalty (18) with the multinomial likelihood loss function (17), we propose the following multinomial regression model with the elastic net penalty: arg min where 1 , 2 represent the regularization parameter.Note that ( , ) = (0, ⃗ 0).Hence, the optimization problem ( 19) can be simplified as arg min 3.2.Grouping Effect.For the microarray classification, it is very important to identify the related gene in groups.In the section, we will prove that the multinomial regression with elastic net penalty can encourage a grouping effect in gene selection.To this end, we must first prove the inequality shown in Theorem 1.
Theorem 1.Let ( b, ŵ) be the solution of the optimization problem (19) or (20).For any new parameter pairs which are selected as ( that is, Note that Hence, from ( 24) and (25), we can get where Equation ( 26) is equivalent to the following inequality: Hence, inequality (21) holds.This completes the proof.
Using the results in Theorem 1, we prove that the multinomial regression with elastic net penalty (19) can encourage a grouping effect.Theorem 2. Give the training data set ( 1 , 1 ), ( 2 , 2 ), . . ., ( , ) and assume that the matrix and vector satisfy (1).If the pairs ( b, ŵ) are the optimal solution of the multinomial regression with elastic net penalty (19), then the following inequality holds, where = () () = ∑ =1 , ŵ() is the th column of parameter matrix ŵ, and ŵ() is the th column of parameter matrix ŵ.
Proof.First of all, we construct the new parameter pairs Let Since the pairs ( b, ŵ) are the optimal solution of the multinomial regression with elastic net penalty (19), it can be easily obtained that From ( 33) and ( 21) and the definition of the parameter pairs ( * , * ), we have ( From (37), it can be easily obtained that where = () () = ∑ =1 .This completes the proof.
According to the inequality shown in Theorem 2, the multinomial regression with elastic net penalty can assign the same parameter vectors (i.e., ŵ() = ŵ() ) to the high correlated predictors () , () (i.e., = 1).This means that the multinomial regression with elastic net penalty can select genes in groups according to their correlation.According to the technical term in [14] Equation ( 40) can be easily solved by using the R package "glmnet" which is publicly available.
Conclusion
By combining the multinomial likelihood loss function having explicit probability meanings with the multiclass elastic net penalty selecting genes in groups, the multinomial regression with elastic net penalty for the multiclass classification problem of microarray data was proposed in this paper.The proposed multinomial regression is proved to encourage a grouping effect in gene selection.In the next work, we will apply this optimization model to the real microarray data and verify the specific biological significance.
ŵ , ) (19)is performance is called grouping effect in gene selection for multiclass classification.Particularly, for the binary classification, that is, = 2, inequality (29) becomes Microarray is the typical small , large problem.Because the number of the genes in microarray data is very large, it will result in the curse of dimensionality to solve the proposed multinomial regression.To improve the solving speed, Friedman et al. proposed the pairwise coordinate decent algorithm which takes advantage of the sparse property of characteristic.Therefore, we choose the pairwise coordinate decent algorithm to solve the multinomial regression with elastic net penalty.To this end, we convert(19)into the following form: ∑ =1 [ ∑ =1 ( + ) − log ∑ =1 ( + ) ] | 1,541.4 | 2014-03-31T00:00:00.000 | [
"Mathematics"
] |
Density Fluctuations across the Superfluid-Supersolid Phase Transition in a Dipolar Quantum Gas
Phase transitions share the universal feature of enhanced fluctuations near the transition point. Here we show that density fluctuations reveal how a Bose-Einstein condensate of dipolar atoms spontaneously breaks its translation symmetry and enters the supersolid state of matter -- a phase that combines superfluidity with crystalline order. We report on the first direct in situ measurement of density fluctuations across the superfluid-supersolid phase transition. This allows us to introduce a general and straightforward way to extract the static structure factor, estimate the spectrum of elementary excitations and image the dominant fluctuation patterns. We observe a strong response in the static structure factor and infer a distinct roton minimum in the dispersion relation. Furthermore, we show that the characteristic fluctuations correspond to elementary excitations such as the roton modes, which have been theoretically predicted to be dominant at the quantum critical point, and that the supersolid state supports both superfluid as well as crystal phonons.
Fluctuations play a central role in quantum manybody systems. They connect the response and correlation of the system to its excitation spectrum, instabilities, phase transitions and thermodynamic properties. A quantity that is fundamental to the theoretical description of fluctuations in many-body systems is the structure factor, which can be formulated as the Fourier transform of the density-density correlation function [1,2]. Superfluid helium is an important example of a quantum manybody state, where the determination of the structure factor was crucial to understand its elementary excitations and therefore improved our understanding of superfluidity [2][3][4][5]. In the case of quantum gases, the structure factor of Bose-Einstein condensates (BECs) and superfluid Fermi gases is often investigated by Bragg spectroscopy [6][7][8]. In contact-interacting BECs this enabled the study of the spectrum and collective modes [9]. In dipolar BECs, it has provided indications of the roton minimum in the dispersion relation [10] analogous to the neutron and X-ray scattering data for helium [2,11]. A different approach is to look at the condensate density directly in situ, which provides access to finite temperature and quantum fluctuations [12][13][14][15][16][17][18], and enables to extract the static structure factor simultaneously at all momenta.
The roton minimum both in helium and dipolar quantum gases is accompanied by a characteristic peak in the static structure factor close to the roton momentum [2,[19][20][21]. Unlike in helium however, the contact interactions in dipolar quantum gases are tunable [22]. This tunability allows for precise control of the dispersion relation and the controllable softening of the roton minimum. The roton modes associated to this minimum manifest as density modulations on top of the ground-state density distribution [23,24]. An instability in the ground state can appear once the roton minimum is sufficiently soft. Since these modes represent precursors to solidification, dipolar BECs have long been proposed as candidates for the elusive supersolid state of matter, which simultaneously combines crystalline order with superfluidity [25].
Recently, a dipolar supersolid state of matter has been realized through a phase transition from a BEC to an array of coherent quantum droplets [26][27][28][29][30][31][32] by precisely tuning the contact interaction strength. Close to the transition point these droplets are immersed in a superfluid background, and by lowering the scattering length further the superfluid fraction decreases towards a regime of isolated droplets. As the superfluid-supersolid phase transition is governed by intrinsic interactions it is of interest to study the fluctuations that emerge across the transition [21,33,34], facilitate structure formation [15,20,[35][36][37] and give rise to the supersolid state. The low-lying collective modes were shown to be particularly interesting regarding the aspect of supersolidity in this system [29][30][31]38]. Those modes are facilitated by a continuous superfluidity across the droplet array, despite the translational symmetry breaking. Schematic of the dispersion relationhω(k) and associated static structure factor S(k) of an elongated and strongly dipolar BEC [21]. A decrease in scattering length as causes a roton minimum to emerge in the excitation spectrum, associated with a characteristic peak in the static structure factor. The roton momentum krot is indicated where the roton minimum drops near zero. (c) For a given scattering length as we observe a large number of in situ densities nj(r), calculate their mean n(r) and the density fluctuations δnj(r) = nj(r) − n(r) as the deviation of the in situ images from their mean. Investigating the mean power spectrum of the fluctuations |δn(k)| 2 for different scattering lengths across the transition allows us to directly observe the static structure factor as the system passes from BEC to supersolid to isolated droplet states. The colored arrow on top indicates the direction towards lower scattering lengths, passing from BEC to supersolid to isolated droplet regimes. The colormap used for the images shows the normalized amplitude of densities, density fluctuations and mean power spectra, respectively.
Here we provide the first direct in situ observation of density fluctuations across the superfluid-supersolid phase transition in a trapped dipolar quantum gas. By analyzing hundreds of in situ images of the atomic cloud around the phase transition point we spatially resolve characteristic fluctuation patterns that arise across the transition. From the observed fluctuations we determine the static structure factor and estimate the spectrum of elementary excitations. We observe a strong peak in the static structure and an associated roton minimum in the dispersion relation. Moreover, we experimentally determine that the dominant fluctuations at the transition point correspond to two degenerate roton modes [29,38] and that the supersolid state supports both superfluid as well as crystal phonons in a narrow range of scattering lengths. Our study combines the fluctuations with the excitation spectrum of a dipolar supersolid and highlights its bipartite nature between the superfluid BEC and the crystalline isolated droplets.
The rotonic dispersion relation of strongly dipolar BECs [10,23,24] is schematically shown in Fig. 1(a). The system becomes more susceptible to density fluctuations as the roton minimum softens. These density fluctuations are associated with a characteristic peak [2,[39][40][41][42] in the static structure factor, illustrated in Fig. 1(b). This can be understood by considering the general Feynman-Bijl formula S(k) =h 2 k 2 /2mε(k) [2,3], connecting the static structure factor S(k) to the excitation spectrum (k) at zero temperature. As the energy of the roton modes drops near zero, the density fluctuations and thus the structure factor increase dramatically. Eventually, the roton minimum has sufficiently softened in order for the system to enter the roton instability. This roton instability triggers the phase transition to a dipolar supersolid and arrays of isolated quantum droplets.
To study the static structure factor experimentally, we prepare a dipolar BEC with typically 40 × 10 3 162 Dy atoms at temperatures of approximately 20 nK in a cigar-shaped trap with trapping frequencies ω/2π = [30 (1), 89(2), 108(2)] Hz and a magnetic field oriented alongŷ (see Methods). The scattering length is tuned via a double Feshbach resonance [43] to final values between 90 a 0 and 105 a 0 by linearly ramping the magnetic field in 30 ms. We wait for 15 ms to allow for the system to equilibrate and then the atoms are imaged using phase-contrast imaging along theẑ-axis with a resolution of ∼ 1 µm. We find either a BEC, a supersolid phase (SSP) or isolated droplets (ID) for large (a s 105 a 0 ), intermediate (a s 98.4 a 0 ) and small (a s 90 a 0 ) scattering lenghts, respectively [29]. We accumulate enough averages for a statistical evaluation of the structure factor by repeating the experiment around 200 times for every scattering length.
We obtain S(k) experimentally by analyzing the in situ images as illustrated in Fig. 1(c). For every scattering length we center the in situ densities n j (r) to their center of mass and normalize them to the mean atom number. With the former step we remove contributions of the dipole center of mass motion [29] and with the latter we correct for shot to shot total atom number fluctuations ( [13]; Methods) that would otherwise give contributions to S(k) near k = 0. From these in situ images we obtain the mean image n(r) and the density fluctuations δn j (r) = n j (r)− n(r) as the deviation of the in situ images from their mean. With the Fourier transform of the density fluctuation δn j (k) = d 3 rδn j (r)e ik·r we obtain the mean power spectrum of the fluctuations |δn(k)| 2 . In homogeneous systems, the static structure factor can be directly written as S(k) = |δn(k)| 2 /N , where N is the atom number [2,15,17]. In practice, the interpretation is less straightforward [18,[44][45][46] since the expectation values of the density n(r) are spatially dependent due to the finite size and the translational symmetry breaking in the supersolid and droplet regime. Nonetheless S(k) gives insight into the strength of fluctuations [13,18,[47][48][49] and is a quantity that can be continuously evaluated from the BEC via the supersolid to the isolated droplet regime. We note that our evaluation is limited to intermediate momenta between k min /2π 0.08 µm −1 and k max /2π 1 µm −1 due to the finite system size and the finite resolution of our imaging system, respectively [15,18]. We extract the static structure factor S(k x , k y , k z = 0) cut along the k z = 0 plane according to the Fourier slice theorem since the atomic densities are integrated along the line of sight during the imaging process ( [50]; Methods). Due to the cigar-shaped trap geometry, the fluctuations predominantly show structure alongk x (see Fig. 1(c)), which allows us to extract a cut of the mean power spectrum at k y = 0 to obtain the 1D structure factor S(k x ).
Using the above described analysis, we obtain S(k x ) across the phase transition as shown in Fig. 2. In the BEC regime at a s 104 a 0 , we find the structure factor to be relatively flat with the exception of a small peak at around k x /2π 0.25 µm −1 . This peak is an indication that far in the BEC regime roton modes can be excited [29] and consequently that the spectrum features modifications from a purely contact interacting quantum gas.
As the scattering length is reduced, the position and amplitude of this characteristic peak are observed to increase continuously towards the phase transition point (a s 98.4 a 0 ). A continuously growing peak ampli- tude of the structure factor signals enhanced fluctuations, consistent with a softening roton minimum towards the transition point. The structure factor reaches its maximum value as a function of the scattering length at the transition point and is located at the roton momentum k rot /2π 0.29 µm −1 . This value is mainly given by the harmonic oscillator length l y along the magnetic field direction [52]. At the transition point, the enhanced fluctuations provide the roton instability and lead to the formation of supersolid quantum droplets, whose spacing d 3 µm smoothly matches the roton wavelength 2π/k rot [27,29]. The observed increase of the roton momentum towards smaller scattering lengths can be understood in a variational approach of elongated dipolar condenstates [53]. Around the transition point, the in situ densities we observe from shot to shot show droplets immersed in an overall BEC background, constituting the supersolid state of matter [26-32, 38, 54]. Here the density fluctuation patterns become most clear and show spatial oscillations (see Fig. 1(c), middle column). These characteristic fluctuations can directly be attributed to the symmetric and antisymmetric roton modes we found in our previous work [38].
After crossing the phase transition (a s < ∼ 98.4 a 0 ) the Experimentally determined excitation energy ω(kx) according to equation (1) assuming a temperature of 20 nK, for scattering lengths above the phase transition point. A clear roton minimum at finite momentum is observed that softens towards the transition point.
peak amplitude of the structure factor decreases and a shoulder develops at smaller momenta. This shoulder increases further for smaller scattering lengths and eventually leads to a double-peak structure as seen in Fig. 2. The origin of this rising double peak can be understood by means of a principle component analysis.
The maximum of the structure factor for different scattering lengths acts as a measure of the density fluctuation strength across the superfluid to supersolid phase transition. It quickly increases from the BEC side when approaching the phase transition indicating a significant enhancement of the characteristic fluctuations close to the phase transition point. We see that the increase from the BEC side towards the phase transition is sharper than the decrease on the droplet side. The magnitude of the structure factor (S max = 260) can mainly be explained by thermal enhancement of the participating low-energy modes.
To estimate the dispersion relation based on the experimentally determined structure factor we use the relation which extends the Feynman-Bijl formula, S(k) =h 2 k 2 /2mε(k), valid at T = 0, to nonzero temperatures T [2,45]. At nonzero temperatures and small excitation energies the contribution of low-lying modes to the structure factor can easily be enhanced by several orders of magnitude. Close to the transition point where the roton gap ∆ rot is small compared to the temperature of the system (h∆ rot /k B T < ∼ 1), equation (1) can be expanded and the peak of the static structure factor scales as S max ∼ T /∆ 2 rot [19]. Note that equation (1) is an excellent description of the structure factor for a weakly interacting superfluid, where the excitation spectrum is dominated by a single mode, and where the influence of the quantum as well as thermal depletion can be ignored. Although we study a finite system, leading to a discrete excitation spectrum, a continuous approximation to the dispersion relation yields a meaningful estimate for the excitation energies (see Methods). We show the resulting spectrum Fig. 3. To do so we assumed a mean temperature of 20 nK, a conservative approximation to include additional minor heating during the preparation (see Methods). In Fig. 3, one can see a small roton minimum already well above the trap frequency of 30 Hz for a large scattering length. The roton minimum softens and moves towards higher momenta k x as the scattering length is lowered and finally reaches its lowest energy at the phase transition point. After crossing the phase transition point, when the crystalline structure has developed, the excitation spectrum should have a band structure due to the translational symmetry breaking. In this case equation (1) is no longer necessarily justified as several modes contribute to the excitation spectrum, and therefore it is no longer straightforward to extract the excitation spectrum from the measured static structure factor.
To gain a better insight into the modes that dominantly contribute to the fluctuations, we use principal component analysis (PCA) [55] on the density fluctuations for all scattering lengths combined. This modelfree statistical analysis is a general method to extract dominant components or to reduce the dimensionality of a dataset. We study the principal components (PCs) across the phase transition since there is a direct correspondence [56] to the dominant collective excitations obtained with the Bogoliubov-de-Gennes (BdG) formalism [38]. This allows us to identify and compare the most dominant PCs with specific BdG modes and study how their weight behaves across the transition, as shown in Fig. 4 and Fig. 5.
The first principal component is structureless and only represents the global atom number fluctuation [56,57]. The subsequent principal components are shown in Fig. 4(a)-(b). These two components are dominant across the phase transition and represent a periodic spatial pattern. Close to the transition point we can identify these characteristic patterns in many single-shot realizations of the density fluctuations, as shown in the central column of Fig. 1(c). We compare the profiles of these PCs to the antisymmetric and symmetric roton modes from the BdG theory at the transition point in Fig. 4(c)-(d) and find them to be in excellent agreement. These two roton modes are developing into the Goldstone [29] and amplitude (Higgs) [38] modes of the supersolid.
The mean absolute value of the weights for these two PCs are shown in Fig. 4(e) as an indication of their strength across the phase transition. Starting from the BEC side, where they are comparable in strength to other modes, these PCs gain rapidly in strength as the phase transition is approached (see Methods). We note that the maximum of the structure factor behaves similar to the weights of these two PCs as a function of scatter- The gray area indicates the supersolid region previously determined [29]. Error bars indicate the standard error of the mean.
ing length. Leading up to the quantum critical point at a s 98.4 a 0 these two modes have identical weights, in accordance to our previous work [38], in which we showed that the two roton modes remain degenerate while softening towards the phase transition.
Further into the isolated droplet regime the weight of the roton PCs decrease and other PCs become more important because further modes are softening. In Fig. 5 we present the three next higher PCs that correspond to the BEC phonon (a) and the antisymmetric (b) and symmetric (c) crystal phonon, respectively. The quadrupole mode of the BEC has a relatively high weight in the BEC regime and vanishes for small scattering lengths towards the isolated droplet regime. The breathing or lowest phonon modes of the droplet crystal show a clear splitting in the Fourier transform (Fig. 5(d)-(e)) explaining the observed double-peak structure in S(k x ) for low scat- tering lengths. This can be understood as the appearance of the band structure, where excitations are split around the edge of the Brillouin zone. These modes only have an appreciable weight for low scattering length after crossing the phase transition ( Fig. 5(g)). In the experiment the excitation of the crystal breathing mode is further enhanced by the preparation process [27,30]. Note that there is a small region close to the phase transition where both types of modes have a non-vanishing weight. This subtle feature shows the coexistence of both BEC and droplet crystal, which is a prerequisite of the supersolid nature of the phase. One can see that the supersolid state supports both types of excitations -the phonon of the superfluid BEC and the crystalline phonons of the droplets.
In conclusion, we reported the first in situ measurement of the density fluctuations across the superfluid to supersolid phase transition in a dipolar quantum gas. We quantified the fluctuation strength across the transition by the static structure factor S(k x ) using a statistical evaluation of in situ images and found a characteristic peak in S(k x ) that strongly increases towards the phase transition point. We showed that this peak is unambiguously dominated by the low-lying modes of the rotonic dispersion relation. The characteristic fluctuations close to the transition point are stronger compared to the BEC or isolated droplet regime. The large amplitude of the measured static structure factor reveals the important role played by temperature at the phase transition, an aspect which has so far been absent in the discussion of the superfluid-supersolid phase transition. Using principal component analysis we spatially resolved the dominant fluctuations and identified them as two roton modes. Furthermore, we showed that the supersolid supports both superfluid and crystal phonons. Our study provides a promising outlook to extract thermodynamic properties [20] and possibly universal access to the condensate fraction [37] of the supersolid state. Exciting avenues for future work includes using fluctuations as a tool for thermometry of the supersolid state [58], understanding the out of equilibrium dynamics that arise when crossing the phase transition [59,60], and exploring the roles of fluctuations in the Kibble-Zurek mechanism [33,34,61].
COMPETING INTERESTS
The authors declare no competing financial interests.
DATA AVAILABILITY
Correspondence and requests for materials should be addressed to T.P. (email: t.pfau@physik.unistuttgart.de).
A. Experimental protocol
The complete experimental procedure has been described in detail in our previous publications [27,29,62]. After preparing a quasipure BEC of 162 Dy with T 10 nK in a crossed optical dipole trap at 1064 nm, we reshape the trap within 20 ms to its final geometry with trap frequencies of ω/2π = [30(1), 89(2), 108(2)] Hz. The magnetic field is oriented along theŷ-axis and is used to tune the contact interaction strength.
We ramp the magnetic field in a two-step ramp closer to the double Feshbach resonance near 5.1 G to tune the scattering length from its initial background value of a bg = 140(20) a 0 [63][64][65] to a final value between 90 a 0 and 105 a 0 . This corresponds to the droplet and BEC regime, respectively. We expect the preparation scheme to induce some additional heating and thus assume a temperature of 20 nK in our later analysis of S(k x ) and ω(k x ). After 15 ms of free evolution the droplets have formed and equilibrated. We finally probe the atomic system using in situ phase-contrast imaging along the verticalẑ-axis, which is done using a microscope objective featuring a numerical aperture of 0.3. We reach an imaging resolution of 1 µm.
B. Analysis method and principle component analysis
In the following we will describe our analysis procedure. We start with a large set of images that contains 200 averages for each scattering length. We center the images with respect to their center-of-mass to get rid of the otherwise dominating dipole modes. Afterwards, we post-select on atom number for every scattering length and only take images with an atom number that lays in an interval of ±30% around the mean atom number at that scattering length. We have confirmed that changing the tolerance in the post selection does not affect the features of the structure factor. In a next step, we normalize each image to its atom numberñ j = n j /N j and calculate the fluctuations δñ j (r) =ñ j (r) − ñ(r) as the deviation of the normalized imageñ j (r) from the mean image ñ(r) . The structure factor is then given by the power spectrum of these fluctuations multiplied by the mean atom numberN where δñ j (k) = F[δñ j ](k) = d 2 r δñ j (r)e ik·r is the Fourier transform of the normalized fluctuations which were obtained from the line integrated images. According to the Fourier-slice theorem, taking the Fourier transform of two-dimensional integrated atomic densities is connected with the Fourier transform of the full Supplementary Figure 1. Average atom number. Average atom number for the scattering length range in the experiment. Crossing the phase transition from BEC to droplet arrays the increasing density leads to larger three-body losses causing lower atom numbers for smaller scattering lengths. Error bars indicate one standard deviation from the mean.
three-dimensional atomic density distribution. For an arbitrary function f (x, y, z) and its projection p(x, y) = dz f (x, y, z) the Fourier-slice theorem reads As a result, the static structure factor S(k x , k y ) extracted from the integrated densities is in fact a slice through the structure factorS(k x , k y , k z ) one would get if one had access to the full three-dimensional density distribution S(k x , k y ) =S(k x , k y , 0). Further it is worth noticing that normalizing each image to its atom number when comparing the structure factor at different scattering lengths has an effect in the small k regime. This part of the structure factor is highly dependent on the total atom number which is changing across the transition due to higher three-body losses towards the droplet regime (Fig. (1)). Finally we take a cut at k y /2π = 0 to determine the one-dimensional static structure factor S(k x ) we present in Fig. 2 of the main text. We confirmed that averaging within the k y -values that correspond to our imaging resolution is below our statistical error obtained by bootstrapping [51,[66][67][68].
Two natural boundaries arise at small-k and at largek because we are probing a finite system. The small-k cut-off simply comes from the finite size of the atomic cloud on the image. This results in a lowest k-value k min /2π 0.08 µm −1 that a possible excitation must have in a system of size L 12 µm. In contrast, the high-k cut-off has its origin in the finite imaging resolution of the microscope objective which leads to k max /2π 1 µm −1 .
We note that Ref. [27] has shown that the dynamical preparation scheme with the scattering length ramp only leads to states close to the actual ground state of the system for the BEC and supersolid regime. In regime of isolated droplets, the preparation process can lead to a different number of droplets from realization to realization. In comparison to the BdG theory, where only excitation on top of the actual ground state are considered, this could lead to increased fluctuations for the isolated droplets in the experiment. For all scattering lengths, we expect the dynamical nature of the sample preparation to slightly modify the observed fluctuations compared to purely thermally populated collective modes.
In order to get a better intuition of the different contributions to the experimental structure factor, we use a model-free statistical analysis, the principle component analysis (PCA) [55]. PCA has a wide range of general applications [55], from image analysis to dimensional reduction of large dataset. For ultracold atomic systems, it turns out that there is a direct correspondence [56] between the principal components (PCs) and the density variation of a mode obtained by the Bogoliubov-de-Gennes (BdG) formalism. This allows us to identify a specific PC with a certain BdG mode as long as the imaging noise is negligible. One of PCA's properties is that the signal (in our case the centered images n j (r)) can be reconstructed exactly using a superposition of all PCs, using their respective weights. However, a small subset of PCs accounts for most of the information contained within the scattering length scan. This becomes essential when PCA is used for dimensional reduction.
In the experiment we combine all images independent of their scattering length to one large dataset. This allows us not only to illustrate the different contributions to the fluctuations and therefore the structure factor but also to track the weight of different PCs over a certain scattering length range [56]. We confirm by treating the BEC and droplet regime separately that the relevant lowlying PCs do not change significantly except for their order (or variance). The first two PCs, namely the roton modes, do not change their shape over the complete scattering length range. By limiting the scattering length range to the BEC or droplet side we find in addition to the roton modes only the quadrupole or breathing modes, respectively. This is in agreement with the results presented in the main text, where we analyzed the complete dataset and observed a vanishing weight of these components on the respective side of the phase transition.
C. Simulation details
In this section, we briefly summarize simulation details not explained in the main text. A system of dipolar atoms that undergoes the transition from a BEC to a supersolid can be described by means of the extended Gross-Pitaevskii equation (eGPE) [54,69,70] where we define H GP := H 0 + g|ψ| 2 + Φ dip [ψ] + g qf |ψ| 3 and ψ is normalized to the atom number N = d 3 r |ψ(r)| 2 . The term H 0 = −h 2 ∇ 2 /2m + V ext contains the kinetic energy and trap confinement V ext (r) = m(ω 2 x x 2 + ω 2 y y 2 + ω 2 z z 2 )/2. The contact interaction strength g = 4πh 2 a s /m is given by the scattering length a s .
The dipolar mean field po- is the dipolar interaction for aligned dipoles. The strength of the dipolar interaction is given by the parameter g dd = 4πh 2 a dd /m characterized by the dipolar length a dd = µ 0 µ 2 m m/(12πh 2 ). Here, µ m is the magnetic moment and ϑ is the angle between r and the magnetic field axis. Furthermore, the quantity g qf = 32/(3 √ π)ga 3/2 Q 5 ( dd ) represents quantum fluctuations within the local density approximation for dipolar systems [71,72], where dd = g dd /g = a dd /a s is the relative dipolar strength. In our simulations, we use the approximation Q 5 ( dd ) = 1 + 3 2 2 dd [70,[72][73][74][75]. The mean-field dipolar potential is effectively calculated using a Fourier transform, where we use a spherical cutoff for the dipolar potential. The cutoff radius is set to the size of the simulations space such that there is no spurious interaction between periodic images [69,76,77].
In order to study the elementary excitations, we use the BdG formalism as described in Ref. [38] and linearly expand the wavefunction ψ(r, t) = ψ 0 (r) + λ[u(r)e −iωt + v * (r)e iωt ]e −iµt/h around the ground state ψ 0 with the chemical potential µ. This ansatz together with equation (4) leads to a system of linear equations that can be expressed in matrix form. For the actual form of the BdG matrix representation we refer the interested reader to the literature [38,52,75]. We numerically solve these equations to obtain the modes u and v corresponding to the lowest excitation energieshω. Due to our finite-sized system, we obtain a discrete spectrum of elementary excitations, rather than a continuous one. In addition there is a systematic shift between the theoretical and experimental transition point of approximately ∆a s 2.6 a 0 [10,31,43,52,78].
D. Temperature-enhanced static structure factor
In order to obtain the dynamic structure factor from the Bogoliubov-spectrum we first define the theoretical density fluctuation corresponding to the mode j as δn j = f * j ψ 0 with f j = u j + v j and the ground state ψ 0 . The static structure factor then reads +n(ω j , T )δ(ω + ω j )), where δn j (k) is the Fourier transform of the fluctuations corresponding to the j'th mode and Supplementary Figure 2. Dynamic structure factor from numerical simulations. Dynamic structure factor for two different scattering lengths before (a) and just after the transition (b). The finite system size results in discrete modes. The red line is calculated via the Feynman-Bijl formula equation (1) at T = 0. The softening roton mode clearly dominates the dynamic structure factor when approaching the phase transition. The colorbar shows the amplitude of the dynamic structure factor. For illustration purposes the dynamic structure factor was convolved along the ω-axis with a Gaussian of width σ = 0.5 Hz.
The static structure factor S(k) = N −1 dω S(k, ω) is then given after integrating along ω [2] which leads to the result similar to equation (1) of the main text. Here we made use of the identity 2n(ω j , T ) + 1 = coth(hω j /2k B T ). Equation (6) clearly indicates that low-lying modes satisfying the conditionhω j k B T can be drastically enhanced. For our situation this means that the roton modes that soften towards the phase transition dominate the structure factor compared to all other modes. Only a finite number of modes are essentially necessary to quantitatively describe experiments at finite temperature.
In Fig. 2 we show the zero-temperature dynamic structure factor for two different scattering lengths, in the BEC phase and just after the phase transition. Due to our finite system size one can clearly see the discrete mode structure along the k x -and ω-axis. The color bar indicates the amplitude of the dynamic structure factor. The red line shows the excitation energy obtained via the Feynman-Bijl formula, equation (1) of the main text at T = 0. It illustrates that the continuous dispersion relation obtained from this equation yields a meaningful estimate of the discrete Bogoliubov spectrum.
E. Temperature dependence of the experimental excitation spectrum In Fig. 3 of the main text we show the dispersion relation determined from the experimental static structure factor. This is done by solving equation (1) numerically. For dilute, weakly-interacting Bose gases with negligible quantum and thermal depletion this is a valid approximation. Although the system is undergoing a phase transition, the gas parameter na 3 s 10 −5 is still small enough to consider it as dilute and weakly-interacting with negligible quantum depletion, which is the case in our situation. The thermal component at a temperature of 20 nK can be estimated to be less than 5 % [2].
In the situation where, as we have seen above, the structure factor is mainly dominated by the contribution from the two degenerate roton modes, it is possible to expand coth x 1/x in equation (1) for small energies [19], yielding S(k, T ) S(k, 0) 2k B T ε(k) =h 2 k 2 k B T mε(k) 2 .
Equation (7) might then also be used in a second step to calculate back the zero-temperature static structure factor. In fact we find due to the validity of this expansion that the expression (k) h 2 k 2 k B T /mS(k, T ) is a good approximation to the numerical solution of equation (1).
The usage of equation (1) requires knowledge of the temperature in the system. As measuring low temperatures with a vanishing thermal fraction is rather challenging, we show in Fig. 3 how the uncertainty of our temperature of 20(5) nK affects the determined excitation spectrum for two scattering lengths. | 7,896 | 2020-09-18T00:00:00.000 | [
"Physics"
] |
Photoluminescence enhancement in few-layer WS 2 films via Au nanoparticles
Nano-composites of two-dimensional atomic layered WS 2 and Au nanoparticles (AuNPs) have been fabricated by sulfurization of sputtered W films followed by immersing into HAuCl 4 aqueous solution. The morphology, structure and AuNPs distribution have been characterized by electron microscopy. The decorated AuNPs can be more densely formed on the edge and defective sites of triangle WS 2 . We have compared the optical absorption and photoluminescence of bare WS 2 and Au-decorated WS 2 layers. Enhancement in the photoluminescence is observed in the Au-WS 2 nano-composites, attributed to localized surface plasmonic ef-fect. This work provides the possibility to develop photonic application in two-dimensional materials. C 2015 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License.
I. INTRODUCTION
The unique properties of graphene with gapless at K and K ′ points in brilliouin zone have been exploited in the last decades. However, the applications of graphene are hindered in optoelectronic devices and field-effect transistors due to the absence of bandgap. Two-dimensional (2D) transition metal dichalcogenides (TMDCs) such as MoS 2 and WS 2 with different coulombic potentials of metal and sulfur atoms lead to the non-zero Semenoff mass so as to non-zero bandgap energies. 1 In monolayer WS 2 , the W atom is hexagonally sandwiched by two trigonal coordinated sulfur atoms in covalent bonds. The bandgap of WS 2 materials shift from indirect to direct while thinning-out the number of layer of WS 2 . For instance, the band-gap energy of WS 2 transits from 1.4 eV in bulk structure to 2.1 eV in monolayer crystal. [2][3][4] The tunable bandgap favors the applications in optoelectronics, such as photodetectors 3,4 and solar cells. [5][6][7] Enhanced light-emission is vital for both fundamental material study and development of optoelectronic device. Localized surface plasmon resonance (LSPR) excited by the interaction between photon and metal surface is a known effective approach to enhance light-emission, which has been evident in many systems. 8,9 However, a few reports have been found about the photoluminescence (PL) enhancement from TMDCs materials via plasmonic effect. 10,11 Earlier study suggested that exciton-plasmon interaction could be established between plasmonic resonator and few-layers WS 2 to enhance PL emission from atomic layered TMDCs. 12 In this study, we decorated Au nanoparticles (AuNPs) on the WS 2 via a simple chemical method. The spatial distribution of nano-sized Au has been observed through electron microscopy characterization after the chemical treatment. The PL measurement has been performed for the WS 2 sample with and without AuNPs coating. The increase of PL emission is mainly attributed to the intensity enhancement by the LSPR of the AuNPs. Particularly, those enhanced excitonic emission after the decoration of AuNPs is the signature of the enhanced emission, which is amplified by the field enhancement arising from LSPR excitation.
a Authors to whom correspondence should be addressed.
II. EXPERIMENTAL
Highly-crystalline micro-flakes WS 2 were grown by the sulfurization of thin sputtered tungsten film (W) on Si substrate with 1 µm thick SiO 2 . As-prepared W film and 0.25 g sulfur powders were placed into two Al 2 O 3 made crucibles respectively. The boat with sulfur powders was placed on the upstream of quartz tube and the other carrying W film was put on the center of quartz tube. Argon was used as the carrier gas with a flow rate of 150 sccm. The pressure needed to control is 240 Pa during the reaction. The growth temperature was set to 750 • C for 15 min and at the same time the S powder was evaporated at above 113 • C. At the end of reaction, the furnace temperature was cooled down naturally and finally the as-grown WS 2 samples were taken out from the system. Field-emission scanning electron microscopy (FE-SEM) (JEOL JSM -6335F) was used to capture the images.
III. RESULTS AND DISCUSSION
As shown in Figure 1(a) of FE-SEM image, it is clear that few layer triangle WS 2 has been fabricated with a size distribution approximately ranging from 10 to 30 µm. The sample can be transferred to another substrate using PMMA (MicroChem) with molecular weight of 950 K as the transfer medium. This process can be done by spin coating the PMMA on the sample initially at 500 rpm for 10 s, and subsequently at 300 rpm for 30 s. After that, the sample was baked at 100 • C for 10 min and then put into 1 M NaOH for 15 min. Finally the PMMA-capped WS 2 was washed by DI water to remove all chemicals residues. Figure 1(b) shows the high-resolution transmission electron microscopy (HR-TEM) images of WS 2 labelled with (110) and (100) miller index notation (inset of Fig. 1(b)), which match with the previously reported result. 13 After the confirmation of successfully fabricated WS 2 micro-sized flakes, the as-prepared WS 2 samples were immersed into 5 mM HAuCl 4 aqueous solution in time intervals of 40 s at room temperature. After that WS 2 was put into a dry box for 4 hr for drying. 14 The HR-TEM image in Figure 1(c) confirms the identity of AuNPs with interspacing of 0.23 nm and the HR-TEM image in Figure 1(d) reveals the distribution of AuNPs on WS 2 crystal. From the HR-TEM image, it is noted that the majority of the AuNPs exist along the edge of the single triangular WS 2 .
Measurements of PL spectra and mapping were carried out in a micro-Raman system (Horiba Jobin Yvon HR800) with PL mode by using an excitation laser with the wavelength of 488 nm. A 100x objective lens with numerical aperture NA of 0.9 was used for the measurement at room temperature. The spot size of laser is about 1µm. The laser power needs to be set at below ∼50 mW to avoid the appreciable heating effect. Figure 2(a) and 2(b) show the spatial mappings of PL intensity distribution on the surface of the bilayer WS 2 without and with AuNPs coating. The evidences claimed that the thickness of as-prepared WS 2 has been proven by Raman spectrum and height profile of AFM image in bilayer (not shown here). The light emission of the WS 2 can be observed in a range of 1-3 folded enhancement concentrated on the edge sites of WS 2 which is mainly due to the nucleation of Au NPs over there. The average PL enhancement of Au/WS 2 is about 2-folded. Such a phenomenon can be explained as follows. When WS 2 crystal was immersed into the aqueous HAuCl 4 solution, the precursor AuCl 4 − ions have more chances to reduce the AuNPs at the edge sites of WS 2 due to the unsaturated sulfur atoms which makes it easy to form Au-S bonding. 15 When Au and WS 2 hybrid nanostructures are in a close vicinity, the exciton-plasmon interaction occurs. 16 In order to gain an in-depth understanding of the PL enhancement, we have performed the optical absorption measurement on the obtained samples. Figure 2(c) and 2(d) show the absolute absorption spectra from the WS 2 sample without and with AuNPs decoration. Accordingly, the inset of Figure 2(d) shows the absorbance enhancement in existence of AuNPs. When AuNPs become an amplifier, the light field is enhanced due to the metal nanostructure in the proximity of the 2D semiconducting crystal. A majority of incident photons are absorbed by the Au-WS 2 nano-composites such that the interaction between excitons and plasmons gives rise to a large optical absorption enhancement of the semiconductor WS 2 as shown in Figure 2 From Figure 2(c) and 2(d), it also shows a manifest enhancement in the light absorption in contrast to the previous absorption spectrum featured a slightly blue shift in the absorption wavelength accompanying by the high energy emission and narrower line-width as well. The narrow line-width in the Au-decorating absorption spectrum also greatly improves the efficiency of light emission of WS 2 due to the arisen LSPR. 17 It can be clearly seen that three prominent resonant frequency peaks are occurred at 490 nm, 580 nm and 690 nm as shown in the inset of Figure 2(d).
According to the previous study, 18 the tunable resonant frequency of AuNPs arrangement responding in the visible light region fully matches the light emission of direct band-gap semiconducting WS 2 crystal.
Apart from the PL enhancement on the edge sites of AuNPs/WS 2 composite, it is interesting to investigate how the PL is varied on the defective sites of the composite. In order to study the influence of defects on Au nucleation, a solid-state laser with the wavelength of 488 nm and power of about 0.5 mW was irradiated on the sample for 10 min. Therefore, some defect sites may be induced on the bilayer WS 2 due to potentially banishment of sulfur atoms. 19 Subsequently, we repeated the Au decoration on the new WS 2 sample by the same experimental condition as the immersion of HAuCl 4 − solution process. Accordingly, the defective sites of 2D WS 2 should favor the Au nucleation on those sulfur vacancies. Compared to the AuNPs-WS 2 composite before laser irradiation, Au arrangement of the irradiated sample is obviously denser from the SEM image as shown in Figure 3(a), we can observe the deposited AuNPs are occurred not only on the edge sites without laser treatment, but also in the central part of the WS 2 triangle flake after the laser treatment. Therefore, AuNPs would be anchored mainly on the surface of WS 2 via following the trace of laser irradiation, which is not limited to the edges sites. By counting on the number of AuNPs in the marked five regions of Figure 3(a), there are around 18 to 22 granules of AuNPs in each region. We have compared the light emission intensity of PL spectra from position 1 to 5 before and after Au decoration as shown in Figure 3(b). In Figure 3(b), it is apparent that PL enhancement factor varies in different positions and the maximum 8 folded intensity can be achieved in the position 5. It seems that there is weak relationship between the PL enhancement factor and density of AuNPs over the WS 2 sample as previous reports. 8,9 Hence, other factors such as interspacing between AuNPs may be responsible for the observed PL enhancement. In addition to the enhancement in PL intensity, a blue shift in PL spectra can be observed when decorating AuNPs on 2D WS 2 . Previous simulation implied that the pair of Au NPs configuration could give rise to a slight spectral blue shift due to plasmonic resonance. 20 Hence, it reveals that a blue shift in the PL emission wavelength in Figure 3(b) may correlate with the AuNPs arrangement. Apart from the study of PL enhancement from the above described single triangle flake, we have measured PL variation from more triangle flakes. Figure 4 shows the average PL enhancement by decorating AuNPs on total 32 pieces of triangle WS 2 relevant to the same thickness. From the statistical analysis, we found that the most AuNPs/WS 2 induced PL enhancement is around two-folded on the whole, further supporting the feasibility of enhancing PL via the simple method of decorating Au NPs on few layer WS 2 .
IV. CONCLUSION
In conclusion, we have performed the synthesis, structural characterization and PL properties of Au-WS 2 nanocomposites. AuNPs can be selectively anchored on the surface of edge and defective sites of triangle atomic layered WS 2 . The conjugate provides an opportunity to modify PL features of 2D material due to increased optical absorption by plasmonic effect. We have demonstrated an enhancement and spatial distribution of PL in the Au-WS 2 nanocomposites. The simple method of Au NPs decoration may be generally expanded to the PL enhancement in a variety of 2D materials, which may provide a material platform to improve and exploit nanoscale photonic devices. | 2,793 | 2015-06-24T00:00:00.000 | [
"Materials Science",
"Chemistry"
] |
Convective and diffusive effects on particle transport in asymmetric periodic capillaries
We present here results of a theoretical investigation of particle transport in longitudinally asymmetric but axially symmetric capillaries, allowing for the influence of both diffusion and convection. In this study we have focused attention primarily on characterizing the influence of tube geometry and applied hydraulic pressure on the magnitude, direction and rate of transport of particles in axi-symmetric, saw-tooth shaped tubes. Three initial value problems are considered. The first involves the evolution of a fixed number of particles initially confined to a central wave-section. The second involves the evolution of the same initial state but including an ongoing production of particles in the central wave-section. The third involves the evolution of particles a fully laden tube. Based on a physical model of convective-diffusive transport, assuming an underlying oscillatory fluid velocity field that is unaffected by the presence of the particles, we find that transport rates and even net transport directions depend critically on the design specifics, such as tube geometry, flow rate, initial particle configuration and whether or not particles are continuously introduced. The second transient scenario is qualitatively independent of the details of how particles are generated. In the third scenario there is no net transport. As the study is fundamental in nature, our findings could engender greater understanding of practical systems.
Introduction
In a recent paper [1] we investigated laminar hydrodynamic flow through infinite axisymmetric, periodic capillaries by means of a boundary element method. The flow field, even while laminar, exhibited vortex behavior in certain regions of the undulating tube, whose geometry was characterized by a narrow throat of specified radius and length, and a finitelength expansion zone, whose radius was a function of the axial coordinate. We quantified the onset of flow recirculation appearing in the expansion sections of the tube as a function of throat and expansion zone dimensions, focusing the study on tube geometries that possessed axial symmetry. We found that for a given geometric shape, a critical expansion zone dimension existed above which a flow vortex region developed and grew (in each repeat section of PLOS ONE | https://doi.org/10.1371/journal.pone.0183127 August 25, 2017 1 / 21 a1111111111 a1111111111 a1111111111 a1111111111 a1111111111 the periodic tube). The growth as a function of geometric parameter was limited by the appearance of a second recirculation zone above the first. The natural question we now ask is what influences, if any, do these recirculation zones have on the net transport of particles suspended in the fluid. Assuming that the particles do not alter the makeup of the flow field, we are specifically led to ask the question of whether there is an interplay between flow recirculation and particle diffusion. Is recirculation in combination with or in competition with diffusion, in the net transport of particles? To the best of our knowledge this question has not been addressed. The study presented here may provide some elemental physical understanding to achieve the goal of protein, DNA or other macromolecular separation, separation of biological cells, as well as fine mineral particle separation [2][3][4][5][6][7] and their transport. There are many features of the current work which point to it complementing the recent analysis on hydrodynamic particle transport in tubes by Herringer et al. [8]. Considering a more direct application, the model and simulation results can facilitate greater understanding of subcutaneous drug delivery. Drug molecules are introduced to a vascular system either by direct infusion (injection) into a blood vessel or at the conclusion of a diffusive process through extra-vascular tissue following topical (skin) application. The in vivo transport is then dictated by vascular diffusion from the point of entry and influenced by convective forces due to the action of the heart pump [9][10][11][12][13][14]. Although a direct comparison between our model results and practical measurements in living tissues is not realistic, some useful insight into the latter system may be derived.
We thus investigate the dynamic behavior of particle distributions in infinite, periodic tubes filled with a viscous liquid, which is disturbed by the action of a periodic pressure field that drives the fluid forwards and backwards with no net fluid flow. In this theoretical study three scenarios are considered. First consideration is given to the initial value problem of a fixed number of particles, initially distributed over one wave-section, and then allowed to convect and diffuse into adjoining wave-sections. Second is an initial value problem with the same initial state but to which is added a continual supply of particles in the same single wavesection. Both scenarios represent transient states of the system and arguably have a correspondence with localized drug introduction in a blood vessel (see, e.g., Fig 4 in Dancik et al. [13]). Finally, we consider an initial value problem adopting the state of a completely filled tube, with particles subjected to an ongoing oscillatory fluid flux. In the latter case the entire system is spatially periodic. The dynamics of these systems is governed by a hydrodynamic set of equations for the fluid flow and a diffusive-convective equation with a forcing of particles due to the motion of the suspending fluid. In the second scenario, a particle generation term, applied in one section only, appears in the diffusive-convective equation. The study, as a function of tube geometry, is fundamental in that the questions posed above are answered by monitoring the net direction and rate of transport of particles through the tube at subsequent times.
We consider tube shape and geometry as primary factors influencing the transport phenomenon. We establish the effect on transport of longitudinal asymmetry, amplitude and wavelength of corrugation, sharpness of expansion regions as well as length and radius of the throat regions. We concentrate on geometric variants of a smoothed saw-tooth profile. That is, in contrast to the study reported by Islam, et al. [1], we focus on tube profile asymmetry. The theoretical model is described in the next section, while in the Simulation Results section we explore a sufficient extent of parameter space to characterize the contributions of diffusion and convection as well as the effect of geometry on facilitating particle transport in the three scenarios. A discussion of the phenomenon of particle transport within an asymmetric, periodic capillary, based on our findings, is relegated to the Discussion section.
Theoretical model and governing equations
We consider a dispersion of particles of number density " cð" x; " tÞ suspended in an incompressible fluid of density ρ and dynamic viscosity μ confined to an infinite, periodic axi-symmetric capillary. Gravitational effects are ignored, which is equivalent to assuming that the particle density is equal to the density of the fluid. Many of the fluid and particle assumptions outlined below are similar to the assumptions adopted in recent work on peristaltic transport of nanoparticles in micro-channels [15,16] (note that the latter work involves a two-dimensional system in contrast to our axi-symmetric system).
A point on the surface of the axi-symmetric tube is given by the vector position " hð" zÞr, whereẑ andr are unit vectors in the longitudinal and radial directions, respectively, and " hð" zÞ defines the tube surface. By assumption, spatial periodicity implies that " hð" zÞ ¼ " hð" z þ LÞ where L is the spatial period of the periodic profile. The geometric variables implicit in " h will depend on the shape assumed. However, characteristic to all shapes that we consider is a throat region of radius, " B, and an expansion region of maximum radius, " A þ " B. The shape we consider predominantly in this paper is that of an asymmetric saw-tooth profile. However, for comparison we also consider the special cases of a symmetric triangular profile and a straight cylinder. Both the saw-tooth and triangular profiles have been smoothed to eliminate corners. Fortunately, this smoothing also appears in experimental studies. The profiles we study then have a differentiable surface tangent vector. A schematic of the periodic tube in longitudinal section and an illustration of the smoothed saw-tooth profile with defining parameters are shown in Fig 1. In the following sections we refer to "leading" and "trailing" edges of the saw tooth profile as explained in the figure.
The suspending fluid is assumed to be in a state of flow, driven by a time-varying pressure gradient D " Pð " tÞ=L, where D " Pð " tÞ is the instantaneous pressure drop across one wave-section. We shall assume both low Reynolds number flow and a sufficiently slow time variation of the pressure to neglect the transient and convective terms in the Navier-Stokes equations and allow use of the time independent Stokes system of equations. The latter assumption implies that relative to the time scale of the pressure variation, the flow is able to adjust immediately. We also assume that the flow is instantaneously responsive to the slowly varying applied pressure so that the time dependence of the fluid velocity mimics that of the applied pressure.
We also assume that the particle dispersion is sufficiently dilute and particle size sufficiently small that one can neglect particle-particle and particle-surface interactions. More importantly, as mentioned earlier, with these assumptions we neglect the influence of the particles themselves on the development of the fluid flow field. Under these assumptions, the hydrodynamic and particle transport problems are partially decoupled.
Hydrodynamic flow in an infinite periodic tube
Given the assumptions outlined above, the hydrodynamic problem is decoupled from the particle transport problem. The former problem can then be solved within a single wave-section of the tube under the condition of periodicity. This is in fact the problem solved by Islam, et al. [1] by a boundary element method applicable to an infinite periodic tube. The reader is referred to that work for details. In summary, the governing equations are the time-independent, linear momentum and continuity equations, respectively, with a stick boundary condition on the tube surface, " S, and pressure difference D " P ¼ " Pð" xÞ À " Pð" x þ LẑÞ applied across one wave-section of the tube. For flow in an infinite periodic tube one can write " Pð" xÞ ¼ À ðDPÞ amp " z=L þ " }ð" xÞ where the amplitude (ΔP) amp will later be replaced by a slowly varying function of time and where, meanwhile, " }ð" xÞ ¼ " }ð" x þ LẑÞ is a periodic function of " z. In Eq (1), commonly referred to as the Stokes system, " pð" r; " zÞ is the hydrodynamic pressure, μ is the viscosity of the fluid and " uð" z; " rÞ is the flow velocity.
Presuming comparable radial and axial characteristic length scales, in non-dimensional variables, z ¼ " z=L; r ¼ " r=L; p ¼ " p=ðDPÞ amp ; u ¼ " um=ðLðDPÞ amp Þ, the Stokes equations become The above spatial non-dimensionalisation would not be applicable for channels with highly disparate lateral and longitudinal characteristic lengths for which case the z and r variables should be scaled differently (see, e.g., [15,16] evaluated on the tube surface and interior, respectively. Here, dS is an element of surface area on the boundary S at y, V is the interior tube domain and FðyÞ ¼ À SðyÞ ÁnðyÞ is the force per unit area exerted on the fluid by the boundary at position y (boundary force), with S(y) as the stress tensor, The surface normalnðyÞ ¼ ðn r ;n z Þ is directed outward from the volume V. Also, G(x, y) and H(x, y) are known functions of the sample point x and source point y, defined as follows [17]: Eqs (4) and (5) can be simplified considerably for an infinite, axi-symmetric and periodic tube. Axi-symmetry obviates the dependence on azimuthal angle. Consequently, integration over this variable can be performed immediately, reducing the two-dimensional surface integrals effectively to one-dimensional integrals over the boundary line defined by a single longitudinal section. Periodicity can be utilized to reduce these integrals over the infinite tube to a set of finite, one-dimensional integrals confined to one wave-section. Details of this procedure can be found in Islam et al. [1].
Axisymmetric particle diffusive-convective transport
We consider the scenario of time and spatial scales that lead to diffusive effects that are commensurate with the influence of fluid convection. We thus consider here a macroscopic, continuum description and solve the axi-symmetric convective diffusion equation for the particle number density, " cð" r; " z; " tÞ: where particle size, a, is partially and indirectly taken into account through its appearance in the Stokes-Einstein formula D th = k B T temp /(6πμa) for the diffusion constant, where k B is Boltzmann's constant and T temp is temperature in degrees Kelvin. This equation governs the distribution of point particles assuming a fluid convection contribution that is driven by a fluid velocity determined in the absence of particles, i.e., " uð" r; " z; " tÞ is treated as known in this equation.
The final term in Eq (6), " cð" z; " tÞ, which only appears in the second initial value problem considered here, is a particle generation term that is nonzero only in the central wavesection, À L=2 " z L=2 (details are given later). It represents the uniform (in " r) and continual (in " t) production within that wave-section of particles that are subsequently transported into neighboring sections. This term has been added with little consideration for how it could be engineered in practice although one can imagine it to model approximately the introduction of particles (under adequate pressure) through a porous central section boundary. Maintaining an ongoing generation term fortuitously also ensures that derivative calculations remain significantly greater than numerical error by one to two orders of magnitude, or greater. Given that we are interested in transport through an infinite periodic, axi-symmetric capillary of longitudinal cross section, " hð" zÞ, it suffices to consider particles introduced by means of a smooth function that possesses the properties of (a) being non-zero only within the central wave-section, W 0 , approaching zero smoothly as " z ! AEL=2, (b) being a differentiable function of " z, and (c) resulting in a zero first moment (see Simulation results) when integrated over the length of the central wave-section. A function satisfying these conditions is where h min = B, W 0 ¼ fð" r; " zÞ; " r 2 ð0; " hð" zÞÞ; À L=2 " z L=2g and C(0) = c 0 , C(t) = c 0 /T for " t > 0 and c 0 is a prescribed scalar. This assumes a constant (in " t) and uniform (in " r) supply of particles in W 0 ; over a period of the pressure oscillation, T, the function " cð" z; " tÞ generates a constant particle number, pc 0 L " h 2 min . Including Eqs (7) in (6) allows particles to be introduced without preventing movement across W 0 .
Having thus described this functional form and associated initial state, we demonstrate in S1 Appendix (Section C) that qualitative behavior of this initial value problem remains unchanged if we instead invoke a simpler and cruder initial state, cð" r; " z; 0Þ ¼ c 0 for " x 2 W 0 , 0 otherwise, and a different functional form, Using Eq (6) we follow the time development of the particle distribution through all wavesections.
Eq (6) is clearly an approximate representation for finite sized particles (with partial account through D th ) [18]. For particles of finite size, higher order contributions arising from (a) collisions of two or more particles, (b) reduced available volume for diffusion and (c) a modified flow field, would have to be considered. Nevertheless, we expect that zeroth-order behavior (addressing the questions of whether or not there is net transport, the relative influences of diffusion and convection, and the role of recirculation) can be represented by such a description.
The boundary conditions that complement Eq (6) are the conditions of no particle flux through the tube wall and of axi-symmetry, respectively: and Using the same normalisation procedure that led to Eq (3), complemented by the density normalisation c ¼ " c=c 0 and time normalisation t ¼ " t=T, the convective diffusion equation, Eq (6), becomes where the dimensionless constants are defined as α = D th T/L 2 and β = (ΔP) amp T/μ, i.e., ratios of diffusive and convective strengths, respectively, to system-intrinsic values. The ratio β: α is the mass transfer Peclet number, p e , viz, the ratio of convective to molecular mass transfer. The non-dimensional particle generation term in Eq (7) is for t ! 0, and appears only in one wave-section of the tube and only in the second initial value problem.
In this work we consider the numerical solution of Eq (11) for a range of values of α and β to ascertain the relative importance of diffusion versus convection for transport in longitudinally asymmetric capillaries, for a range of different tube geometries.
In S1 Appendix (Section C) we complement the analysis and simulation results presented herein with an analogous numerical study adopting the generation function in Eq (8) for t > 0 and the simpler initial condition c = 1 in W 0 . A comparison of the two sets of results will show that the qualitative behavior found is independent of the detailed nature of the chosen " cð" z; " tÞ function and of the initial condition, " cð" r; " z; 0Þ, although this scenario is different to the scenario of no particle generation.
Numerical approach
To solve the one-dimensional, simplified version of integral Eq (4) (which is not reproduced here due to its size and complexity), the tube boundary curve over which integrals are performed is partitioned into N elements according to a grid defined along the z-axis; the integrals are then re-expressed as a sum of integrals over these small elements. For a sufficiently fine grid, the unknowns, which are the surface forces, are assumed constant over the elements and extracted from under the integral signs; the segmented line integrals over the remaining known quantities are then approximately evaluated using the trapezoidal rule. The equations making up this boundary element approximation, together with boundary condition (2), comprise a linear system of 2N algebraic equations for the 2N unknown components of the force distribution on the tube surface, f = (f r , f z ). These equations are solved using an International Mathematics and Statistics Library (IMSL) routine in Fortran. More numerical details are given in Islam et al. [1]. Within the creeping flow approximation, given a slowly varying pressure difference, ΔP(t), the fluid velocity field is assumed to adapt instantaneously to changes in the pressure.
Once the force distribution on the tube surface is known, it is used to calculate the fluid velocity profile in the tube interior via the one dimensional version of Eq (5). It follows from the assumption that the particle distribution does not influence the fluid flow field and the condition of periodicity, that a velocity evaluation at position " x 0 in the central wave-section is periodically reproduced in all wave-sections at points modulo L of the central point, i.e., at points " The assumption of an instantaneously responsive flow field to a slowly varying pressure difference implies that the time dependence of u corresponds to that of ΔP(t). Defined over a single period only, the non-dimensional pressure differences which we consider here are, DP a ðtÞ ¼ P 0 sin ð2ptÞ; 0 t < 1; DP c ðtÞ ¼ À P 0 sin ð2ptÞ; 0 t < 1: In the above, P 0 1 (normalized by (ΔP) amp ) is a fundamental, constant pressure amplitude; the dimensional angular frequency, ω = 2π/T, is prescribed with corresponding period, T (with " t scaled by T, the argument of the sinusoidal functions involves only the factor of 2π).
In contrast to the method used to solve the fluid dynamic problem, we solve Eq (11), with accompanying boundary and initial conditions, by an explicit discretization scheme applied to a finite number of wave-sections. In the majority of cases, 61 wave-sections in all were considered to be the simulation domain; with K = 30 sections on either side of a central section. The spatial domain was partitioned into a rectangular grid fðz i ; r j Þg N;M i;j¼1 based on uniform linear grids of size Δr and Δz, respectively, established for given tube dimensions, where N(M) is the number of grid points in the z(r)−direction in one wave-section. Although it may have been sensible to invoke a nonuniform grid in order to get better resolution near corner regions, we found that if the grid was sufficiently fine, no accuracy issues emerged. Details of the numerical approach can be found in S1 Appendix (Section A).
Simulation results
Neglecting particle-particle and particle-surface forces, the only physical mechanisms that contribute to the transport of particles through a channel are (Brownian) diffusion and convection. The factors that may influence the extent of transport include the channel shape, the relative geometric dimensions of the channel, fluid viscosity μ, particle size a, the flow characteristics (specifically, the existence or absence of fluid recirculation and the applied pressure profile) and the applied pressure. The model adopted here takes account of these features to varying degrees of approximation. For example, particle size is indirectly taken into account through its appearance in D th . This level of approximation is consistent with Eq (6).
The two main theoretical questions we address here are, firstly, does net transport occur in channels of nonuniform cross-section and, secondly, what factors do then contribute? A secondary question concerns the influence of the particle generation term itself. Our principal results, used to answer these questions, are expressed in terms of the cross-section and wavelength-averaged partial moments of the particle distribution: for n = 0, 1, 2, . . ., k = −K, . . ., K.
In particular, the accumulated measure is used to track the total number of particles in the tube as a function of time. The second important quantity we employ is the time dependent first moment overall: which is a measure of the asymmetry of the distribution evaluated over the entire simulated tube. For future reference we point out that for the choice of generation function, Eq (7), for both longitudinally symmetric and asymmetric tubes, the initial value, C 1 (0), is zero. This is to be contrasted with the case presented in S1 Appendix (Section C) (and shown in Fig 2), in which we consider an alternative initial condition and generation function.
Transient behavior: No particle generation
Considering the dynamic behavior of a fixed number of particles initially confined to a section of a long tube, it is expected that under diffusive influences alone and for a longitudinally symmetric tube profile (including a straight cylindrical tube) particles would disperse to equal degrees in both directions. Such is not the case for a periodically asymmetric tube as illustrated by the two β = 0 cases in Fig 2(a) and 2(b); the tube having the larger expansion regions results in a larger distribution asymmetry (larger first moment, C 1 (t)). In Fig 2(a) the state of the distribution asymmetry is apparently established by t = 1 (time point measured in pressure oscillations) and maintained constant thereafter for the tube with the larger expansion regions and decreasing for t > 1 for the tube with the smaller expansion regions. As already mentioned the initial particle distribution in (a) was chosen to give a zero first moment regardless of tube shape, which contrasts it with (b) where different shapes (A = 0.24 and A = 0.48) result in different initial first moments. Since diffusion is governed by concentration gradients it is not surprising to find the different dynamic behavior in Fig 2(a) and 2(b) brought about by the different initial distributions. For the system depicted in Fig 2(a), oscillatory convection effects, superimposed on diffusion, alter the dynamics quantitatively in the tube with the larger expansion regions (which also possess recirculation zones), and qualitatively in the case of the tube with the smaller expansion regions (no recirculation). In the latter case there is clearly a switch in net particle transport (from positive z, to negative z) which appears for a β value between 100 and 200 (Peclet number, 1000 < p e < 2000). In contrast, for the case of Fig 2(b) there does not appear to be any variation (with β) in the first moment at any anniversary of the pressure oscillation.
Transient behavior: Particle generation in central wave-section
Diffusion only: Profile asymmetry as a necessary condition for transport. The data depicted in Fig 3 readily answers the first main question of whether it is possible for preferential transport to occur. Fig 3(a)-3(c) show representative results of the first moment, C 1 (t), of the particle distributions, the total particle numbers, i.e., the zeroth moment C 0 (t), and the ratio of the two, respectively. Although no convection is yet assumed (β = 0) the data is plotted as a function of time measured in units of equivalent cycles of the pressure. The curves in all three panels reflect the response to increases in expansion region (increasing A) or increases in the throat region (increasing B). The zeroth moments have been normalized by the particle numbers initially contained within the central wave-section. That the C 1 curves do not remain constant over time signifies a disparity between the amount of material transported to the right compared with what has been transported to the left (by diffusion alone). The precise conditions under which the results in Fig 3 are derived are given in the caption. Of particular note is the fact that the direction of net transport is in the direction of the leading edge of the saw tooth profile (see Fig 1(b)).
If we modify the tube shape by displacing laterally the position of the expansion peak to give a symmetric triangular wave-section, while maintaining the same throat width (B) and height of expansion peak (A), we obtain results (data not shown) for the condition of geometric symmetry. In this special case we find no difference in the amount of material transported to the right or left. That is, we find no net transport of particles: C 1 (t) 0 for all t ! 0. Asymmetry in shape is clearly a critical factor. However, another critical factor is the aspect ratio of the tube profile, A/B. A decrease in this ratio diminishes the importance of the expansion region, placing greater emphasis on the influence of the throat. However, since this can be achieved in two ways, different consequences ensue for transport (Fig 3(a)-3(c)). As mentioned earlier, with our choice of ψ the particles are deliberately distributed unevenly at t = 0 to counter the profile asymmetry so as to give a zero first moment initially. Thus, the density of particles is greater in the negative z−half of the central wave-section. Given the relatively larger volume available immediately to the right of center, the concentration gradient results in a greater number of particles finding their way to the positive z-half of the tube domain.
In the first set of curves of Fig 3 (dashed lines) the height of the expansion zone is kept fixed (A = 0.48) for two values of the throat radius, B: 0.4 and 0.8. The second set (solid lines) features a constant throat radius (B = 0.2) and decreasing expansion zone height: 0.24 and 0.12. These specific combinations of A and B constrain the geometric ratio A/B to the common values of 1.2 and 0.6, respectively. A more important reason for this choice will become apparent in a later section. Since the two sets of curves do not superimpose, this ratio is not a universal number, which suggests that different experimental designs will necessarily have different transport characteristics.
Note that decreasing either B or A decreases the wave-section volume. However, the nature of ψ is such that the number of particles contained within the tube decreases only in the former case but remains constant in the latter case (Fig 3(b)). On the other hand, decreasing either B or A decreases the first moment, C 1 (t), as the profile adopts a more symmetric shape. Not surprisingly, the zeroth moment (Fig 3(b)) increases with time in all cases as more particles are injected into the system, with the largest increases appearing for the larger systems (A = 0.48 and B = 0.4, 0.8).
Considering the time development, one of the most important conclusions to draw from (Fig 3(a) and 3(c)) is that, under the simulated conditions, the action of diffusion alone results in a net distribution of particles in the positive z-direction, i.e., in the direction of the leading edge of the saw-tooth profile. Taken in conjunction with the qualitatively similar results given in S1 Appendix (Section C), we conclude that the movement towards the positive z-direction is not due specifically to our choice of particle generating function, even though ψ(z, t) does influence the quantitative degree of propagation. When the throat is sufficiently narrow, the expansion region has the dominant influence, with the concentration gradient driving more particles in the positive z-direction. This influence increases with increasing A. Increases in B at fixed asymmetric A not only increase the total volume in the tube, but also the total particle number generated through ψ (1 ! h 2 min =h 2 ðzÞ ! B 2 =ðA þ BÞ 2 ! 1 for B ) 1). There will thus be an increasing proportion of particles produced in the leading edge half of the central wavesection with increasing B, which will promote a larger degree of diffusion in the positive zdirection. As A is decreased, the slopes of the first moment profiles decrease in magnitude. In the limit A ! 0 for fixed B, we obtain a straight cylindrical tube for which there is no net direction of transport of particles (C 1 0). Thus, we arrive at our fundamental proposition that tube asymmetry is a necessary condition for net particle transport by the process of diffusion. The associated conclusion, given that we arrive at the same outcome with two choices of generating function (Eq (7) here and Eq (S26) of S1 Appendix (Section C)) is that the qualitative behavior is not governed by the method of introducing particles. It is worth noting that for t ) 1, the large expansion zone results A = 0.48 in Fig 3(c) (with analogous outcomes in the convective case, see later) indicate the linear relationship C 1 (t) = κC 0 (t), with the proportionality coefficient, κ, being a positive, time-independent, decreasing function of B, while for the narrow throat case (B = 0.2), the proportionality coefficient is a linear decreasing function of time but increasing function of expansion height.
We remark, finally, that the end points of the curves in Fig 3(a) and 3(b) are indicative of the times taken for the particle distributions to reach the 10 th wave-section on one or other side of the central wave-section in our simulated system, W ±10 . These are also the ordinate axis intercepts of the curves in Fig 4 (measured in equivalent cycles of a pressure oscillation, T). The latter figure summarizes the fact that for constant expansion zone dimensions, the system with the greatest (smallest) throat opening, facilitating (restricting) passage through the tube, has the shortest (longest) transport time. However, it is also apparent that the more accentuated is the expansion zone, the slower the progression of the particles. From a design perspective, these results suggest that the slower progress with an accentuated expansion zone can be offset by a sufficiently large throat. This is discussed further in the next section.
Diffusion and convection: Positive or negative reinforcement of transport?. As the ratio A/B can be altered either by increasing the throat radius at a fixed expansion dimension or by exaggerating the expansion zone at a fixed throat size, it is not surprising that different convective behavior can result. The different effects are, moreover, compounded by the coupling of convection with diffusion. Fig 5(b) depicts the system's response (C 1 (t)/C 0 (t)) to increasing strength of convection (β ! 0) for an applied pressure of ΔP a (t) = sin(2πt) at constant oscillation period. Two profile combinations (A = 0.48, B = 0.2 with A/B = 2.4 and A = 0.24, B = 0.2 with A/B = 1.2), are considered. In the cases shown, convection biases the particle distribution toward negative z-values, the effect of which is reinforced with increased β. Although both tube shapes resulted in positive first moments in the case of diffusion alone (see Fig 3(a) and 3(c)), with convection present (specifically developing as sin(2πt)), a greater proportion of particles now advance in the negative z-direction. The near-horizontal asymptotes of the ratio of moments (Fig 5(b)) again imply proportional relationships, C 1 = κC 0 , this time with κ < 0 in all non-zero β cases. Since the total particle number C 0 is an increasing function of time as a result of the constant injection of particles (all cases lie on a single curve since B is kept constant (data not shown)), the first moment increases in magnitude at a proportional rate (Fig 5(a)). That is, the peak of the particle distribution moves progressively toward z = −1, i.e., in the direction of the profile's trailing edge. We remark here that we get qualitatively similar behavior (see Fig C of S1 Appendix) using our alternative choice of ψ(z, t), which supports the impression that this trend is independent of how particles are introduced into the tube.
As to the results themselves, in the case of convection and diffusion, with a pressure differential ΔP a = sin(2πt) driving the fluid initially in the positive direction, the net particle motion is in the negative z-direction. The counterintuitive behavior (featured in Figs 5-8) is due to the interplay between the continual addition of particles in the central wave-section and the Particle transport in capillaries specific form of the pressure gradient. It can be understood by appeal to the following argument. Consider the simpler system of particles in a straight tube subjected only to convection (α = 0), and assume that the fluid displacement amplitude is less than one cell in length. In this case, the sinusoidal flow will first convect the particles initially present, forwards and then backwards, returning them to their initial location after one complete pressure cycle. However, all through this oscillation, particles are being generated in the W 0 wave-section. Thus, in the first half-period of oscillation, particles are present in the zeroth and first wave-sections only. During the second half-period of fluid oscillation, all particles are convected in the negative zdirection. At the conclusion of the first pressure oscillation (at t = 1) particles will be present in the W 0 and the W −1 wave-sections only. Thus, at t = 1, the first time point plotted in the figures, the first moment of the distribution will be negative. The process is repeated during the second and subsequent pressure cycles, with particles continually being added to these two "blocks". The net result, with more particles specifically and progressively added to wavesection W −1 , steadily shifts the balance of the particle distribution to negative z. Including the effect of diffusion only smears this pristine state of a two-wave-section concentration to include particles in neighboring wave-sections, on either side: k positive and negative. Similarly, allowing for expansion regions (nonzero A) modifies the picture further, but the underlying convective process remains the dominant influence. Some further qualifying comments on this phenomenon appear in the next section.
A summary of the times the particles first reach either wave-section W −10 or W +10 as a function of convection strength, β, for some different geometric conditions is provided in Fig 4. As already remarked, the greater the throat dimension, the quicker the particles are transported to the ends of the simulated system.
At this point it is worth drawing attention to the different behavior demonstrated by the two transient scenarios (i.e., the case of no particles generated in the tube's central wavesection versus the case of particles continually added in that wave-section). In absolute terms, the inclusion of particle generation in one wave-section (irrespective of form, see S1 Appendix (Section C)) is sufficient to change the nature of particle transport: compare Figs 2(a) and 3(a) Particle transport in capillaries and 5(a). However, in relative terms there is less distinction. Since particles are continually produced in time in this second transient scenario and as we have found C 1 = κC 0 in this scenario, the most appropriate comparison is between the C 1 results shown in Fig 2(a) and the C 1 /C 0 results of Fig 3(c), and more clearly those of Fig 5(b). According to the findings reported in Islam, et al. [1], beyond some critical amplitude, A Ã , of the expansion region, which depends to varying degrees on other geometric factors (throat width, the degree of asymmetry and width of the expansion region, and somewhat on the degree of smoothing of the profile), the flow field can exhibit one or more zones of recirculation in the expansion regions. It is not unreasonable then for particles that diffuse into one such region to become caught in a recirculation flow, provided the strength of the hydrodynamic flow is sufficient to dominate over diffusive motion. Thus, for sufficiently large β there will be regular periods within which trapped particles will circulate in these zones and not propagate from one wave-section to the next. However, for a temporal oscillatory flow field there will always be periods when the velocity field will be "weak" in some sense compared to diffusion, allowing the particles to diffuse out of these zones into or near to the throat region where they can be convected out of that section in the next cycle. The natural question to ask is whether this process assists or hinders the general diffusive movement of particles. That is, the question is whether recirculation opposes or reinforces diffusive transport. A partial answer can already be deduced from the results discussed earlier. The data shown in Fig 5, referring to the convective behavior in tubes whose shape proportions have the ratio A/B of 1.2 or 2.4, shows that although convection opposes the diffusive trend toward positive z values, forcing the mean of the distribution to negative z, recirculation, which is present in the case A/B = 2.4, reduces this influence.
Plots of ratios of instantaneous particle numbers in the recirculation regions of wavesections W −2 and W 2 to particle numbers in the remainder of those wave-sections are shown in Fig 6. Data points correspond to quarter periods of pressure oscillation (symbols at odd numbered quarter periods have been removed for improved clarity). The smooth continuous curve in Fig 6(a) judiciously to ensure sufficient advance to steady state conditions within a reasonable number of oscillations, yet be representative of what would transpire in all sections. In (a) the ratio maxima appear at completions of pressure oscillations, while ratio minima appear during pressure lulls midway through oscillations. With time there is an obvious increasing trend toward steady state both for the β = 0 case as well as for the (common) bottom envelope through points of minima for the nonzero β values. The (different) upper envelopes through the points of maxima appear to plateau to constant values sooner than do the minima. It is not clear from the figure but the upper envelopes actually have a slight negative slope with time. As a whole, the results suggest the tendency to converge toward the asymptotic value given by the volume ratio, which is consistent with the picture of a uniform distribution at steady state, to be discussed shortly. In (b) the dynamic situations on either side of the central wave-section are contrasted; most obvious is the 180˚phase difference between the results, which is not surprising given the oscillatory nature of the pressure. But what is also clear is the particle distribution asymmetry between k = 2 and k = −2 underlying the net negative first moment shown in Fig 5. Complementary information is found in the results shown in Fig 7 where we compare transport in tubes with a common throat dimension, B = 0.2, but with subcritical and supercritical expansion zone dimensions. These results reinforce the idea that recirculation reduces the tendency of convection to transport particles in the negative z-direction; particles become trapped in regions of recirculation flow and are thus unavailable for convective transport during a significant proportion of a pressure cycle. Recirculation is thus an important design characteristic to consider in the fabrication of micro-and nano-channels. Moreover, within the class of non-recirculation flows, a factor of two difference in expansion amplitude does not result in a proportionate change in net transport. On the other hand, within the class of recirculation flows, a 50% change in expansion zone amplitude results in a two fold change in the first moment (Fig 7(a)). Note again that since the throat dimension is kept constant at B = 0.2, the total number of particles generated increases linearly with time but is independent of A (data not shown).
Finally, in Fig 8 results of the somewhat elementary consideration of different time dependent pressure gradients, ΔP a (t), ΔP b (t) and ΔP c (t) are presented. With both mechanisms of diffusion and convection acting within either a symmetric (triangular) or an asymmetric (sawtooth) tube, one would expect a non-zero mean distribution of particles, with the direction of bias being determined by the sign of the pressure gradient. Naturally, the total particle count does not display any dependence on pressure profile (data not shown), which is a feature that can be utilized to check on the numerics. By construction, both symmetric and asymmetric profiles have zero ordinate intercepts (C 1 (0)) in Fig 8(a) and 8(b). However, for pressure oscillations of ΔP a (t) and ΔP c (t), all cases immediately depart from zero thereafter.
According to our earlier theoretical explanation, with ΔP c (t) the convective field couples with diffusion to promote propagation toward positive z, while with ΔP a (t) the convective field opposes diffusion to result in propagation toward negative z. Although this is captured in the results shown in Fig 8(a) and 8(b), the all but reflective symmetry about the time axis suggests that for the cases of low (A/B = 0.6) or no (A/B = 0) expansion region, for which no recirculation zones appear, diffusion does not play a decisive role, even though it remains an active participant, spreading the particles along the tube (the simulations are terminated when the particles have reached wave-section W ±10 , which indicates that diffusion is still important). It is interesting that compared with the case of a straight cylindrical tube, the presence of an expansion region, whether symmetrically positioned or asymmetrically positioned, enhances the transport slightly (particles reach W ±10 sooner). However, for the case of a significant expansion region, A/B = 2.4, for which recirculation zones appear, the reflective symmetry is broken, suggestive of a more significant cooperation between diffusion and recirculation in the manner described previously, biasing the transport in the direction of the leading edge of the saw-tooth tube (positive z-direction), but delaying transport somewhat (the lines for these cases are terminated at larger T values, indicating that particles reach W ±10 later).
From an experimental perspective, it is significant that both ΔP a (t) and ΔP c (t) are experimentally reasonable temporal functions. The influence of ΔP b (t) is midway between the other two, with C 1 (t) being very close to zero for the symmetric tube systems and even for the low A/B = 0.6 case of an asymmetric tube. For the asymmetric tube with an exaggerated expansion region, A/B = 2.4, the cosine time dependent pressure amplitude is still midway between the other two, but non-zero. The results for the alternative initial state and generating function (data not shown) are qualitatively consistent indicating a lack of dependence on the form of function assumed.
Initial state of a uniform particle distribution We imagine that particle generation has persisted sufficiently long or that conditions have been so manufactured that the tube has become filled with particles to a uniform concentration (c = 1) prior to the application of fluid flow. The system in this uniform condition is assumed the initial state for subsequent convective diffusion calculations; the system and its dynamics are then periodic in space, at all times. Consequently, the following conditions of periodicity apply at any location, z 0 , along the tube: cðr; z 0 ; tÞ ¼ cðr; z 0 þ L; tÞ; ( It is intuitive that when the tube is then subjected to an applied oscillatory hydraulic pressure wave, there will not be any change in the total number of particles in any wave-section (S1 Appendix (Section B)). Superimposing a symmetry argument one would then conclude that for a straight tube as well as for a periodic tube of longitudinally symmetric profile there is no net particle flux in either direction. It does not necessarily follow, however, that no net transport of particles occurs in one or other direction for a periodic tube with an asymmetric profile. To address this question we have undertaken a simulation study of the latter case using, as quantitative measure, the time-averaged particle flux through the throat cross-section at z 0 = 1/2,Ĵ For all asymmetric profile cases considered, varying the strength of convective strength (β), the strength of diffusion (α) and the amplitude of the expansion region in the profile (A, to capture the cases of recirculation and non-recirculation), the time averaged particle flux through the throat cross-section was zero. This means that not only is the total particle number within each wave-section conserved over a period of sinusoidal pressure oscillation (what exits on the right, enters on the left), at this level of approximation there is also no net movement of particles in either direction when averaged over a period of oscillation; what exits on the right during one half of an oscillation, enters again from the right during the other half. Consequently, only under the transient situations considered in and prior to achieving steady state is there net particle movement.
Naturally, this conclusion is based on the model we have employed. It may therefore need revising should an alternative model, that includes higher order particle size effects or nonisotropic diffusion [18], be employed.
Discussion
Our numerical findings show that different transport characteristics are possible depending on the system characteristics. For the reader's convenience we summarize our findings in point form.
1. For the transient case tracking the evolution of an initial distribution of a finite number of particles, the initial distribution plays an important role. Distributed to counterbalance the tube shape, the particles are transported in the direction of the leading edge, predominantly by diffusion. Convection reduces the degree to which this occurs and can redirect the transport in the direction of the trailing edge.
2. For the transient case of an initial distribution supplemented by a steady supply of particles in one wave-section: • With straight cylindrical and longitudinally symmetric tubes: -an oscillatory convective flow field alone will not result in net transport; -diffusion alone will not result in net transport; -net transport is possible, even in straight cylindrical tubes, when both an oscillatory convective flow field and diffusion act. However, for cylindrical tubes and periodic tubes of shallow expansion regions, net transport is dominated by convection.
• With longitudinally asymmetric tubes: -diffusion alone will result in net transport, in a direction that is dependent on the geometry of the tube, i.e., on both the relative and absolute dimensions of the expansion and throat portions. In our set up, the direction of net transport is in the direction of the leading edge of the saw tooth; -regardless of the direction of diffusive particle transport, the superposition of an oscillatory convective flow field driven by a positive (negative) sinusoidal pressure gradient will convect the particles in the direction of the trailing (leading) edge of the tube, opposing (supporting) the flow direction established by diffusion; -diffusion couples with flow recirculation in highly distended, expansion regions of a tube of a given throat radius to modify the degree of transport relative to the case of no recirculation in favor of the direction preferred by diffusion.
• In the case of both asymmetric and symmetric tube profiles: -when both diffusion and convection act, the direction of net transport is strongly dependent on the oscillatory pressure gradient driving the fluid flow.
3. For a tube uniformly filled with particles, the model adopted here predicts that particle number per wave-section is conserved and, moreover, that there is no net transport on average over one period of pressure oscillation.
Our results are arguably conditional on the specific conditions of our simulations, namely tube geometry, initial distribution of particles and finally on the assumed generation of particles in the central wave-section. Although we have demonstrated throughout that qualitative behavior is not dependent on the details of either the initial particle distribution (initial condition) or how particles are introduced in the central section (the generating function)compare Figs 3-8 and Figs B-D of S1 Appendix-it is reasonable to ask whether the observed qualitative behavior itself is predicated on the existence of any generating source of particles.
In Fig 2 we presented results of calculations of the first moment C 1 (t) for the case of no particle generating function, with particles initially distributed in two ways, giving rise to either a zero initial first moment 2(a) or a nonzero first moment 2(b). For tubes with small expansion regions, the convection-free case has the particles diffusing out of W 0 according to Eq (6) in such a way that the first moment decreases linearly (after the first oscillation). In this situation, without a continual source of particles to influence behavior, diffusion subsequently drives more particles in the negative z-direction. The effect of a superimposed convection (under a positive sinusoidal pressure difference, ΔP a (t)) is to shift this trend, in the physical manner discussed earlier, and under sufficiently high flow rates to establish a net negative first moment from the outset. For tubes with large expansion regions, particles diffuse to the right creating a net positive first moment, which is again counteracted, but to a lesser extent, by fluid convection (compare differences between dashed lines and solid lines in Fig 2(a)).
We have not investigated the first initial value problem for any greater number of oscillations due to limited numerical accuracy (dispersed particle concentrations and moment calculations quickly become comparable to the numerical error). Nevertheless, one can reasonably conclude that, while the particular details of a source function ψ(z, t) may not matter qualitatively, the existence of a continual source of particles is itself influential in determining net transport behavior. The follow-up question to ask is: which scenario, with its accompanying physical response, is the most relevant? Presuming that to achieve particle transport, some form of particle reservoir is required, it would seem more appropriate to consider the condition of a source of particles, represented here by ψ(z, t), and the response shown in Figs 3-8. This is certainly the more relevant case to drug infusion in the vascular system [13,14].
In view of our findings, we conclude that there is a strong dependence on experimental design, not only regarding the shape and dimensions of the tubes.
Concluding remarks
Particle transport through macroscopic vessels can occur under the action of gravity (sedimentation) [19,20], of an applied electric field (electrophoresis) [21,22] or of a fluid flow (convection) [23]. We have investigated particle transport in longitudinally asymmetric capillaries assuming the action of both diffusive and convective mechanisms. We have undertaken a study primarily of the influence of tube geometry on magnitude, direction and rate of transport of particles in axi-symmetric tubes of saw-tooth shape. Assuming a physical model of forced diffusion where the convective element, an underlying fluid velocity field, is assumed unaffected by the presence of the particles, we find a range of transport outcomes depending explicitly on tube geometry and applied pressure profile. In particular, we have considered the effect of replacing a pressure gradient that is a simple sine function of time with a negative sine and a cosine time dependent pressure gradient, with important consequences on the direction of preferred transport. One area still to be explored is the effect of the relative differences between characteristic times of particle diffusion, particle convection and temporal oscillation of the pressure gradient. This consideration is with the view to a more detailed study of the influence of recirculation flow in the expansion regions of the tube. In a future publication we hope to report on this aspect, as well as a more direct comparison with experimental results.
Supporting information S1 Appendix. Analysis details and supplementary results. (PDF) | 12,387.2 | 2017-08-25T00:00:00.000 | [
"Physics"
] |
Variations on a theme of Kasteleyn, with application to the totally nonnegative Grassmannian
We provide a short proof of a classical result of Kasteleyn, and prove several variants thereof. One of these results has become key in the parametrization of positroid varieties, and thus deserves the short direct proof which we provide.
Theorem 1. Let G be a planar bipartite graph with N black vertices b 1 , b 2 , . . . , b N and N white vertices w 1 , w 2 , . . . , w N . There is a N × N matrix K with K ij = ±1 if there is an edge from i to j, and K ij = 0 otherwise, such that det K is the number of perfect matchings of G.
We will define a graph with boundary to be a finite graph G with a specified subset ∂G of the vertices of G, equipped with a circular ordering. We call the vertices of ∂G the boundary vertices and the other vertices of G the internal vertices. We define a perfect matching of a graph with boundary to be a collection M of edges of G which contains each internal vertex precisely once, and each boundary vertex at most once. For a perfect matching M , we define ∂M to be the set of boundary vertices contained in M . For a subset I of ∂G, we define D(G, I) to be the number of perfect matchings M of I with ∂M = I. We define a graph with boundary to be embedded in a disc if G is embedded in a closed planar disc D such that the the vertices of ∂G lie on ∂D, in their specified order.
The first variant of Kasteleyn's result which we want is: Theorem 2. Let G be a bipartite graph with boundary embedded in a disk, such that all of the boundary vertices are white. Let there be N + k black vertices b 1 , b 2 , . . . , b N +k , let there be N internal white vertices w 1 , w 2 , . . . , w N and let there be n boundary vertices w N +1 , . . . , w N +n , with that circular order. Then there is an (N + k) × (N + n) matix K with K ij = ±1 if there is an edge from i to j, and K ij = 0 otherwise, having the following property: For any k element subset I of ∂G, let K I be the submatrix of K using all rows, the first N columns, and the additional k columns indexed by I. Then D(G, I) = det K I .
This will then imply
Theorem 3. Let G be as in Theorem 2. Then there is a k × n real matrix L with the property we now describe: For any k element subset I of ∂G, let L I be the submatrix of L using all rows and the columns indexed by I. Then D(G, I) = det L I .
In particular, all maximal minors of L I are nonnegative. As we will point out explicitly in Corollary 3.4, that means that the n k numbers D(G, I) are the Plücker coordinates of a point in Postnikov's totally nonnegative Grassmannian.
Kasteleyn also proved a version of his theorem for graphs which are not bipartite: if there is an edge from i to j, and X ij = 0 otherwise, such that the Pfaffian Pf(X) is the number of perfect matchings of G.
We will prove this and the corresponding results: Theorem 5. Let G be a planar graph with boundary embedded in a disc, having N internal vertices v 1 , v 2 , . . . , v N and n boundary vertices v N +1 , v N +2 , . . . , v N +n in that circular order. Then there is an (N + n) × (N + n) skew symmetric matrix X with X ij = −X ji = ±1 if there is an edge from i to j, and X ij = 0 otherwise having the following property: For any subset I of ∂G, let X I be the submatrix of X using the first N rows and first N columns, and additionally those rows and columns indexed by I. Then D(G, I) = Pf(X I ).
Theorem 6. Let G be as in Theorem 5 and assume D(G, ∅) > 0 (which implies that N is even). Then there is an n × n skew symmetric real matrix Y with the following property: For any subset I of ∂G, let Y I be the submatrix using the rows and columns indexed by I. Then D(G, I) = Pf(Y I ) D(G, ∅) for all subsets I of ∂G.
Remark. We take the Pfaffian of an odd by odd skew symmetric matrix to be zero, so the theorems are true but trivial in the cases that involve such Pfaffians.
Remark. Theorem 6 shows that there are many skew-symmetric matrices all of whose Pfaffian's are nonnegative; it would be interesting to develop analogues of classical results on nonnegative matrices for Pfaffians.
All of these theorems have easy variants where there are weights on the edges of G. Specifically, let w : Edges(G) → R >0 be any weighting function. For a perfect matching M , we define w(M ) = e∈M w(e); we define D(G, I, w) = ∂M=I w(M ). Then the corresponding results are (1) In the setting of Theorem 1, there is a matrix or 0 as above such that Pf(X I ) = D(G, I, w). (6) In the setting of Theorem 6 (including the hypothesis that D(G, ∅, w) > 0), there is a real skewsymmetric matrix Y such that D(G, I, w) = Pf(Y I )D(G, ∅, w). In particular, part (3) shows that the n k numbers D(G, I, w), as I varies, are the Plücker coordinates of a point in the totally nonnegative Grassmannian. If we fix G and let w vary over all possible weightings of the edges of G, we thus obtain a parametrization of a portion of the totally nonnegative Grassmannian. This parametrization of the totally nonnegative Grassmannian is the one found by Postnikov [9], who described it in terms of certain random walks. Talaska [12] recast Postnikov's formulas in terms of flows. Postnikov, Williams and the author [10] implicitly pointed out that this was equivalent to summing over matchings. Lam's lecture notes [7] make the point explicit. As the positive Grassmannian and its parametrizations grow more popular, the author feels that there should be a brief paper which records a direct proof that w → (D(G, I, w)) I∈( [n] k ) parametrizes a portion of the totally nonnegative Grassmannian. The author would like to express his gratitude to Jim Propp for introducing him to Kasteleyn's method, Kuo's condensation theorem, and the pleasures of combinatorial research.
A topological proof of Theorems 1, 2, 4 and 5
The key to our proof is to prove a more general result for non-planar graphs. Let G be a general graph. We will define a planar immersion of G to be a continuous map φ : G → R 2 such that each edge of G is taken to a line segment and, for any edge e, and any vertex v not an end point of e, the point φ(v) is not contained in φ(e). We point out explicitly that a line segment has positive length; a single point is not a line segment.
For G a graph, φ : G → R 2 a planar immersion and M a perfect matching of G, we define cross(M ) to be the number of unordered pairs {e 1 , e 2 } of distinct edges of M for which φ(e 1 ) and φ(e 2 ) intersect. We set ǫ(M ) = (−1) cross(M) . Proposition 1.1. Let G be a bipartite graph with N black vertices b i and N white vertices w j , equipped with a planar immersion φ. Then there is an N × N matrix K ij , with K ij = ±1 if there is an edge from b i to w j , and 0 otherwise, such that det K = M ǫ(M ).
Proof. We first note that the theorem is easy if all the black vertices φ(b i ) lie in order on a line and all the white vertices φ(w j ) lie in order on a parallel line; just take K ij = 1 whenever there is an edge (b i , w j ). We must check that ǫ(M ) is the sign coming from the determinant. Let (b 1 , w σ(1) ), (b 2 , w σ(2) ), . . . , (b N , w σ(N ) ) be a perfect matching M , so σ is a permutation. The number of crossing edges of M is the number of (i 1 , i 2 ) such that i 1 < i 2 and σ(i 1 ) > σ(i 2 ). So cross(M ) is the number of inversions of σ, and ǫ(M ) is the sign of σ, as desired.
Given any point z ∈ (R 2 ) Vertices(G) , there is a map φ z : G → R 2 which sends a vertex v to z(v) and sends each edge to a line segment or point. Let Ω be the set of z for which φ z is a planar immersion.
We note that Ω is an open subset of (R 2 ) Vertices(G) ∼ = R 4N , obtained by deleting the codimension 1 subvarieties on which certain triples of vertices become colinear. Let Ω ′ ⊃ Ω be the open set where we impose that all the vertices have distinct images z(v), and also that the interiors of φ z (e 1 ) and φ z (e 2 ) do not overlap in a line segment for any distinct edges e 1 and e 2 . So Ω ′ is obtained from R 4N by deleting the codimension two subvarieties on which pairs of vertices become equal, or on which certain quadruples of vertices become colinear.
In particular, since Ω ′ is R 4N with codimension two subvarieties removed, Ω ′ is connected. Let z 0 be a point of Ω corresponding to the immersion in the first paragraph, so Proposition 1.1 is true at φ z0 . Let z 1 be any point of Ω, and choose a path z(t) from z 0 to z 1 through Ω ′ . We will show that the Proposition is true for every φ z(t) with z(t) ∈ Ω.
As we travel along z(t), the only changes of the topology of the embedding occur when a vertex v passes through an edge e. Let ǫ 1 and ǫ 2 denote the sign functions for the two topologies. We claim that ǫ 1 (M ) = −ǫ 2 (M ) if e ∈ M , and ǫ 1 (M ) = ǫ 2 (M ) otherwise. This is because M has exactly one edge incident to v. This edge crosses e in one topology and not the other, and no other crossings change. So, if K is a matrix which works for the first topology, then we can obtain a matrix for the second topology by switching the sign of the entry corresponding to edge e. Since we have a matrix which works at z 0 , we obtain a matrix which works at any z.
Proof of Theorem 1. Fary [1] showed that any planar graph can be drawn so that the edges are straight lines. Drawing G in this manner, the function ǫ is simply 1 and we obtain Theorem 1.
Remark 1.2. We could avoid appealing to Fary's theorem by using piecewise linear edges and adding additional coordinates for the positions of the bends in these edges. This creates a few new cases, but no significant difficulties. Remark 1.3. Norine [8] used planar immersions to provide a characterization of Pfaffian graphs, and thus implicitly to provide a proof of Kasteleyn's theorem, but did not discuss deforming the immersion. The author previously posted a sketch of this argument on Mathoverflow [11]. The author is not aware of any other prior sources for this argument.
We have proved Theorem 1. Slight variants of this argument prove Theorems 2, 4 and 5. For Theorem 2, we fix a closed rectangle D and consider immersions G → D taking ∂D → ∂G in the specified circular order. Our starting point is that the black vertices occurs in order on the top edge of the rectangle and the black vertices occur in order on the bottom edge. The fact that D is convex ensures that For Theorems 4 and 5, we proceed similarly. Our starting point is now to take all the vertices φ(v) to lie on the boundary of a circle, and recall that one way to describe the signs occurring in the Pfaffian is as the number of crossings when a matching is drawn in this manner.
Proofs of Theorems 3 and 6
We begin with Theorem 3. Let K be the matrix from Theorem 2. If G has no perfect matchings, the Theorem is immediate; take L = 0. So we may assume that G has perfect matchings, and thus that det K I = 0 for some I. In particular, the first N columns of K are linearly independent.
Therefore, applying row operations, we may transform K into a matrix of block form Id N * 0 L without changing any maximal minors. Then, in the notations of Theorems 2 and 3, we have det K I = det L I . This proves Theorem 3. The proof of Theorem 6 is similar. If G has no perfect matchings, take Y = 0. Otherwise, the upper left N × N submatrix, X ∅ must be of rank N . So we can write Left and right multiplying X by
Remark 3.3. Many authors have pointed out that Corollaries 3.1 and 3.2 and similar identities follow from properties of Pfaffians. (See [13], [2], [4].) It seems uncommon, though, to observe that the relations that occur when deleting subsets of n boundary vertices are precisely the relations between the Pfaffians of an n × n matrix.
There is no need to limit ourselves to 2 × 4 matrices. More generally, we deduce the following result: Corollary 3.4. Let G be as in Theorems 2 and 3. Then the n k numbers D(I), as I ranges through k-element subsets of ∂G, are either all zero or the Plücker coordinates of a point on the Grassmannian G(k, n).
Proof. By definition, the Plücker coordinates of RowSpan(L) are the maximal minors of L, assuming L has rank k.
As explained in the introduction, this last result is key to parametrizations of the totally nonnegative Grassmannian. | 3,542 | 2015-10-13T00:00:00.000 | [
"Mathematics"
] |
Pascal’s Wager and Its Postmodern Counterpart
Pascal’s Wager is probably the most analysed apologetic argument in the history of apologetics. What has often been the case, however, is that this piece of Pascal’s Pensées has often been misinterpreted and taken out of the Pascal’s total apologetic work. For that reason, the Wager has been misappropriated and has undergone a battery of misplaced criticism. Taken in its proper context, the Wager is a beautiful vindication of the Christian faith, cleverly constructed to make the sceptic re-think his position and contemplate the importance of the Christian faith. Much confusion exists about the placement of this particular Pensées, and where it is situated in his overall apology (Pensées 418) lends itself to the challenge of what has become “the Many Gods Objection.” For that reason, I would suggest that Pascal’s Wager belongs at the very beginning of his Pensées, where the rest of the Pensées are an explanation for the reason Christianity is the most attractive belief. Postmodern philosophers have re-appropriated the Wager and made it fit their own philosophical and theological presuppositions playing in the hands of the “Many-Gods-Objection.” This paper describes the beauty of Pascal’s Wager in its proper context and expresses the erroneous postmodern appropriation of the Wager.
Introduction
Pascal's apology will always be associated with his famous Wager.His Pensées might long be ignored, but this section of his Thoughts has been remembered and analysed for more than three hundred years.As David Wetsel (1994: 248) exclaimed, "No other single passage in the Pensées has generated more commentary than the 'infini/rien' fragment, popularly known as 'the Wager.'"Several issues must be taken in consideration when scrutinizing Pascal's Wager.Firstly, it would be misguided to evaluate and interpret Pascal's wager in isolation from the rest of his Pensées.Secondly, we must keep in mind the audience of Pascal when he proposed the religious wager.Ignoring the overall context in which the Wager is placed can lead to a skewed interpretation such as presented by Slavoj Zizek (2010: 136-140) in his article "The Atheist Wager," in which he interprets and critiques Pascal's wager in isolation, ignoring all other Pensées regarding the existence of God and the proofs that Pascal presents in subsequent fragments.Above all, because of its emphasis on the existential aspect of religion, the Wager still speaks to a contemporary audience well over 350 years after it was first written.The Wager argument has been wrongly appropriated by postmodern theologians to further their theological viewpoints, playing in the hands of the most ardent objection to Pascal's Wager, the so-called "Many Gods Objection." Although a reference or an allusion to some kind of wager was not original (cf. Ryan, 1994: 11-18), Pascal was the first who explicitly and elaborately used the wager as an apologetic tool.In Pascal's time nine versions of the wager argument were in currency and Pascal simply adapted a model for his own purposes (Hazelton in Van Vliet, 2000: 53-54).Others after Pascal, such as John Tillotson have made use of the wager concept to convince the sceptic that being a Christian is overall far more advantageous than holding the position of scepticism or even atheism.The Archbishop of Canterbury in his work The Wisdom of Being Religious contends that venturing into the Christian faith is far more propitious, for "he [the Christian] is inwardly more contended and happy, and usually more healthful, and perhaps meets with more respect and faithfuller [sic] friends, and lives in a more secure and flourishing condition" (Tillotson, 1819: 108-109).
Apologists have been known to be overly concerned with proving or demonstrating the existence of God using a variety of pieces of evidence.Pascal, in his Wager, was not at all concerned with establishing an apology for the existence of God, however.Pascal's Wager and his subsequent elaboration in the remainder of his Pensées as a vindication of the Christian faith, bore down to existential aspects that spoke to the gambling libertine and, as a matter of fact, can still speak to a contemporary audience.Sister Marie Louise Hubert (1973:69) aptly notes, "With the firm conviction, mingled with sympathetic understanding, which resulted from his own religious experience, Pascal realized that something more dynamic was needed in order to reach the heart as well as the mind of the libertines."This more dynamic approach, which is not only employed in the Wager but also in the rest of the Pensées, stresses the need for "happiness and welfare, temporal as well as eternal" (Hubert, 1973: 69).
Much research has been done to discover the true meaning of Pascal's Wager and many objections have been levelled to discredit it.Books are devoted to explaining in-depth the rationale, whether philosophical or existential, behind this famous fragment (cf.Jordan, 1994Jordan, , 2006;;Rota, 2016), and some have confused us more by providing philosophical formulae (c.f.Jordan, 2002) or mathematical equations (c.f.Adamson, 1995).A definitive and clear understanding regarding the Wager seems illusive, mainly because this fragment appears to be situated in the Pensées somewhat disconnected from the rest of the fragments.Michel and Marie-Rose Le Guern suggest that the Infini/Rien (or Wager) fragment forms a self-contained unit as an independent apology (Le Guern in Wetsel, 1994: 244).It is understandable that speculations have been made regarding the placing of the Wager in Pascal's overall apologetic scheme because the arrangement of Pascal's Thoughts has mostly been dependent on different editors.But to propose that fragment 418 is an independent apology must be dismissed.On the contrary, it can be argued that the Wager could be posited early in Pascal's entire Apology, and that the rest of the Pensées is an elaboration and a clarification of the reason why wagering on God is reasonable.David Wetsel (1994: 275) agrees and has come to the same conclusion, "As I see it, we should perhaps best think of the wager fragment as a kind of prelude to the Apology sketched by the dossier of 1658.Perhaps it is a kind of lure, intended to draw a certain kind of unbeliever into the chapters that will follow."This conclusion makes the most sense because the remainder of Pascal's Pensées clarifies the content of the Wager.
The Wager's Content and Its Original Audience
At first blush, Pascal's Wager suggests a questionable proposal and reveals some dubious theological propositions.At closer examination, however, we discover that Pascal is entirely consistent in his overall scheme of thinking as his Pensées indicate.What becomes clear is his deep desire to promote the Christian faith to his interlocutor.Speculations abound, however, regarding the type of interlocutor(s) Pascal addressed in his Wager.David Wetsel spends an entire chapter in his book Pascal and Disbelief speculating the nature of Pascal's interlocutor.There are those who wonder if the audience could be the libertine, as addressed above or the sceptics like Montaigne (Henri Gouhier and Paul Bénichou in Wetsel, 1994: 248 respectively).Sister Marie Louise Hubert (1973;14) in her work Pascal's Unfinished Apology suggests as well that Pascal's audience indeed consists of libertines who are indifferent to religion, but well-acquainted with the Christian faith.These libertines are addressed in Pensées 427, where they despairingly note, "Just as I don't know whence I come from, so I don't know whither I am going.All I know is that when I leave this world I shall fall for ever into nothingness or into the hands of a wrathful God…" (Pascal, 1966:158).It is more than likely that the interlocutor is "the one who is starting to seek God or who at least is unhappy" (Wetsel, 1994: 248).One thing is clear when reading the Wager the interlocutor is not an ardent atheist or comfortable unbeliever.The person (or persons), whom Pascal addresses has a listening ear and is somewhat familiar with Pascal's theological position and, although rebutting the proposal of Pascal, the interlocutor is interested in hearing more of what the apologist has to say.The manner in which Pascal addressed the speculative gamble suggests that the interlocutor is one that he is familiar with from his, so-called, "worldly period", which was marked with selfishness, pride and materialism (Krailsheimer, 1966: 14).It appears that Pascal had intimate knowledge of the mindset of the gambler who, in all appearances, enjoyed a carefree lifestyle but was utterly unhappy and was bound to search for the truth.
In consistent fashion, Pascal begins his Wager denouncing the high place of reason in matters of religion.On the one hand, the French apologist stresses that no rational demonstrable proofs of God's existence are available, yet on the other hand he emphasizes that betting on God is most reasonable.The Wager goes directly to the heart of the matter.It does not deal with the existence of God, as a matter of fact it does not intend to prove God's existence at all, but it deals with the existential implications of the meaning and purpose of life.In other words, the Wager deals with the question we all have to answer at one point in the context of our own mortality.The urgency of this question was not only pertinent in Pascal's day but is still very much the question the contemporary mortal has to deal with as well.Whether we want to admit it or not, there exists a human urge to search for the truth and God.The fact that humankind has a God-shaped vacuum leads all of us to contemplate the concern for the human telos.Pascal keys in on this human desire for truth that can only be found in the infinite God.His argument runs counter to the nihilistic outlook on our existence as proposed by the likes of Nietzsche and the relativistic sense of reality of the pragmatic postmodern.In all, the Wager appeals to man's desire for true happiness, which comes through the affections rather than through reasoning.Thus fragment 418 does not set out to produce an apologetic argument for the existence of God or to prove "the real possibility of God, but rather to set people on fire to seek God" (Gilbert, 2011: 211).
In his Wager, Pascal continues by stressing that, humanly speaking, it is impossible for the finite (human) to comprehend the infinite (God), as in accordance to the rest of his Pensées.Asking then for rational grounds or proofs for the Christian's belief is futile for there are none, according to Pascal.We must not conclude from this that Pascal is admitting the irrationality of the Christian faith; all Pascal is saying that proofs like scientific formulae are not available to convince the unbeliever of the truth of Christianity.He (1966: 150) admits that "reason cannot make you choose either, reason cannot prove either wrong."In what follows, Pascal invites the gambler to make a choice, "either God is or he is not."The essence of the gamble is found in fragment 387: "I should be much more afraid of being mistaken and then finding out that Christianity is true than of being mistaken in believing it to be true" (Pascal, 1966: 143).Pascal appeals to the psychological element of the Christian belief; the stakes are infinitely high: relief from your wretched state and receive ultimate happiness in this life and the next or remain in your current state of wretchedness and receive your due reward.
The third century apologist , who made use of a wager to convince the unbeliever of the truth of Christianity, was far less subtle in pointing out the high stakes to the heathen in his Adversus Gentes, where he (2004: 457) states, "Your interests are in jeopardy, -the salvation, I mean, of your souls; and unless you give yourselves to seek to know the Supreme God, a cruel death awaits you when freed from the bonds of the body, not bringing sudden annihilation, but destroying by the bitterness of its grievous and long-protracted punishment."Although Pascal never explicitly mentions the aspect of punishment in fragment 418, the loss incurred when ignoring the wager is clear.He refuses to use the fear of punishment as the sole motivator for the gambler to take the wager; Pascal points out what the gambler has to gain and allows him to weigh the gains against the losses, all the while making sure that the gains are infinitely more than the losses.
Peter Kreeft (1993: 297) explains it as follows, The Wager can easily be recast to appeal to a higher motive than the fear of Hell.One could wager as follows: if God exists, he deserves all my allegiance and faith.And I don't know whether he exists or not.Therefore, to avoid the terrible injustice of refusing God his rights, I will believe.Thus, we can simply substitute the 'high' motive of love (giving God his due) and fear of injustice for the love of Heaven and the fear of Hell, and everything in the Wager remains unchanged.
Not surprisingly, the interlocutor suggests not to make the gamble, "…the right thing is not to wager at all" (Pascal, 1966: 150).Pascal cleverly points out that the gambler must wager, for by not wagering he is already committed.One cannot remain indifferent or neutral, the agnostic has already made his bet against God; simply not to wager is not an option.Peter Kreeft (1993: 299) rightly points out, "The option of agnosticism is closed to us, not by thought but by life -or rather, by death."Now that the interlocutor has been made aware of his obligation to make a choice, Pascal offers him a risk assessment.As a gambler, the interlocutor is familiar with the bets he takes on a regular basis; risk assessment is something every gambler intentionally participates in.Pascal (1966: 151) assesses the gamble as follows, "if you win, you win everything, if you lose, you lose nothing." Afraid and unsatisfied, the gambler fears he is still wagering too much, still depending on his reason to assess the tangible benefits that the gamble should be giving him; he needs to place his bet "in accordance with a certain calculation, a calculation that can be represented by a simple formula for the determining what can be called Expected Value: (EV): (Probability x Payoff) -Cost = Expected Value" (Morris, 1992: 112).Pascal offers the Expected Value as an infinitely happy life to be won when choosing God.For him, the gamble is reasonable for the reward is obviously immeasurable.Christianity offers eternal happiness, therefore you gain everything and lose nothing, while if you do not believe you gain nothing and you lose everything; in atheism there is no eternal bliss, only nothingness at death.
There is also a crass psychological edge to the gamble for Christianity as opposed to the gamble against: on the one hand, if Christianity is true, then after death the Christian will have the satisfaction of knowing he was right; if he were to lose he will never discover that he was wrong.On the other hand, the atheist, if he loses will be consciously aware of the fact that he was wrong; if he wins the bet, he will never discover that he was right because of his extinction at death.Pascal offers a gamble worth taking.
One thing must be made clear; Pascal does not offer his interlocutor an irrational leap into the dark, as if evidences do not play a crucial role in the Wager.These evidences, however, are not and cannot ever be the determining factors in considering Christianity.In fragment 835, Pascal (1966: 286) clarifies the role of these evidences explaining, "The prophecies, even the miracles and proofs of our religion, are not of such a kind that they can be said to be absolutely convincing, but they are at the same time such that it cannot be said to be unreasonable to believe in them."The gambler cannot blame Pascal for making an irrational choice, but as is true with any stubborn unbeliever, the interlocutor insists on making excuses.First, he blames Pascal for not seeing what the cards are before making the gamble.Pascal (1966: 152) responds by giving reasonable proofs such as "Scripture and the rest, etc."He does not elaborate on the "rest" of these evidences, for the gambler probably has some knowledge to what these are.Not satisfied by this, the interlocutor resorts to blaming God for his unbelief, complaining that, "I am being forced to wager and I am not free; I am being held fast and I am so made that I cannot believe."Again, the gambler has a somewhat skewed theological knowledge regarding Pascal's notion of predestination; how can a person believe if he is not chosen?Pascal knows very well what his interlocutor is trying to do, and responds swiftly by turning the tables on him.He calls out the gambler for trying to conjure up enough evidences so as to make an airtight choice based solely on reason alone.Pascal appeals to the centre of belief and unbelief: the heart.He blames the gambler's unbelief on his own passions.Pascal (1966: 152) asserts, "Concentrate then not on convincing yourself by multiplying proofs of God's existence but by diminishing your passions."The interlocutor believes that when he has faith he will give up his passions, but Pascal turns this around and posits that he must give up his passions and then faith will come.Fragment 816 is clear on this issue, "'I should soon have given up a life of pleasure,' they say, 'if I had faith.'But I tell you: 'You would soon have faith if you give up a life of pleasure.Now it is up to you to begin.If I could give you faith I would.But I cannot, nor can I test the truth of what you say, but you can easily give up your pleasure and test whether I am telling the truth" (Pascal, 1966: 273).Implied here is not that the seeker is able to give himself faith, or any other created being, for only God can give him faith, but earthly pleasures prevent him from accepting his faith.
Pascal's proposal seems somewhat ambiguous and theologically dubious, and many have speculated on what Pascal means when he implies that to be cured of unbelief one should act as if he believes.On the surface Pascal seems to indicate that acting religiously can produce faith.Some have suggested that Pascal reverses the Augustinian, Anselmian and Calvinistic credo of "faith seeking understanding" into "understanding seeking faith" (cf.Hartle, 2017: 22).We must conclude, however, that Pascal was far too Augustinian to make that reversal, and that this would destroy Pascal's entire theological impetus and would again put the onus on man's reasonable capacity (cf.Pascal, 1966: 34, 85, 138, 149-153, 248).At closer examination, Pascal is not inconsistent in his theology, but he is also not ignorant of the fact that outward religious actions conjure up inward affections; outward actions cannot be divorced from inward affections.Pensées 944 clearly states, "We must combine outward and inward to obtain anything from God; in other words, we must go down on our knees, pray with our lips, etc., so that the proud man who would not submit to God must now submit to his creature.If we expect help from this outward part, we are being superstitious, if we refuse to combine it with the inward, we are being arrogant" (Pascal, 1966: 324).As with faith, the heart and the mind cannot be separated, so as well, outward actions must accompany inward affections.The serious seeker, according to Pascal, is able to overcome the affliction of unbelief by, firstly, observing the actions of the Christian and, secondly, by attending church services and studying the rituals as tangible tools of instruction.Pascal refers to this practice in Pensées 427, where he charges the unbeliever for religious ignorance and his lack of effort to seek what the Church has to offer by way of instruction.For him, Christian belief and the practice of that belief in the Church are inseparable.
The proposition to "act as if you believe" as suggested by Pascal (1966: 152) is closely connected to the worldly passions the libertine gambler is caught up in that prevents him from believing.Pascal calls for a certain openness of mind: associate with the believer, imitate the believer and attend the religious services that confirm Christian belief.Pascal calls the interlocutor to leave his passions that aggravate his unbelief.It becomes clear that Pascal's interlocutor wants to believe but does not want to leave his life of worldly passions.Pascal encourages the gambler to avoid the dispositions that lead him to unbelief, and "to act as if you believe" by associating themselves with believers.It seems here that Pascal calls upon the "implicit" faith that Calvin (1960: 545) warned about, where the believer might share "implicitly" by his trust in the church, "understanding nothing but submitting his feeling obediently to the church; calling the believer to the teaching of the church without the benefits of understanding the meaning of that teaching."This would be a fair criticism, for Pascal was, and remained a true Roman Catholic, but Pascal does not leave his interlocutor without the eventual understanding of what he may get himself into.We can also interpret Pascal's imperative using Calvin's (1960: 547) definition of "implicit" faith, where the observation and participation of ecclesiastical "rituals" can be seen as implicit faith as a preparation of faith.Pascal would never deny Calvin's (1960: 547) proposition that "faith consists in the knowledge of God and Christ."The remainder of the Pensées can attest to this proposition and is devoted to the clarification of true faith in Christ.
To be a Christian, or a seeker for that matter, and neglect the instructions and the rituals that the Church has to offer is incomprehensible.Thus, the suggestion of Pascal "to act as if you believe," is not too far-fetched.In a similar vein, C.S. Lewis instructs his listeners, regarding the case of charity, to begin by acting to "love" the neighbour even if it does not come easy.He (2000: 131) states, "As soon as we do this we find one of the great secrets.When you are behaving as if you loved someone, you will presently come to love him".He continues to explain that this outward action of love must accompany the inward affection that the object of our love is a person made by God.In other words, just like Pascal, C.S. Lewis encourages the action without compromising the affection that must accompany it.
Pascal poses the crucial question to his interlocutor regarding the benefits of choosing Christianity and deciding to follow his advice asking: what harm has come to you from choosing to take the aforementioned course of action?The French apologist (1966:153) quickly answers his own question and lists the benefits gained: being a "faithful, honest, humble, grateful, and full of good works, a sincere and true friend."This runs in total opposition to the choice of agnosticism or indifference.In fragment 427, Pascal (1966:159) states, "Now what advantage is it to us to hear someone say he has shaken off the yoke that he does not believe that there is a God watching over his actions, that he considers himself sole master of his behaviour, and that he proposes to account for it to no one but himself?Does he think that by doing so he has henceforth won our full confidence, and made us expect from him consolation, counsel and assistance in all life's needs?"For Pascal, not choosing God has left the person without respect and his counsel should be disregarded.As a matter of fact, no self-respecting person would even ask life counsel from those who have willingly disregarded the God of the Bible.This is a serious indictment but it shows the seriousness and apologetic fervour that Pascal possesses.He ends his Wager assuring his interlocutor that he has been where they are at and so convinces the gambler that the Wager is rationally compelling and reasonably plausible.For the seventeenth century seeker there were few options: Christianity or atheism.In a time of numerous options when Christianity has less and less credibility, is the Wager still a viable option and would a postmodern Millennial still heed the advice of Pascal to consider Christianity?
The Postmodern Appeal to the Wager
In contemporary postmodern thought, there is a particular attractiveness to the Pascalian Wager and the apologetic method of Pascal as a whole, mostly because of Pascal's appeal to the affections.Another reason that postmodernists gravitate to the Wager is its seeming avoidance of any exclusive religious claim.One of the characteristics of postmodern theology is the call to either pluralism or inclusivism; Christianity is just the religion of choice, one among a myriad of choices.It is not a matter of ultimate truth but a matter of religious preference.In other words, Pascal's Wager just happens to ask one to wager on Christianity but one could just as well wager on any other available religion.This sentiment would, of course, have been anathema to Pascal because it runs counter to Pascal's intention of convincing unbelievers that Christianity and not any other religion, is the most attractive option.The main objection to Pascal's Wager, what is best known as the "Many-Gods Objection."Jeff Jordan (1994: 101) describes the many-Gods objection as follows, "The range of betting options is not limited solely to Christianity because one could formulate a Pascalian wager for Islam, certain sects of Buddhism, or for any of the competing sect found within Christianity itself."In the entire scheme of Christian apologetics, the many-Gods objection would have been a formidable objection were it not for the fact that we must read Pascal's Wager as prelude to his whole Apology.
For the postmodern, religious adherence cannot offer any certainty and must always be approached with a certain level of scepticism.In the minds of many, the Wager proposes a religious option without offering the modernistic certainty, so prevalent among modern apologists.Thus, another aspect of the Wager that might be attractive to the postmodern is the matter of uncertainty, which is the postmodern epistemic distinctive.Initially, therefore, the Wager might speak to the mind of the "postmodern libertine," but a clarification of the essence of Pascal's apology must be given before it can take a foothold in the mind of the contemporary listener, not unlike the seventeenth century libertine gambler in the days of Pascal.
The postmodern rejects the aspect of reason as determining factor in all cases of knowing.Whereas Christian thought might stand in the postmodern position when seeking to expose the pretensions of the modernist precept of autonomous and objective human reason, it must avoid disregarding the use of reason in religious knowledge.The driving force of postmodern epistemology, especially in the case of religious knowledge, is the existential impulse.Here is where the postmodern has entirely minimalized the concept of reason in religious knowledge.Pascal, without a doubt, abhorred the use of autonomous reason in apprehending God, and he made sure that he downplayed the initial use of empirical evidence to come to the knowledge of God he did not, and would never have neglected reason.We can contend that Pascal was not averse to the evidence, and thus the use of reason, in religious apprehension, for a large segment of his Pensées is devoted to giving enough evidence, although secondary as they may be, to make the Christian faith reasonable.Ultimately, giving evidence or proofs was not what Pascal had in mind when he presented his wager.Sister Hubert (1973: 70) asserts, "…Pascal intended the wager argument to be, an exhortation, not a proof…it served as the preliminary step to their acceptance of the proofs based on scripture which were to form the substantial part of Pascal's Apology of the Christian religion."Nicholas Rescher makes an acute observation that disparages the accusation of inconsistency and clarifies Pascal's position by making the distinction between the use of "evidential" reason and "practical" reason, employed by Pascal in his Wager.Rescher (1985: 44) suggests, For two very distinct species of 'reason' are at issue in Pascal -the evidential that seeks to establish facts (and in his view entirely inadequate to the demands of apologetics) and the practical that seeks to legitimate actions (and can indeed justify us in 'betting on God' via the practical step of accepting that he exists).The heart too has its reasons.Only by blithely ignoring this crucial distinction between evidentially fact-establishing and pragmatically action-validating reason can one press the charge of inconsistency against Pascal. Rescher (1985: 45-46) continues to explain that when evidence fails to settle the issue and when waiting for the evidential situation to change is not a viable option, one must make a decision one way or the other, for suspending any judgment might prove catastrophic.The best available course must be considered in these circumstances but must still be done under the guidance of rational considerations.Pascal is perfectly consistent in his use of practical reason throughout his Wager.Betting on God is the reasonable thing to do on rational reasons when evidential reasons are insufficient.
When the more moderate postmodern accepts a semblance of the Christian faith or any other religious faith practice but neglects the aforementioned distinction mentioned by Rescher, and equates the wager with a leap, he is in danger of falling prey to extreme fideism, something Pascal (1966: 76-80) did not succumb to.Alvin Plantinga (2000: 87-88) makes the distinction between the extreme fideism where reason and faith are in conflict and the fideism of the Reformed epistemologist.Pascal could be counted among the latter, where faith is placed over against demonstration but not over against knowing.Although the existential impulse might engender postmodern interest in the Pascalian wager, the postmodern assumptions that degrade the true meaning of Pascal's Apology must be taken seriously.Pascal might have the ear of the postmodern and points of contact are present in their interpretation, but we must avoid seeking too close of an affiliation with and allegiance to the postmodern wager; a careful Pascalian corrective can and must be applied.When taken in isolation, the Wager can be interpreted with other religions in mind.Postmodernists have indeed done so and have bastardized Pascal's intentions and have grossly missed his Christian apologetic intentions by applying postmodern philosophical influences.
In addition to the existential impulse that incites interest in the concept of wager for the postmodern, the matter of uncertainty is another factor favourable to postmodern interest.After all, uncertainty is one of the hallmarks of postmodern thought which repeats Nietzsche's (2015: Loc. 12954-12960) statement that "men prefer the uncertainty of their intellectual horizon."The beauty of faith, according to postmodern theology which is guided by the hermeneutics of suspicion, is the lack of absoluteness and certainty (cf.Kearney, 2011:7).Keeping this in mind, the postmodern philosopher Brian Treanor (2010: 558) holds out the hope that "by returning to the deep ground that necessitates the wager, we can recover faith, 'returning' to a second innocence, one still open to the surplus of meaning found at the wellspring of faith, but without the ignorance of the first."Treanor refers here to Richard Kearney's antitheism, better described as a return to faith from the faith; "faith as an accident of our birth to a more mature faith that frees us from the limitations of our first naiveté" (Treanor, 2010:558).According to both Kearney (2010: 8) and Treanor (2010: 546-559), this requires an antitheistic wager, which is "marked by a moment of radicalized 'innocence' that opens the door to ulterior dimensions of truth.
Richard Kearney describes this wager more in detail in his work Anatheism: Returning to God After God.In this work, Kearney (2010: xvii) points out two aspects of the wager -the philosophical and the existential.According to Kearney, the Pascalian wager is charged with calculation, blind leaps and even fideism.In other words, Kearney erroneously charges Pascal with proposing an existential wager, which ultimately results in an existential "leap" not unlike Kierkegaard's.Kearney and other postmoderns, so they claim, adhere to an existential wager that solicits fidelity and is based on imagination and hospitality (Kearney, 2010: xvii).According to Kearney our lives consist of making wagers and religious wagers are no exception.Upon closer examination, however, we discover that Kearney has misinterpreted Pascal's views and unlike Pascal's Wager, where the choice and the object of the wager is made abundantly clear, his wager is far more ambivalent and does not point directly to God but to a "God" of our own choosing.Kearney (2010: 7)
explains,
The ana signals a movement of return to what I call a primordial wager, to an inaugural instant of reckoning at the root of belief.It marks a reopening of that space where we are free to choose between faith or nonfaith…Anatheism, in short, is an invitation to revisit what might be termed a primary scene of religion: the encounter with a radical Stranger who we choose, or don't choose, to call God.
Here it becomes clear that it is Richard Kearney who proposes an existential wager that contradicts Pascal's wager of which the object is not a Stranger who we "choose" to call God but is the God of the Bible.Kearney (2010: 30) continues to explain that these encounters with the Stranger are not new but have occurred all throughout history.He cites Abraham's encounter with God in Genesis 18, but also, and just as legitimate Muhammad's encounter on the summit of Mount Hira, which Kearney describes as the "Islamic wager."In other words, the wager that Kearney describes is a religious existential wager regardless of the object of the wager.For the postmodern, the attractiveness of the wager is in exactly the reasons Kearney describes: the option to choose to wager on the god of one's own liking.
Conclusion
When we consider the Wager in isolation, the objection seems quite legitimate.Pascal's Wager, however, must be regarded as a primer where the reasonableness of Christianity will be spelled out in far more detail in the rest of his Apology.As well, the Pensées clearly spell out the reasonableness of Christianity in juxtaposition to different religions.In our apologetics we call on Pascal's wager rather than the postmodern (e.g., Kearney's) rendition of the wager, which is far too ambivalent and, still, does not give the seeker any certainty and hope (although this is exactly what postmodernists like Kearney shy away from).There is no doubt in the mind of Pascal where our certainty lies and he painstakingly clarifies the object of the Wager in the remainder of his Pensées. | 7,865.2 | 2023-12-22T00:00:00.000 | [
"Philosophy"
] |
Levels of autonomy in synthetic biology engineering
Engineering biological organisms is a complex process and challenging that could benefit from a combination of standardization and automation. This Commentary discusses the advantages and challenges of achieving high levels of autonomy in synthetic biology.
Engineering biological organisms is a complex, challenging, and often slow process. Other engineering domains have addressed such challenges with a combination of standardization and automation, enabling a divide-and-conquer approach to complexity and greatly increasing productivity. For example, standardization and automation allow rapid and predictable translation of prototypes into fielded applications (e.g., "design for manufacturability"), simplify sharing and reuse of work between groups, and enable reliable outsourcing and integration of specialized subsystems. Although this approach has also been part of the vision of synthetic biology, almost since its very inception (Knight & Sussman, 1998), this vision still remains largely unrealized (Carbonell et al, 2019). Despite significant progress over the last two decades, which have for example allowed obtaining and editing DNA sequences in easier and cheaper ways, the full process of organism engineering is still typically rather slow, manual, and artisanal. Mol Syst Biol. (2020) 16: e10019 P erhaps it is time to take a more systematic approach to automation in organism engineering, to better understand the barriers to productivity gains. In electrical, mechanical, and chemical engineering, where automation and high productivity have become the norm, the success has come from breaking down complex processes into simple, well-understood steps in a precisely managed environment. However, when engineering living organisms, we are dealing with complex and imperfectly understood systems that cannot be so easily controlled. It may therefore be more helpful to think beyond automation to autonomy. While specific definitions of autonomy vary (e.g., Beer et al, 2014;Kaber, 2018), the general theme is that automation is any machine taking over actions from a human, while autonomy is automation operating with resilience and independence in a complex open environment.
Levels of autonomy
We would like to begin by proposing a working definition of autonomy for synthetic biology engineering, to use both for evaluating the degree of autonomy offered by current systems and for considering options for future development. While there is a wide variety of definitions and frameworks regarding autonomy, we propose that a particularly well suited framework to adapt is the Levels of Driving Automation (LoDA) framework, which was developed by SAE International and is now widely used throughout the automotive engineering community. This framework consists of six levels (0 through 5) of incrementally increasing autonomy. The lower three levels in the LoDA framework, that is, no automation, driver assistance, and partial automation, can be met by isolated subsystems supervised by the human driver. The higher levels, however-conditional automation, high automation, and full automationrequire that the system be fully integrated and capable of closed loop operation.
By analogy, we may consider a synthetic biology investigator as the "driver" of a laboratory, and the collection of assistive equipment therein the vehicle that the investigator navigates toward an intended organism engineering goal. A laboratory, of course, does not exist in a vacuum, but makes use of externally supplied reagents, instruments, protocols, etc. Such dependencies do not disrupt the notion of autonomy, but merely imply additional standards and compatibility requirements, just as with a vehicle's analogous use of gasoline, automotive parts, satellite navigation, and so on. Figure 1 illustrates our proposed levels of autonomy for synthetic biology, adapting the concepts from the SAE LoDA to this domain. The primary axis is the six vertical levels of increasing degree of autonomy: • Level 0, No Autonomy: No autonomy is the current condition of most work in most laboratories, with essentially all work carried out by humans.
• Level 1, Investigator Assistance: The first level of autonomy introduces narrowly scoped systems assisting with specific laborintensive tasks, such as "high-throughput" assay instruments, pipetting robots, or specialized software packages. However, humans are still intimately entangled with their operation, which typically requires careful set-up for each task to be executed.
• Level 2, Partial Autonomy: Partially autonomous systems provide proactive assistance to the investigator. For example, a system may validate its operation against a checklist of potential problems or fill in details of a more abstract experiment plan. Since the system needs to reason about its operations, this level also requires increased use of standards, calibration, process controls, and extraction of "intuitive" knowledge about the task into a machine-interpretable form.
• Level 3, Conditional Autonomy: The third level of autonomy marks a major transi-tion, in which the machine is able to close the design-build-test-loop, running multiple cycles without human intervention, beginning to interpret routine analyses, and involving humans only in case of anomalies and at the completion of a batch. This means that all individual workflow components must be at least Level 2, and they also must be integrated and able to adapt to results from other parts of the workflow. For example, if an automatically designed construct fails in the build or test stages, the next iteration of design should adapt at least enough to not propose the same construct again.
• Level 4, Highly Autonomous Investigation: At the fourth level, the system is essentially a laboratory assistant, taking over all protocol execution and routine aspects of data analysis, while the human is still required for interpreting data with respect to goals and adjusting plans accordingly.
• Level 5, Machine Investigator: At this highest level, the human moves from investigator to manager, essentially removing themselves from laboratory operations except for setting goals and receiving results. Orthogonally, we may also consider the scope of a system's applicability. At the lower levels, scope may refer to how much of a workflow is covered by a system. Scope may also refer to the system's versatility in applicability: a narrowly scoped system might only apply a certain test protocol, while a more broadly scoped system might apply to a wide range of build or test protocols.
This does not necessarily mean that we expect to achieve all of these levels. For example, Level 5 autonomy might well likely require rather sophisticated Artificial Intelligence. Nevertheless, having this framework in hand will allow a more quantitative assessment of the current state-ofthe-art and will indicate key barriers to improved productivity.
State-of-the-Art
A number of projects have demonstrated that high levels of autonomy are indeed possible in synthetic biology. For example, Level 3 autonomy has been demonstrated with organic synthesis via an integrated fluidic system and machine learning classifier (Granda et al, 2018) and in other chemical investigations via a mobile robotic system and Bayesian sample design (Burger et al, 2020). The "Adam" and "Eve" robotic science systems (King et al, 2009) arguably attain Level 4 autonomy, via systems biology knowledge representations that allow both experiment configuration from mechanistic hypotheses and hypothesis adjustment from results.
However, just as with early demonstrations of autonomous vehicle navigation, there is a sizable gap between demonstrating that high-level autonomy is possible and actually increasing the level of autonomy that is broadly deployed. These demonstrations, while impressive, are still fragile, narrow in scope, and require considerable prior investment in configuration and curation to set up an experimental program. Returning once again to the vehicle analogy, the prior systems are all still driving on a closed test course and not the open and unpredictable urban environment of most synthetic biology research.
At present, any automation is generally provided at the level of components and partial systems. The commonly discussed notion of a design-build-test-learn engineering cycle can be useful in understanding the challenges in moving from the current stateof-the-art to a more generally available highlevel autonomy. Figure 2 illustrates this cycle, coloring components by the highest level of broadly available automation. It also includes two other commonly elided aspects: configuration and curation. Configuration connects the output from one engineering phase to the inputs of another engineering phase, such as setting up a test phase experiment using genetic constructs from a build phase plus information about their design intention and hypotheses from the preceding design and learn phases. Curation provides the machine-interpretable information required for both execution and configuration, such as protocols to be used in a test and the inventory of available equipment, constructs, strains, and reagents.
Increasing scope of applicability
Increasing degree of autonomy LEVEL 5 Machine investigator Humans only set goals and receive results.
LEVEL 4
Highly-autonomous investigation All protocol execution and data analysis by machines. Humans handle interpretation of data with respect to goals. LEVEL 3 Conditional autonomy Machine can do 'closed loop' experiment batches, making interpretations of routine analyses, flagging anomalies for humans.
Partial autonomy
Higher-level control: automated checklists, experiment details. Requires machine interpretable calibrated data/designs, process controls.
Investigator assistance
Machines handle specialized protocol tasks, routinize curation.
LEVEL 0
No autonomy All experiment design, execution, and interpretation handled by humans. Data cannot be readily compared.
© EMBO Figure 1. Levels of autonomy in synthetic biology workflows, incrementally rising from no autonomy to fully autonomous machine investigation.
Significant automation capabilities have been developed for each of the four primary phases: design, build, test, and learn. At least in some domains, there are systems providing Level 2 partial autonomy, such that a well-configured system can perform significant reasoning about tasks it executes, validate its operation, and provide meaningful debugging assistance to a human operator.
For example, Cello (Nielsen et al, 2016) (Fig 3A) automates the design of a class of genetic information processing circuits, given a specification of a desired logic function and library of device models, using those models to guide design and predict expected behavior. Notably, while Cello is not the first tool that in principle allows such a design process-others include BioCompiler, GEC, and GenoCAD-the success of Cello circuit design is largely due to its creators' curation of library of high-quality device characterization data, which is still a largely manual process with at best Level 1 automation. Similarly, Autoprotocol (Miles & Lee, 2018) ( Fig 3B) provides automation for build and test protocol execution. However, authoring new protocol scripts is a complex process often requiring significant expertise and multiple iterations. The same authoring challenge exists with all other current protocol automation platforms, such as Aquarium, Antha, and the OpenTrons API. In the learn phase, we find analysis packages like TASBE Flow Analytics (Beal et al, 2019) (Fig 3C), which carries out automated processing and quality control assessment of flow cytometry data, using heuristics and a checklist of common issues to effectively implement partial autonomy in analysis. Similar automation exists for other assays, such as the Galaxy workflow environment for bioinformatics and omics, or microscopy packages such as SuperSegger and FogBank. Nevertheless, organizing data for analysis is largely manual, and interpretation of results with respect to background knowledge and experimental goals is still left entirely to human experts.
In sum, while automation is available for any given phase, the current state-ofthe-art consistently falls short when it comes to the configuration of phase-tophase interconnections and curation of information to satisfy preconditions for execution. Some representational standards have already been developed to address configuration and curation challenges, for example, SBML for composable models of learned information, SBOL for linking between design information and the rest of the engineering cycle, and workflow integrations have been demonstrated making use of these standards.
Nevertheless, three major gaps still remain unaddressed. First, there is not yet any standard representation for protocols and protocol interfaces, complementary to SBOL and SBML. Such a representation is needed for configuration of build and test automation within larger workflows. Second, curation of information into standard representations-a precondition for automation-is still a slow and highly manual process, requiring rare joint expertise in both knowledge representations and the particular application domain targets of curation. Level 2 (partial autonomy) tools are needed in order to act as assistive partners to domain experts, thereby lowering the barriers to curation and decreasing the need for knowledge representation expertise. Finally, there is not yet a critical mass of automation-enabled tools to form an effective marketplace for automated workflows. While such a marketplace effectively exists within the learn phase for bioinformatics tooling, a sufficiency of Level 1 and Level 2 automation tools in other domains need to be adapted to use standard representations in order to facilitate workflow integration by non-specialists across larger portions of the engineering cycle.
In sum, at present, autonomy in synthetic biology has been demonstrated as high as Level 4 (highly autonomous investigation). While higher levels of autonomy should benefit investigators by allowing faster and more effective engineering, nearly all State-of-the-art in synthetic biology autonomy, showing both the core design-build-test-learn cycle and also the configuration required to connect between stages and the curation required enable autonomy for each stage and its configuration. Color indicates maximum autonomy level available via publicly available reusable components, per the levels shown in Figure 1. At best, current systems are attaining partial autonomy (Level 2) in isolated portions of the cycle, with major gaps regarding stage-to-stage connections and curation.
ª 2020 The Authors Molecular Systems Biology 16: e10019 | 2020 investigations are still minimally automated, at best having only fragments of a workflow even as high as Level 2 (partial autonomy).
From autonomy to society
The gaps noted above, protocol representation, lightweight curation, and automation Markets, all point to the critical role that open standards must play in enabling autonomy in synthetic biology. For any business involving technological innovation, Joy's Law is the principle that: "No matter who you are, most of the smartest people work for someone else." This reflects the fact that the expertise needed for complex problems is diverse, highly specialized, and difficult to transfer. Research and development in synthetic biology is a particular extreme in this respect, given its wildly diverse and interdisciplinary nature. Thus, unlike autonomous driving, high-level autonomy in synthetic biology is a problem that needs to be solved not once but countless times, as every laboratory has its own particular set of needs, goals, protocols, and available equipment. Closed or bespoke systems, like those that have achieved high levels of autonomy in the past, simply cannot bring to bear the level of marketplace resources that can be marshaled with the aid of open standards.
Once a critical mass is reached, network effects will incentivize widespread adoption of standards, as investigators make use of that marketplace to obtain a competitive edge and as tool providers aim for sales within it. At that point, we should expect an exponential take-off in the impact of synthetic biology and a wholesale global transformation, much as with the manufacturing revolution set off by electrical standards at the end of the 19 th century and the informational revolution set off by computer network standards at the end of the 20 th century. Until that point, however, the field is in a tentative state, vulnerable to being stifled by monopolistic capture (intentional or emergent), by rent-taking outside forces, or by intellectual property gridlock. The synthetic biology community thus needs to make active choices to promote markets based on open biological standards. Individual practitioners should look for ways to gain advantage through becoming early adopters.
Companies should join open standards consortia or create them in those areas where none exist at present. Investors should look for ways to leverage standards in market plays. Finally, professional societies should advocate for funding supporting open standards, and government agencies and program managers should explicitly support standards development and adoption within their portfolios.
In the coming decades, those societies that invest in open standards for synthetic biology will experience vast gains in biological productivity and the corresponding economic and societal benefits. It behooves all of us to look for ways that we can contribute to bringing that future into being. | 3,651.2 | 2020-12-01T00:00:00.000 | [
"Computer Science"
] |
Harmonization of investment and operational costs of the grain industry in the light of the theory of sustainable development
Approaches to harmonization of methods for determining the optimal level of investment in the renewal of the fleet of combine harvesters and current expenses of agricultural enterprises under conditions of implementation of sustainable development goals in the activities of economic entities are investigated. It is established that the implementation of the concept of sustainable development for the domestic agricultural sector intensifies the processes in it, brings to the fore the problems of ensuring food security of the State, increasing the production of safe products while preserving landscapes and minimizing anthropogenic pressure. At the same time, the seizure of part of the territory, mining of the liberated and adjacent to the combat zone areas raises the issue of rational use of land resources. Under such conditions, only further introduction of innovations aimed at increasing yields and the use of intensive technologies is the most important vector for the development of domestic agriculture. This places strict requirements on the technical condition of its resource potential. Unfortunately, the unsatisfactory technical condition of the grain harvesting machinery fleet and its destruction as a result of hostilities exacerbate the problem of technical support for grain production, in particular for harvesting. Under such conditions, an important task is to develop methodological techniques for determining the optimal, harmonized values of investments in the reproduction of resource potential and operating costs. The tested methodological approach allows determining the optimal level of investment in the renovation of the combine harvester fleet, taking into account the peculiarities of wheat production organization, grain price conditions, material resources, harvesting equipment, and financial factors. The calculations showed that it is economically inexpedient to invest in the renewal of the combine harvester fleet using the most common combine harvester models if one unit has less than 600 hectares of wheat crops. A positive feature of the tested approach is the ability to minimize unproductive costs by taking into account technological and market factors in determining the optimal level of costs. Instead, the introduction of innovations leads to a change in the form of the production function, which should affect the dynamics of the marginal efficiency of investments, and therefore it is promising to expand approaches to modeling and take into account the role of innovations in finding the optimal level of current costs and investments.
Introduction
The functioning of the agrarian sector on the basis of sustainable development implies that it provides three interrelated functions: economic (providing income to agricultural producers), social (ensuring food security, providing productive employment, improving the quality of life) and environmental (maintaining biodiversity, preserving the integrity of the agricultural 1254 (2023) 012122 IOP Publishing doi:10.1088/1755-1315/1254/1/012122 2 landscape, soil fertility, clean air and water resources) [1].Understanding the role of the agricultural sector, in particular the grain industry, through the prism of the Sustainable Development Goals [2] allows us to consider the latter as a guarantor of the country's economic and food security.Ensuring the sustainability of agricultural production and the realization of its social function determines the increase in grain production, which places strict requirements on the fleet of combine harvesters.Problems in this area have been repeatedly discussed during discussions on the prospects for the development of the agricultural sector.In particular, it is worth mentioning the 2.5-fold reduction in the fleet of combine harvesters of Ukrainian agricultural enterprises during 2000-2020, which led to a permanent increase in the amount of operating hours per unit.As a result, the workload per combine harvester reached almost 200 hectares at the end of 2020, while in Germany and France in 2016-2020 it did not exceed 60-70 hectares of wheat crops.
The problem can be solved by simultaneously increasing investments in the renovation of the grain harvester fleet and obtaining harvesting equipment on a land-list basis.The drop of more than 30% in gross domestic product significantly limits the investment opportunities of agricultural companies and significantly increases the cost of borrowed resources.Under such conditions, it is particularly important to justify approaches to determining the optimal level of capital expenditures and their harmonization with the level of current costs caused by the technological process.
Sustainable development is a basic concept for business and policy development that reflects the understanding that progress is impossible without addressing pressing environmental issues such as ecosystem degradation and climate change.This implies that society can only develop on the basis of meeting the needs of the present while protecting the livelihoods of present and future generations [3].Since the laws of nature and social development are unchanging and immutable [4], the solution is for society to respect the limits of "safe workspace" [5,6] and limit environmental damage [7].Although "sustainability is a pluralistic concept" [8], in a broad sense it focuses on simultaneously ensuring economic growth, societal prosperity, and environmental protection [9].In this sense, the concept of sustainable development integrates economic, social, and environmental concerns and offers a new way of thinking that recognizes the world as interconnected between nature, society, and the economy [10].
It should be noted that population growth and changes in the structure and volume of consumption cause an increase in anthropogenic pressure [11].Sustainable land use can reduce the negative impact of these stressors.The Sustainable Development Goals (SDGs) were formulated in 2015 to meet the demands of the present, and they enshrine the commitment of developed countries to end poverty and hunger by 2030.However, climate change poses a challenge to achieving these goals, as slow processes of environmental change, increased climate variability, and extreme weather events negatively affect agricultural productivity [2].That is why innovation strategies, in particular agricultural innovation systems (AIS), are key examples of potential ways to improve the economic, environmental and social performance of the agricultural sector.Not only because agriculture contributes about 30% of the world's gross domestic product and has a high return on investment [12], but also because of the longterm positive impact of agricultural research and development (R&D) on productivity growth, and because new technological solutions contribute to the sustainable use of natural resources.Nevertheless, agriculture receives about 5% of R&D investments [13].
In turn, an important economic problem in economics and econometrics research is the substantiation of approaches to determining the economic and environmental optimum of operating (technological) and capital costs in the context of rapid implementation of R&D results.A significant contribution in this area has been made by scientists who have studied the use of production functions.In particular, we note the works devoted to the identification of production functions [14][15][16][17][18][19].Secondly, studies on industrial organization [20], trade [21][22][23][24] and international economics [25], which aim to measure productivity [26], returns to scale, and more recently, prices [16,27,28] using production functions.Third, a significant contribution has been made in works that focus on the development of methods for estimating multi-product production functions [29][30][31][32][33].And fourth, a whole area of macroeconomic literature, starting with [34], is devoted to the issues of uncertainty in product prices.The approaches of these authors are based on the assumption of the form of the production function, which allows its identification [35].
Investment is a continuous, systematic activity that focuses on the entire organization, including its forms and methods [36].Investing in innovation determines the ability of an agricultural enterprise to maintain competitive advantage and better respond to rapid market and economic changes [37][38][39][40].This process is based on the use of tools and measures that are important for the transition of society and the economy to sustainable development [41,42].Recognition of agricultural innovation as a driving force for addressing environmental issues and social inequalities has led to the emergence of sustainable entrepreneurship in the agricultural sector [43].Sustainable entrepreneurship encompasses business entities that can achieve profitability by exploiting environmental market gaps [44].They address social and environmental challenges through a business approach based on human values and go a step further to mitigate the impact of economic crises while promoting economic growth and social equity [45,46].
However, today's challenges, in particular the complexity and, in some aspects, the impossibility of predicting the development of economic processes, make it important to study approaches to determining the optimal level of investment and their harmonization with the level of current costs of agricultural enterprises.
Results and discussion
The functioning of the agrarian sector on the basis of sustainable development, the realization of its economic and social function, overcoming the challenges caused by the actions of the aggressor, leads to a reorientation of approaches to modeling technological processes to effective principles, maximizing production and profit.In view of this, nonlinear production functions that maximize the output or added value were chosen as the methodological basis for modeling the cost optimum.In turn, there is a need to harmonize approaches to determining the optimal amount of operating costs that maximizes output with the amount of investment to restore the resource potential damaged as a result of hostilities.The harmonization of economic processes is usually interpreted as their mutual coordination, systematization, unification, coordination, streamlining, and compliance.Harmonization of economic processes helps to balance the functioning of a business entity.Its systemic vision switches to the coordination of the formation and use of its resource potential, investment and operating costs.Thus, the methodological basis of our study is a system of models that allows harmonizing the ratio of operating expenses and capital investments.
The first step towards solving the problem was to determine, based on the statistical processing of the 2020 reports of Ukrainian agricultural enterprises, the equation of dependence of wheat yield on variable costs per hectare of harvested area: where f 1 (x) -is the expected yield of wheat, tons/ha; x -variable production costs per 1 ha of harvested wheat area, UAH thousand.The dependence has a high level of statistical reliability, as evidenced by the value of the coefficient of determination (R 2 ), which for function (1) is 0.9106, as well as the excess of the calculated value of the Fisher coefficient (F p = 28.0)over its tabular value (F tab = 0.116).At the same time, based on the values of the Student's t coefficient, the coefficients for the linear and quadratic terms of the formula (1) were also highly reliable.In particular, with the tabular value of this coefficient from -1.72 to 1.72, its actual values with the specified members were equal to 3.2 and 6.17, respectively.
The relevance of the application of function (1) for planning calculations ensures compliance with the optimal wheat harvesting terms, which in the case of single-phase (direct) harvesting should not exceed 6-10 days after the wheat reaches full maturity.At the same time, an analysis of the conditions and timing of early grain harvesting in 2016-2020 shows that due to the insufficient quantity and unsatisfactory technical condition of most of the grain-harvesting equipment, its duration was from 32 to 55 days [47].At the same time, the extension of the duration of the harvesting campaign beyond a ten-day period caused a daily decrease in productivity by 1% [48], as a result of which more than 10% of the potential harvest, i.e. 6-6.5 million tons of grain, was lost.
Considering this circumstance, the question arose -is it possible, by slightly reducing the expected level of yield and the planned level of costs, to minimize crop losses and maximize the financial result, and how to implement such an approach in the production function (1).To solve it, a component was introduced to equation ( 1), which allows to adjust the expected potential yield by the amount of potential losses, proportional to the duration of the harvesting campaign (d ).Taking this into account, the modified form of function ( 1) is as follows: where f 2 (x, d) -is the expected yield of wheat, tons/ha; x -variable production costs per 1 ha of harvested wheat area, UAH thousand; d -duration of the collection campaign, days.
In the future, functions (1) and ( 2) were combined into a system that allows you to determine the expected yield in the event that the harvesting campaign ends in the optimal agrotechnical period or in the event that it is extended beyond a ten-day period: where f 3 (x, d) -is the expected yield of wheat, tons/ha; x -variable production costs per 1 ha of harvested wheat area, UAH thousand; d -duration of the collection campaign, days.The inclusion of the variable d in function (2) necessitated the formalization of approaches to calculating the latter.It is logical to calculate it through the ratio of the expected gross harvest and the total productivity of the combine harvester fleet of the agricultural enterprise.In turn, the expected gross yield is the product of the sown area and the planned yield.The latter, for modeling purposes, can be determined using function (1).In the meantime, the total productivity of the farm's combine harvester fleet is determined by their number, hourly productivity, and shift duration.At the same time, to take into account the production conditions and the technical condition of combine harvesters, it is advisable to introduce a shift time efficiency factor: where pl -is the area from which wheat was harvested, ha; f 1 (x) -expected yield of wheat, tons/ha; W hour -hourly productivity of the grain harvester, centner per hour; T zm -shift duration, hours (according to [49][50][51] the recommended value is 12.0 hours); K vrch -coefficient of use of the working time of the shift (according to [49][50][51] the recommended value is 0.7);kis the number of grain harvesting units, units.
Taking into account the purely individual nature of the formation of the size of wheat sowing areas and the fleet of combine harvesters for each agricultural enterprise, their ratio in formula (4) was replaced by the planned area of wheat threshing by one combine harvester (N ): where N -is the planned area of wheat threshing by one combine harvester, ha.
After that, based on the analysis of statistical reports, it was determined that domestic grain producers mainly have units with an engine power of 330-335 hp.The analysis of the market of grain harvesting equipment shows that the closest to the indicated capacity are the sixth class combines widely represented on it -New Holland CR7.90, John Deere S670, John Deere S770, CASE IH 7140, CASE IH 7240, Gleaner S97, Claas Lexion 740, Massey Ferguson 9540, Massey Ferguson 9545 [52].With this in mind, based on the analysis of the offer of aggregates from this list on the Tractorhouse.comwebsite [53], the model with the largest number of lots -John Deere S670, which has a nominal engine power of 317 hp -was chosen as the base model with hourly productivity of 111.27 centner per hour.
Further, by substituting into function ( 5) the actual and recommended values of the hourly productivity of the John Deere S670 combine (111.27t/ha), the duration of the shift (12 hours), the coefficient of utilization of the working time of the shift (0.7), an analytical expression was formed functions of the dependence of the duration of the harvesting campaign on the planned threshing area with one unit and variable costs per crop unit: where x -variable production costs per 1 ha of harvested wheat area, UAH thousand; N -is the planned area of wheat threshing by one combine harvester, ha.Later, the variable d in the second equation of system (3) was replaced by the right-hand side of expression (6): where, f 3 (x) -is the expected yield of wheat, tons/ha; x -variable production costs per 1 ha of harvested wheat area, UAH thousand; d -duration of the collection campaign, days; N -is the planned area of wheat threshing by one combine harvester, h.A graphic illustration of the dependence of wheat productivity on variable costs at different harvesting areas indicates a reduction in non-productive losses in the case of a reduction in the load on the grain harvester and an increase in the technological efficiency of grain production (figure 1).
The next step was the modeling of the impact on the economic efficiency of grain production of the intensity and load on grain-harvesting equipment during wheat threshing.For this reason, the system of equations ( 7) was transformed.In particular, based on the assumption of one hundred percent marketability of grain production, in order to determine the expected volume of marketable products, the equations were multiplied by the average price of wheat grain sold by agricultural enterprises of Ukraine in 2020, which, according to the official website of the State Statistics Service, was 386.75 UAH/ha.Taking into account the measurement of variable costs per crop unit in the system of equations (7) in thousand UAH, the price of 1 t of wheat grain was converted into the unit of the same name.
After that, to determine the expected profit, the right-hand side of the equations was reduced by the value of variable costs x and the average value of fixed costs in the production of wheat Figure 1.Impact on wheat yield of production intensity and conditions of use of harvesting equipment by agricultural enterprises of Ukraine in 2020 (according to the official website of the State Statistics Service of Ukraine http://www.ukrstat.gov.ua/).Graph of dependence of yield (centner per ha) on variable production costs per 1 ha of crops (UAH thousand) at + + + -annual load on a grain harvester of 1200 hectares; × × × -annual load on a grain harvester of 900 hectares; * * * -annual load on a grain harvester of 600 hectares; ---annual load on a combine harvester of 300 hectares; X -variable production costs per 1 ha of area, UAH thousand; Y -annual load on the grain harvester, ha.grain, which, according to the analysis of the reporting on the costs of agricultural enterprises of Ukraine for 2020, amounted to 2,711 thousand UAH/ha.
where f 6 (x, d) -is the expected yield of wheat, tons/ha; x -variable production costs per 1 ha of harvested wheat area, UAH thousand; d -duration of the collection campaign, days; N -is the planned area of wheat threshing by one combine harvester, h.Graphical interpretation of the behavior of function ( 8) indicates a decrease in the maximum profit, as well as the optimum cost, which guarantees its achievement in case of an excessive increase in the load on the grain harvester (figure 2).So, under the conditions when each harvester of an agricultural enterprise accounts for 300 hectares of wheat crops, the maximum profit of 4.2 thousand UAH/ha is guaranteed by technology with variable costs of 9.0 thousand UAH/ha.Instead, the choice of this technology at a load of 1,200 ha leads to a loss of -2.7 thousand UAH/ha.Under such a load, the technology with variable production costs of 3.9 thousand UAH/ha is optimal, for which the financial result will be equal to +0.4 thousand UAH/ha.Therefore, under the conditions of threshing 1200 hectares of wheat with each combine harvester, it would be more expedient for the farm to use the technology with variable costs per crop unit almost six times lower compared to the technology that allows to achieve maximum productivity.It is clear that the rejection of industrial technologies reduces the efficiency of using the resource potential of agricultural formations, and therefore it is logical to increase investments in the technical base of harvesting operations.But taking into account the effect of agrobiological factors, the payback of such investments has a declining character.Therefore, when determining the optimal amount of capital and current costs, model (8) was transformed by including the increase in depreciation deductions and other fixed costs due to capital investment.
So, to calculate the increase in depreciation deductions, the average costs for the purchase of a combine harvester in the reporting year -UAH 4,845.4thousand were evenly distributed over the 12 years recommended by the John Deere company as a guideline for the productive use of this brand of combine harvester.The obtained value -UAH 403.8 thousand should be distributed to the entire fleet of combines and the planned load during wheat harvesting.For example, in the case of doubling the fleet of combines, the average increase for each combine will be 50% of UAH 403.8 thousand, similarly, in the case of a fourfold increase in the fleet, the share of purchased will reach three quarters, and therefore each combine will account for 75% of 403.8 UAH.
Taking this into account, the formula for calculating the increase in depreciation deductions looks like this: where n -is the share of newly purchased grain harvesters in their total number; N -annual load on the grain harvester, ha.
In addition, a potential increase in fixed costs was formalized under the conditions of payment of interest for the use of a loan taken out to cover the costs of purchasing a combine harvester.Thus, according to the statistical data of the official website of the National Bank of Ukraine, in 2020, agricultural commodity producers attracted long-term loans for the purchase of equipment at an average rate of 16%.Thus, under the conditions of linear accrual of interest payments, the annual cost of paying interest (I) will be equal to: where n -is the share of newly purchased grain harvesters in their total number; N -annual load on the grain harvester, ha.So, taking into account the potential increase in fixed costs, the system of equations for determining the expected profit looks like this: where f 7 (x, d) -is the expected yield of wheat, tons/ha; x -variable production costs per 1 ha of harvested wheat area, UAH thousand; d -duration of the collection campaign, days; N -is the planned area of wheat threshing by one combine harvester, h.Graphical interpretation of the behavior of function (11) shows the non-linearity of changes in the payback of investments (figure 3).
In particular, the reduction of the load from 1,200 ha to 900 ha, due to the expansion of the collection equipment park, leads to an increase in fixed costs by 0.3 thousand UAH/ha.As Figure 3. Impact on the economic efficiency of wheat production of production intensity and conditions of use of existing and newly acquired harvesting equipment by agricultural enterprises of Ukraine in 2020 (according to the official website of the State Statistics Service of Ukraine http://www.ukrstat.gov.ua/).Graph of dependence of yield (centner per ha) on variable production costs per 1 ha of crops (UAH thousand) at + + + -annual load on a grain harvester of 1200 hectares; × × × -annual load on a grain harvester of 900 hectares; * * * -annual load on a grain harvester of 600 hectares; ---annual load on a combine harvester of 300 hectares; X -variable production costs per 1 ha of area, UAH thousand; Y -annual load on the grain harvester, ha. a result, it becomes possible to switch to technology with variable costs of UAH 5,000 with a simultaneous increase in the production intensity indicator by 1,100 UAH/ha (t1).At the same time, the consequence of reducing the duration of the harvesting company and reducing nonproductive costs is an increase in productivity to UAH 23.1 thousand, which, with one hundred percent marketability of production, is equivalent to an increase in revenue by 1.9 thousand UAH /ha (23.1 − 18.3 × 0.3868).As a result, the profit of the agricultural enterprise increases by 0.4 thousand UAH/ha.Similarly, under the conditions of reducing the load from 1,200 to 600 ha, the expected profit increase will reach 1,000 UAH/ha.At the same time, under the conditions of reducing the load from 1,200 to 300 hectares, the financial result will increase by only UAH 0.9 thousand, which indicates a decrease in the marginal efficiency of costs.
Thus, in the case of an increase in the park, which allows to reduce the load from 900 to 600 ha, the increase in fixed costs is 0.7 thousand UAH/ha, the optimal level of variable costs is 1.7 thousand UAH/ha, marketable products -2.9 thousand UAH/ha (30.6 − 23.1 × 0.3868).As a result, the marginal return on costs will be equal to +20.8% ((2.9−(0.7+1.7))/(0.7+1.7)×100).
On the other hand, in the event of a decrease in the load from 600 to 300 ha, fixed and variable costs, as well as marketable products, increase by 2.3, 1.9 and 2.9 thousand UAH/ha, respectively, and the marginal loss of costs is -30.9% .Therefore, under unchanged conditions (production technology, product price situation, production resources, agricultural machinery, interest rates, etc.), the mark of 600 hectares of wheat crops per John Deere S670 combine harvester is the economic limit of the feasibility of investments in the renovation of the combine harvester park of domestic agricultural enterprises by purchasing similar or similar units.
Conclusions and prospects for further research
The implementation of sustainable development goals for Ukraine determines the intensification of processes in the domestic agricultural sector, brings to the fore the problems of ensuring food security of the state, increasing the production of safe products while preserving landscapes and minimizing anthropogenic pressure.In turn, the seizure of part of the territory, contamination of the liberated and adjacent territories with explosive objects further actualize the problem of rational use of land resources.Under such conditions, only further introduction of innovations aimed at increasing yields and the use of intensive technologies is perhaps the only possible way to develop domestic agriculture.The latter places strict requirements on the technical condition of its resource potential.Unfortunately, the unsatisfactory technical condition of the grain harvesting machinery fleet and its destruction as a result of hostilities exacerbate the problem of technical support for grain production, in particular for harvesting.Under such conditions, an important task is to develop methodological techniques for determining the optimal, harmonized values of investments in the reproduction of resource potential and operating costs.
The tested methodological approach allows determining the optimal level of investment in the renovation of the combine harvester fleet, taking into account the peculiarities of wheat production organization, grain price conditions, material resources, harvesting equipment, and financial factors.The calculations showed that it is economically inexpedient to invest in the renewal of the combine harvester fleet with John Deere S670 or similar combine harvesters if one unit has less than 600 hectares of wheat crops.A positive feature of the tested approach is the ability to minimize unproductive costs by taking into account technological and market factors in determining the optimal level of costs.Instead, the introduction of innovations leads to a change in the form of the production function, which should affect the dynamics of the marginal efficiency of investments, and therefore it is promising to expand approaches to modeling and take into account the role of innovations in finding the optimal level of current costs and investments.
Figure 2 .
Figure 2. Impact on the economic efficiency of wheat production of production intensity and conditions of use of available harvesting equipment by agricultural enterprises of Ukraine in 2020 (according to the official website of the State Statistics Service of Ukraine http://www.ukrstat.gov.ua/).Dependence of profit (thousand hryvnias per ha) on production costs per 1 ha of crops (UHA thousand) at + + + -annual load on a grain harvester of 1200 hectares; × × × -annual load on a grain harvester of 900 hectares; * * * -annual load on a grain harvester of 600 hectares; ---annual load on a combine harvester of 300 hectares; X -variable production costs per 1 ha of area, UAH thousand; Y -annual load on the grain harvester, ha.
Table 1 .
The influence of the load on the John Deere S670 combine harvester on the optimal intensity and efficiency of wheat grain production by agricultural enterprises in 2020 (according to the official website of the State Statistics Service of Ukraine http://www.ukrstat.gov.ua/). | 6,354.6 | 2023-10-01T00:00:00.000 | [
"Agricultural and Food Sciences",
"Economics",
"Environmental Science"
] |
Summations of large logarithms by parton showers
We propose a method to examine how a parton shower sums large logarithms. In this method, one works with an appropriate integral transform of the distribution for the observable of interest. Then, one reformulates the parton shower so as to obtain the transformed distribution as an exponential for which one can compute the terms in the perturbative expansion of the exponent. We apply this general program to the thrust distribution in electron-positron annihilation, using several shower algorithms. Of the approaches that we use, the most generally applicable is to compute some of the perturbative coefficients in the exponent by numerical integration and to test whether they are consistent with next-to-leading-log summation of the thrust logarithms.
The logarithms L j (r) arise in QCD from the soft and collinear singularities of the theory. These same soft and collinear singularities are contained in the splitting functions of a parton shower algorithm. Thus running a parton shower event generator to calculate σ J (r) will produce an approximation to the series in Eq. (1). That is, the parton shower approximately sums the large logarithms. The object of this paper is to investigate the form of the result of this summation. 1 To exhibit the summation of logarithms, we rearrange the parton shower algorithm so that it is specialized to calculate just σ J (r) and so that it expresses σ J (r) directly in terms of an exponential The integral of S Y (µ 2 ; r) in the exponent has an expansion 2n j=0 e(n, j) L j (r) . (3) The operator S Y (µ 2 ; r) is determined by the parton splitting operator S(µ 2 ) in the original shower. This gives one direct access to the coefficients e(n, j). With this representation, one has the potential to prove that e(n, j) = 0 for j > n + 1. The terms with j = n + 1 are called leading-log (LL) terms and the terms with j = n are called next-to-leading-log (NLL) terms. One also has the potential to prove that e(n, j) for j = n + 1 and for j = n are what is expected in full QCD if a full QCD result is known. Our plan in this paper is to develop the general theory behind the representation (2) along the lines of Ref. [1]. In this exposition, we also present the main steps of the construction of Ref. [1] in a form that, in our opinion, makes these steps more transparent. In a companion paper [2], we apply the representation (2) to an important example, the thrust distribution in electron-positron annihilation. We consider just the thrust distribution and not other distributions involving large logarithms. However, we look in some detail at how the exact form of the shower algorithm affects the results.
II. PARTON SHOWER FROM PERTURBATION THEORY
The starting point is the perturbative cross section for an infrared safe observable in hadron-hadron collisions. We describe this briefly here. A more detailed explanation can be found in Ref. [1].
The parton shower is described using operators on a vector space, the "statistical space," that describes the momenta, flavors, colors, and spins for all of the partons created in a shower as the shower develops. The colors and spins are quantum variables and are described using a density matrix. We use this description in the parton shower event generator Deductor [3][4][5][6][7][8][9][10]. The general theory includes parton spins, so we include spins here even though Deductor simply averages over spins. With m final state partons plus two initial state partons with labels "a" and "b," the partons carry labels a, b, 1, 2, . . . , m. The partons have momenta {p} m = {p a , p b , p 1 , . . . , p m } and flavors {f } m . We take the partons to be massless: p 2 i = 0. For color, there are ket color basis states |{c} m and bra color basis states {c ′ } m |. We use the trace basis, as described in Ref. [3]. For spin, there are ket basis states |{s} m and bra basis states {s ′ } m |. Then the m-parton basis states for the statistical space are denoted by |{p, f, c, c ′ , s, s ′ } m ). A vector |ρ) in the statistical space is a linear combination of the basis states.
A. Perturbative cross section
If the QCD matrix element is calculated up to a given order, α K s , the cross section is Here the renormalized perturbative QCD density operator is represented by a vector in the statistical space |ρ(µ 2 r )). It is based on the exact matrix elements and contains all the possible partonic final states at order K. The density operator is already renormalized, typically in the MS scheme, thus it is independent of the renormalization scale, µ 2 r , up to the desired order The next factor in Eq. (4) is the operator of the bare parton distribution functions (PDFs), Here the circles, a • b, represent convolutions in the momentum fraction variables. The renormalized PDF operator for the hadron-hadron initial state is F r (µ 2 r ). The corresponding MS subtraction of initial state singularities is done by the Z F (µ 2 r ) operator, which contains factors 1/ǫ n in dimensional regularization. As described in Ref. [1], one should typically use something other than the MS scheme to define the parton distribution functions used internally in the shower. The factor K(µ 2 r ) transforms to the shower scheme for the parton distribution functions F r (µ 2 r ). The bare PDF is scale independent, This equation leads to the proper evolution equation of the renormalized PDFs. The next factor in Eq. (4) is the operator O J (r) representing an infrared (IR) safe measurement, characterized by a set of parameters r.
After applying these operators, we have a sum and integral over basis states |{p, f, c, c ′ , s, s ′ } m ). Finally, we multiply by the statistical bra vector (1| and obtain a cross section after performing the integrations using (8) (The spin states |{s} m are orthogonal and normalized, but the color states |{c} m in the trace basis that we use are not orthogonal and some of them are not normalized exactly to 1 [3].) If the calculation includes perturbative contributions up to α K s , then there is an error term O(α K+1 s ) in Eq. (4). The formula is based on standard QCD factorization for infrared safe observables. This has power suppressed corrections of order Λ 2 QCD /Q 2 J (r) where Q 2 J (r) is the lowest scale that the measurement operator O J (r) can resolve. In the rest of this paper, we mostly omit explicit mention of these error terms.
The expression in Eq. (4) simplifies substantially in electron-positron annihilation. In this case, we can replace the operator F 0 by 1.
We point out that Eq. (4) is valid only in d = 4 − 2ǫ dimensions. It is not directly useful for practical calculations.
B. IR singular operator
To define a good subtraction scheme for a fixed order calculation one can use the IR singular operator D(µ 2 r , µ 2 s ) [1]. This operator has a perturbative expansion The operators D (n) (µ 2 r , µ 2 s ) are key to defining a parton shower algorithm in a general framework. For a first order shower, one uses only D (1) (µ 2 r , µ 2 s ), but in a general framework we consider D (n) (µ 2 r , µ 2 s ) for any n. This operator describes the IR singularity structure of partonic states |ρ(µ 2 r )). When D (n) (µ 2 r , µ 2 s ) acts on a state {p, f, c, c ′ , s, s ′ } m it produces new states {p,f ,ĉ,ĉ ′ ,ŝ,ŝ ′ }m with m ≤m ≤ m + n such that the IR singularities of match the singularities of nth order QCD Feynman diagrams that connect these two states. Here the singularities include the factors 1/ǫ from virtual loop diagrams and they include the singular behavior of the diagrams when any two or more momentap become collinear or some of thep become soft. The operator D (n) (µ 2 r , µ 2 s ) depends on two scales, the standard renormalization scale µ 2 r and the shower scale µ 2 s . The shower scale acts as an ultraviolet (UV) cutoff that separates the IR and UV regions. All IR singularities are included, but only regions near the singularities with scales k 2 satisfying k 2 < µ 2 s are included. There is, of course, some freedom in choosing how the division between IR and UV is defined. Different prescriptions lead to differences in the shower ordering prescription in the parton shower algorithm produced by D (n) (µ 2 r , µ 2 s ). The singular operator is based on the MS renormalized matrix elements and is independent of the renormalization scale. Thus we have This allows us to choose the renormalization scale conveniently.
In order to avoid large logarithms of µ 2 r /µ 2 s , it is useful to relate the renormalization scale to the shower scale. We define Then we can avoid large log(κ r ) factors by choosing κ r of order 1. The singular operator is perturbative and we can always define its perturbative inverse operator, by working order by order in the perturbative expansion.
C. Fixed order cross section
We can make Eq. (4) more useful by inserting 1 in the form DD −1 , We notice that the expression D −1 (µ 2 r , µ 2 s )|ρ(µ 2 r )) is well defined in d = 4 dimensions since the inverse of the singular operator removes all the IR poles of |ρ(µ 2 r )). Accordingly, we define the subtracted hard matrix element by This gives us We will use Eq. (15) to explore parton showers. First, however, suppose that we are interested only in the fixed order cross section. Then we can choose the scale µ 2 s small enough that the measurement operator O J (r) does not resolve parton momentum scales of order µ 2 s . Then One can calculate (1|F 0 D(µ 2 r , µ 2 s ) in d = 4 − 2ǫ dimensions. The operator D(µ 2 r , µ 2 s ) creates singularities, but the initial state singularities are removed by the operator Z F (µ 2 r ) in F 0 and the final state singularities cancel after we multiply by (1| and integrate over the parton variables. Thus we obtain a finite result in the ǫ → 0 limit.
D. Operators V and X1
The operators D(µ 2 r , µ 2 s ) and F 0 are defined only in d = 4 − 2ǫ dimensions and are singular as ǫ → 0 and as parton momenta become soft or collinear. However, we have noted that (1|F 0 D(µ 2 r , µ 2 s ) is finite in d = 4 dimensions. It will prove useful to introduce an operator, V(µ 2 r , µ 2 s ), that is finite in four dimensions, does not change the number of partons, leaves the parton momenta and flavors {p, f } m unchanged, and satisfies The operator V(µ 2 r , µ 2 s ) leaves {p, f } m unchanged, but it can act non-trivially on the color and spin space. Eq. (17) does not fully define the color and spin content of V(µ 2 r , µ 2 s ). We discuss the definition further in Sec. IV, but for now, we need only Eq. (17).
Using V(µ 2 r , µ 2 s ) we define a singular operator X 1 (µ 2 r , µ 2 s ) as The "1" subscript distinguishes the operator X 1 from the operator X used in Ref. [1] and suggests the normalization condition (19).
With V and X 1 , the cross section in Eq. (15) can be written as This form will be useful to help us define a parton shower. Before we continue with the discussion of the parton shower cross section we introduce a more compact notation for operators with renormalization scale dependence. According to Eq. (11) the renormalization scale is always related to the shower scale, thus we can define The PDF operator depends only on the renormalization scale and in this case the convention is a little different, The functions specified above then depend on κ r , but we do not display this dependence. With this more compact notation, Eq. (20) is written as
E. Operator U and parton shower
The formula for the cross section σ J given in Eq. (23) is of limited usefulness if the scale Q 2 J (r), representing the lowest scale that the measurement operator O J (r) can resolve, is much smaller than the scale µ 2 h of the hardest momentum transfer in ρ h (µ 2 ) . When that happens, σ J will contain logarithms log(µ 2 h /Q 2 J (r)) that need to be summed by looking for the most important terms at all orders of perturbation theory. To that end, one can use a parton shower algorithm.
To provide a parton shower, first set the scale µ 2 in Eq. (23) to µ 2 h . Then define a scale µ 2 f that is certainly smaller than Q 2 J (r). Typically, one chooses µ 2 f on the order of 1 GeV 2 . Finally, Since µ 2 f < Q 2 J (r), the operator O J (r) does not resolve partons at the scale µ 2 f . Thus O J (r) commutes with X 1 (µ 2 f ), giving us With the use of Eq. (19), this is The operator X −1 This is the shower operator. It generates a parton shower starting at the scale µ 2 h and ending at the scale µ 2 f . Because of Eq. (19), the shower operator is probability preserving Using the notation U(µ 2 f , µ 2 h ), the cross section is (29) We have perturbatively calculated matrix elements with their IR divergences subtracted in ρ h (µ 2 h ) . Then the operator F (µ 2 h ) supplies parton distribution functions. The factor V(µ 2 h ) serves to sum threshold logarithms [1,11]. An approximation to this factor is contained in Deductor although it is lacking in other current parton shower event generators. Next, the operator U(µ 2 f , µ 2 h ) generates the parton shower and the operator O J (r) measures the desired observable in the multiparton state created by the shower. Finally, we multiply by (1| and integrate to get the desired cross section. We discuss U(µ 2 f , µ 2 h ) and V(µ 2 h ) in more detail in Secs. V and VI.
III. OBSERVABLE DEPENDENT SHOWER EVOLUTION
The operator O J (r) in Eq. (29) could represent any infrared safe observable. In this paper, we have a particular sort of operator in mind. Consider, for example, the transverse momentum distribution of a Z boson produced in the Drell-Yan process. The operator that measures the transverse momentum k ⊥ of the Z boson is defined aŝ where k Z ({p} m ) is the transverse momentum of the observed Z boson. The standard method for summing logarithms of k 2 ⊥ /M 2 Z is to start with the Fourier transform of the k ⊥ distribution. To measure this with a parton shower event generator, we can use the measurement operator We let O Z (b) serve as an example of the observable O J (r) that we consider in this paper. There are many other similar examples. We will need one property of the observable O J (r) beyond infrared safety: we assume that the operator O J (r) has an inverse O −1 J (r). To analyze the cross section σ J (r), we start with the representation (23) with µ 2 = µ 2 h , Define an operator Y(µ 2 ; r) that is finite in d = 4 dimensions, leaves the number of partons and their momenta and flavors unchanged, and is related to X 1 by Then define a new version of X 1 that depends on the measurement parameters r by This gives us and Then our cross section is With the use of Eq. (35), and commuting O J (r) past V(µ 2 h ) and F (µ 2 h ), which do not change the partonic state, this becomes Here we measure O J (r) at the hard state ρ h (µ 2 h ) , obtaining typically a very simple result. Then we measure O J (r) inside the operator Y(µ 2 h ; r). This operator has the potential to sum large logarithms.
We can also relate Y(µ 2 ; r) to the shower operator U(µ 2 f , µ 2 ) with a small final scale µ 2 f . From Eq. (33), we have Since µ 2 f < Q 2 J (r), the operator O J (r) does not resolve partons at the scale µ 2 f . Thus O J (r) commutes with X 1 (µ 2 f ), giving us Recall from Eq. (19) that (1|X 1 (µ 2 ) = (1|. This gives us That is, we compare two calculations. In the first calculation, we generate a parton shower down to a very small scale starting with any statistical state at a scale µ 2 . Then we measure O J (r) inclusively using (1|O J (r).
In the second calculation, we first operate with O J (r) on the state at scale µ 2 then measure Y(µ 2 ; r) inclusively using (1|Y(µ 2 ; r). These two calculations give the same result.
IV. THE OPERATOR MAPPING P
In Sec. II D we defined an operator V(µ 2 ) which is to obey Eq. (17), 1 V(µ 2 ) = 1 F 0 D(µ 2 )F −1 (µ 2 ). In Sec. III, we defined an operator Y(µ 2 ; r) in the same way. In each case, we start with a singular operator A and we want to define a second, nonsingular, operator B with the property When the operator B acts on an m-parton basis state {p, f, c, c ′ , s, s ′ } m , it is to leave the number of partons, their momenta, and their flavors unchanged. It may, however, act non-trivially on the colors and spins. These requirements do not fully specify B. We can be somewhat more definite by requiring that there be a linear mapping A → B, which we write in the form This mapping must satisfy and [A] P must leave m and {p, f } m unchanged, The requirement (43) is then a restriction on the spin and color matrix A, We can place another requirement on [· · ·] P : if A has the property that it leaves m and {p, f } m unchanged, then One consequence of this is that [[A] P ] P = [A] P . These requirements do not fully specify the mapping [· · ·] P . For this paper we do not need to be more specific. However in Ref. [2] we provide an example (without spin) that is useful for the analysis of a first order shower.
We will find that the combination A − [A] P appears frequently in formulas. It useful to define an operation [· · ·] 1−P by (49)
V. GENERATOR OF SHOWER
We now turn to a more detailed study of the operator U(µ 2 f , µ 2 h ) that creates a parton shower between a hard scale µ 2 h and a small, cutoff scale µ 2 f . The generator of this shower evolution is the operator Here, we differentiate with respect to the shower scale. Because of Eq. (10) (with the use of Eqs. (17) and (18)), this is the same as Because of Eq. (35), Eq. (51) gives us a differential equation for U We use the notation to represent the solution of this equation. Here T indicates the instruction to order the operators S(µ 2 ) with the smallest µ 2 to the left.
VI. THE THRESHOLD FACTOR
In Sec. II D we have defined an operator V(µ 2 r , µ 2 s ). With our notation in Eq. (21) for the scale dependence of V, the crucial property given in Eq. (17) can be written In Eq. (29) or Eq. (38), the perturbative expansion of V(µ 2 h ) contains large logarithms [1,11,12]. These are the much studied threshold logarithms [13]. We sum the threshold logarithms by writing V(µ 2 h ) as an exponential. Define Then V(µ 2 h ) can be written as Define a generator operator S V (µ 2 ) by Then U V (µ 2 2 , µ 2 1 ) is the solution of the differential equation We write the solution of this equation as As long as we expand the running coupling α s in Eq. (60) to some finite order in α s (µ 2 h ), the integral in Eq. (60) is convergent in the limit µ 2 f → 0 [1]. Thus V(µ 2 ) at small scales is almost the unit operator, That is
VII. PERTURBATIVE EXPANSIONS
The operator S(µ 2 ) can be expanded in powers of α s (µ 2 r ) = α s (κ r µ 2 ): In the general theory from Ref. [1], S(µ 2 ) is constructed from the singular operator D(µ 2 r , µ 2 s ). If we use only the first order part D (1) (µ 2 r , µ 2 s ) of D because that is all we know, then all we get is S (1) (µ 2 ). However, in a practical parton shower program (such as the Λ-ordered Deductor), one often takes a guess at approximate higher order contributions S (n) . The approximate form is obtained by changing the argument of α s in the splitting functions to κ r k 2 T and, additionally, making a special choice for κ r . Expanding α s (κ r k 2 T ) in powers of α s (κ r µ 2 ) then produces contributions S (n) (µ 2 ) for n > 1.
In Deductor, the first order contribution has three parts [1,11]: The operator S (1,0) (µ 2 ) describes parton splitting, changing an m parton state to an m + 1 parton state. The operator [S (1,0) (µ 2 )] P leaves m and {p, f } m in an m parton state unchanged, although it can modify the color state. 2 In a leading color parton shower, the color is unchanged and the eigenvalue of this operator then gives the order α s contribution to the integrand in the exponent of the Sudakov factor that represents the probability not to split between two scales. The final operator, S (0,1) iπ (µ 2 ), leaves m and {p, f } m unchanged. It gives the imaginary part of virtual graphs [1,11] and obeys (1|S The operator S V (µ 2 ) has a perturbative expansion The first order operator S V (µ 2 ) has the form [1,11,12] S (1) Here [S (1,0) (µ 2 )] P is proportional to the integral of the first order splitting function over the splitting variables and appears also in Eq. (64). In the third term, [F (µ 2 ) • P (1) ] denotes the convolution of F (µ 2 ) with the first order PDF evolution kernel P (1) . In the second term, is the derivative with respect to the shower scale of the singular operator for a one loop virtual graph. It is sometimes assumed that the effect of virtual graphs and PDF evolution cancels the integral over the splitting variables of parton splitting [14]. However, this cancellation is not complete, so that the effect of S V (µ 2 ) is quite important [12,14].
VIII. GENERATOR OF Y
We now turn to a more detailed study of the operator Y(µ 2 ; r). This operator sums logarithms, so we want to write it as an exponential. Define This gives us a differential equation for Y(µ 2 ; r) Using Eq. (75) then gives us The operators Y and S Y are nonsingular operators that leave the number of partons and their momenta and flavors unchanged. Thus we can use the mapping [· · ·] P defined in Sec. IV to write this as The expansion of Y in powers of α s starts at Y = 1 + · · · , so a useful way to write this is Now we can use Eqs. (80) and (73) recursively to generate S Y and Y in powers of α s . We write with For S Y , Eq. (80) gives This gives us S (n) Y for j < n and Y (k) for k < n.
For Y we use Eq. (73), in which an integration over an intermediate scaleμ 2 appears. We can expand α s (κ rμ 2 ) in powers of α s (κ r µ 2 ) in the form with coefficients γ derived from the QCD β-function. Using this expansion in Eq. (73), we obtain This gives us Y (n) if we know Y (k) for k < n and S (j) Y for j ≤ n.
These recursion relations successively generate S Y , Y (2) , . . . . The first order terms are and We now outline how the operator Y(µ 2 ; r) can be used. This operator is the key to calculating an observable cross section σ J (r) according to a parton shower algorithm. The operator O J (r) that defines this cross section must be infrared safe. That is, there is a scale Q 2 J (r) such that σ J (r) does not resolve parton splittings at scales µ 2 smaller than Q 2 J (r). In order to define Y(µ 2 ; r), the inverse operator O −1 J (r) must exist. The anticipated use case is that there is a distribution of direct interest that involves large logarithms and the logarithms can be summed analytically by taking an integral transform of the distribution that depends on parameters r. Then σ J (r) represents the value of this integral transform. In a companion paper [2], we examine an important example, the thrust distribution in electron-positron annihilation. Then one uses the Laplace transform of the thrust distribution and r is the Laplace parameter ν.
In the applications that we have in mind, the perturbative expansion of σ J (r) contains powers of a large logarithm L(r) when the parameter or parameters r approach some limit. Typically, we have In favorable cases, there is an analytical formula that sums these logarithms in the form It is crucial here that the maximum power of L at order α n s is j = n + 1, not 2n. We can say that a σ J (r) with this property exponentiates. One never knows all of the coefficients d(n, j), but when the coefficients for j = n + 1 are known, we can say that the formula sums the logarithms at the leading-log (LL) level. When the coefficients for j = n are also known, we can say that the formula sums the logarithms at the next-to-leaadng-log (NLL) level.
In some important cases, the color space for the partons involved in the hard scattering process is trivial. For instance, for shape observables in electron-positron annihilation, there is only one color basis vector for the qq state in e + e − → qq. Then the coefficients d(n, j) are numbers. The initial partonic state in hadron-hadron scattering has a nontrivial color structure. Then the coefficients d(n, j) may be integrals of matrices in the parton color space, with some specification for the ordering of noncommuting matrices in the exponent.
What does a parton shower algorithm say about σ J (r)? Different parton showers can give different answers, so we should have a particular parton shower algorithm in mind.
We have seen that there are two ways to express σ J (r) as given by a parton shower. First, we can use Eq. (29), (90) Typically the splitting operator S in U(µ 2 f , µ 2 h ) is based on lowest order perturbation theory, as discussed at the beginning of Sec. VII. Additionally, V(µ 2 h ) is present in Deductor, but for many parton shower algorithms V = 1. Equation (90) says to run the parton shower to its cutoff scale and then measure the observable by applying (1|O J (r). The perturbative expansion of this result has the form (88), but not directly the form (89). One can run the corresponding parton shower event generator to obtain a numerical result with statistical errors and other numerical errors. Even with errors, it is possible [2,15] to use numerical results from Eq. (90) to check these results against a known QCD analytic result.
The second way to express σ J (r) as given by a parton shower is contained in Eq. (38), The operator S Y is obtained from the shower generator S using Eqs. (83) and (85). This second expression for σ J (r) gives exactly the same σ J (r) as given by Eq. (90). However, now the logarithms L appear in the exponent in S Y . Thus we have a representation that is very close to the representation in Eq. (89). The exponent in Y is 3 If we use the perturbative expansion of S Y , this is I(r) = as numerical integrals and check how many powers of L(r) they contain. Although one can never check every term in ∆I(r), this method has the advantage that if the check for NLL summation fails for any one contribution, then we know that NLL summation fails. In Ref. [2], we present the analysis of I(r) in a form that is somewhat less general than the analysis of this paper but is better adapted to practical applications. Then we analyze I(r) analytically and numerically for the trust distribution in electron-positron annihilation.
X. CONCLUSIONS
It is, we think, of some importance to understand how accurately a parton shower algorithm sums large logarithms in an observableσ J (v).
In analytical approaches to summing such logarithms, one typically defines an integral transform of the original distribution so that one considers a cross section σ J (r) that depends on parameters r. Then the perturbative expansion of σ J (r) contains large logarithms L(r).
Sometimes, one can compare the results of the shower for σ J (r) to the results in full QCD by writing the same differential equations as for full QCD but applying the differential operators to the shower approximation rather than full QCD [16,17]. This method has he disadvantage that one needs a separate and quite elaborate analysis for each observable to be studied.
An alternative is to calculate the observableσ J (v) numerically with the parton shower event generator of interest and to compare the result with a known QCD result [2,15]. This method can work, at least for electron positron annihilation, but presents significant numerical challenges.
We have presented a reformulation of the calculation of σ J (r) according to a parton shower so that the large logarithms appear directly as an exponential. The exponent can be expanded perturbatively. This gives us a path to an analytical understanding the summation of these logarithms in the parton shower. It also provides a simple way to test this summation numerically. In a companion paper, we find interesting results [2] for the thrust distribution in electron-positron annihilation.
ACKNOWLEDGMENTS
This work was supported in part by the United States Department of Energy under grant DE-SC0011640. | 7,620.2 | 2020-11-09T00:00:00.000 | [
"Physics"
] |
Genome-wide association study of offspring birth weight in 86 577 women identifies five novel loci and highlights maternal genetic effects that are independent of fetal genetics
Abstract Genome-wide association studies of birth weight have focused on fetal genetics, whereas relatively little is known about the role of maternal genetic variation. We aimed to identify maternal genetic variants associated with birth weight that could highlight potentially relevant maternal determinants of fetal growth. We meta-analysed data on up to 8.7 million SNPs in up to 86 577 women of European descent from the Early Growth Genetics (EGG) Consortium and the UK Biobank. We used structural equation modelling (SEM) and analyses of mother–child pairs to quantify the separate maternal and fetal genetic effects. Maternal SNPs at 10 loci (MTNR1B, HMGA2, SH2B3, KCNAB1, L3MBTL3, GCK, EBF1, TCF7L2, ACTL9, CYP3A7) were associated with offspring birth weight at P < 5 × 10−8. In SEM analyses, at least 7 of the 10 associations were consistent with effects of the maternal genotype acting via the intrauterine environment, rather than via effects of shared alleles with the fetus. Variants, or correlated proxies, at many of the loci had been previously associated with adult traits, including fasting glucose (MTNR1B, GCK and TCF7L2) and sex hormone levels (CYP3A7), and one (EBF1) with gestational duration. The identified associations indicate that genetic effects on maternal glucose, cytochrome P450 activity and gestational duration, and potentially on maternal blood pressure and immune function, are relevant for fetal growth. Further characterization of these associations in mechanistic and causal analyses will enhance understanding of the potentially modifiable maternal determinants of fetal growth, with the goal of reducing the morbidity and mortality associated with low and high birth weights.
Introduction
Individuals with birth weights approaching the lower or upper ends of the population distribution are more at risk of adverse neonatal and later-life health outcomes and mortality than those of average weight (1)(2)(3)(4)(5). The factors influencing birth weight involve both maternal and fetal genetic contributions in addition to the environment. Genome-wide association studies (GWASs) testing for common variant effects on own birth weight ('fetal' GWAS) have so far identified 60 robustly associated loci (6)(7)(8). The influence of common maternal genetic variation on offspring birth weight, beyond the effects of transmitted genetic variation, is poorly understood. Studies estimating the variance in birth weight explained by fetal or maternal genetic factors, using data on twins (9,10), families (11) or mother-child pairs with genome-wide common variant data (8,12), have consistently estimated a distinct maternal genetic contribution, which is smaller than the fetal genetic contribution, with estimates ranging from 3% to 22% of the variance explained (relative to 24% to 69% for fetal genetics).
Maternal genotypes may influence key maternal phenotypes, such as circulating levels of glucose and other metabolic factors, which could cross the placenta and affect the growth of the fetus. For example, women with hyperglycemia due to rare heterozygous mutations in the GCK gene have babies who are heavier at birth (provided the babies do not inherit the mutation) due to intrauterine exposure to high maternal glucose levels (13). Additionally, maternal genotypes may act upon other maternal attributes, such as vascular function or placental transfer of nutrients, which are also likely to influence fetal growth. Such maternal environmental effects could in turn influence fetal growth separately from the effects of any growth-related genetic variants that are inherited by the fetus directly from the mother (Fig. 1). Supporting evidence for such effects from analyses of common genetic variants includes positive associations between maternal weighted allele scores for body mass index (BMI) or fasting glucose and offspring birth weight, and an inverse association between a maternal weighted allele score for systolic blood pressure and offspring birth weight (14).
The goal of the current study was to apply a GWAS approach to identify maternal genetic variants associated with offspring birth weight. This could potentially highlight novel pathways by which the maternal genotype influences offspring birth weight through the intra-uterine environment. We performed a metaanalysis of GWASs of offspring birth weight using maternal genotypes in up to 86 577 women of European descent from 25 studies, including 37 945 participants from studies collaborating in the Early Growth Genetics (EGG) Consortium and 48 632 participants from the UK Biobank (Supplementary Material, Fig. S1). We identified 10 loci, and showed, using a novel structural equation model and analyses in mother-child pairs, that the majority of these were maternal effects that were independent of the fetal genotype.
Results
The basic characteristics of study participants in the EGG Consortium discovery, EGG follow-up and UK Biobank GWAS analyses are presented in Supplementary Material, Tables S1-S3, respectively.
Maternal SNPs at 10 loci were associated with offspring birth weight at P < 5 Â 10 À8 We identified 10 autosomal loci that were associated with offspring birth weight at P < 5 3 10 À8 (Fig. 2 Table S4). The linkage disequilibrium (LD) score regression intercept (15) from the overall metaanalysis was 1.009, so there was little change in the test statistics after adjusting for this inflation. Three of these loci (KCNAB1, EBF1 and CYP3A7) were identified in UK Biobank data only, and the index SNPs were unavailable in the EGG Consortium data. Consideration of results for proxy SNPs at these three loci from the EGG meta-analysis is in the next section. For the index SNPs at the other seven loci, we observed no strong evidence of heterogeneity in allelic effects between the EGG Consortium and UK Biobank components of the meta-analysis (Supplementary Material, Fig. S4 and Table S4). The majority of the index SNPs mapped to non-coding sequence and were not in strong LD with any coding variants (r 2 < 0.95), but the index SNP in SH2B3, rs3184504, is a non-synonymous coding variant (R262W). Approximate conditional analysis (see Materials and Methods) showed no evidence of secondary signals at any locus at P < 5 3 10 À8 . In combination, the 10 loci explained 1.4% [standard error (SE) ¼ 1.2%] of variance in birth weight, whereas the variance in birth weight captured by all autosomal genotyped variants on the UK Biobank array was considerably greater: 11.1% (SE ¼ 0.6%).
Birth weight-raising alleles at KCNAB1 and EBF1 were associated with longer gestational duration The associations at KCNAB1, EBF1 and CYP3A7 resulted from analysis of UK Biobank data only, and index SNPs were unavailable in the EGG Consortium meta-analysis (imputed to HapMap Phase 2). To investigate further the evidence for association at these loci, we identified proxy SNPs (r 2 ¼ 1) for KCNAB1 (rs9872556) and EBF1 (rs2964484) that were available in HapMap Phase 2 (no proxy SNP was available at r 2 > 0.5 at the CYP3A7 locus). Meta-analysis of the EGG Consortium and UK Biobank data showed weaker evidence of association overall, with some evidence of heterogeneity between the EGG meta-analysis and UK Biobank (P ¼ 0.008 and 0.007, respectively; Table 1, Supplementary Material, Table S4 and Fig. S4). In the UK Biobank, women reported the birth weight of their first child, but not the duration of gestation. In contrast, analyses of birth weight in all but one EGG study [Queensland Institute of Medical Research (QIMR), n ¼ 892] were adjusted for the duration of gestation. It is therefore possible that the associations observed with birth weight at KCNAB1 and EBF1 in the UK Biobank reflect primary associations with gestational duration. Look-ups of the index SNPs and HapMap 2 proxy SNPs in a published dataset of the top 10 000 associated SNPs from a GWAS of gestational duration and preterm birth in 43 568 women (16) showed evidence of association at both EBF1 (P < 10 À12 ) and KCNAB1 (P < 10 À3 ; Supplementary Material, Table S5). The birth weight-raising alleles were associated with longer gestational duration.
Five associated SNPs were independent of those identified in previous fetal GWAS of birth weight The index SNPs at four of the identified loci (SH2B3, KCNAB1, TCF7L2 and CYP3A7), mapped >2 Mb away from, and were statistically independent of any index SNPs previously associated with birth weight at P < 5x10 À8 in a fetal GWAS (r 2 < 0.05) (8).
A summary of candidate genes at these four loci is presented in Supplementary Material, Table S6 [corresponding information for the other loci was reported in (8)]. At MTNR1B and HMGA2, the same index SNP was associated with birth weight in the same direction in both the current study and the previous fetal GWAS. At the four remaining loci, the maternal GWAS index SNPs were within 0.5 to 15 kb of previously reported fetal GWAS index SNPs with very different strengths of pairwise LD between the maternal and fetal GWAS index SNPs. At the EBF1 and ACTL9 loci, the maternal and fetal GWAS index SNPs were in strong LD (r 2 ¼ 0.95 and 0.99, respectively), and the directions of association were consistent, suggesting that they were tagging the same causal variant. At the L3MBTL3 locus, the maternal and fetal directions of association were consistent, but the index SNPs were weakly correlated Table S7]. At the GCK locus, the minor allele frequencies of the maternal and fetal GWAS index SNPs were very different (0.23 and 0.009, respectively), and in low pairwise LD (r 2 ¼ 0.002). Analysis conditional on the fetal GWAS index SNP in UK Biobank did not alter the association at the maternal index SNP (Supplementary Material, Table S7), suggesting that at GCK, the maternal association with birth weight was distinct from the previously reported fetal association.
Structural equation modelling applied to UK Biobank data suggested most associations were driven by the maternal genotype The partial overlap between associations identified in the current study and those identified in the previous fetal GWAS of birth weight ( Fig. 2 and Supplementary Material, Fig. S2) illustrates the expected correlation between maternal and fetal genotypes (r % 0.5). The associations between maternal genotype and birth weight identified here may represent indirect effects of maternal genotype on birth weight acting via the maternal intrauterine environment, or primary effects of the fetal genotype on birth weight that are captured (due to correlation) when assaying the maternal genotype, or a mixture of maternal and fetal effects. Analysis of UK Biobank data using structural equation modelling (SEM; n ¼ 78 674 male and female unrelated participants, of whom 33 238 individuals only reported their own birth weight, 20 963 women only reported the birth weight of their first child and 24 473 women reported their own birth weight and that of their first child; see Materials and Methods and Fig. 3) provided estimates of maternal effects adjusted for fetal genotype, and vice versa, and suggested that the associations at the majority of the loci were driven by the maternal genotype ( Fig. 4 and Supplementary Material, Table S8). In particular, the adjusted maternal effects estimated at seven of the loci (MTNR1B, KCNAB1, GCK, EBF1, TCF7L2, ACTL9 and CYP3A7) were separated from the adjusted fetal effect estimates by at least 2 SEs. The only locus at which the point estimate for the adjusted fetal effect was larger than that of adjusted maternal effect was HMGA2, suggesting this association was driven by the fetal genotype. Additional analyses (i) adjusting for fetal genotype in up to 8705 mother-child pairs and (ii) comparing the unadjusted maternal effect estimates from the overall maternal GWAS (n ¼ up to 86 577) with those from a published fetal GWAS (n ¼ 143 677), provided supporting evidence that the majority of the effects were maternally driven (Supplementary Material, Fig. S5 and Tables S8 and S9).
Known associations at the identified loci highlighted potentially relevant maternal traits including fasting glucose, blood pressure, immune function and sex hormone levels Look-ups of index SNPs (n ¼ 7 loci), or SNPs in close LD (n ¼ 2 loci at r 2 > 0.9; n ¼ 1 at r 2 ¼ 0.4), in available GWAS datasets for cardiometabolic and growth-related traits revealed several associations at P < 5 Â 10 À8 (Supplementary Material, Table S10), and further information on previously reported associations was obtained from the NHGRI-EBI catalog of GWAS (see Materials and Methods). The maternal birth weight-associated variants at MTNR1B, GCK and TCF7L2 loci are known to be associated with fasting glucose and Type 2 diabetes susceptibility (17,18), with the glucoseraising allele associated with higher offspring birth weight.
The C-allele of the missense variant, rs3184504, in SH2B3, associated with higher birth weight in our study, has been associated with multiple cardiovascular traits [lower SBP and DBP The m and f path coefficients refer to maternal and fetal effects, respectively. The residual error terms for the birth weight of the individual and their offspring are represented by e and e O , respectively, and we estimate the variance of both of these terms in the SEM. The covariance between residual genetic and environmental sources of variation is given by q.
The maternal birth weight-raising allele at ACTL9 was in LD (r 2 ¼ 1) with alleles of nearby variants associated with lower risk of atopic dermatitis (36) and higher risk of tonsillectomy (34).
At the CYP3A7 locus, an SNP in LD (rs34670419, r 2 ¼ 0.74) with our identified variant, rs45446698, has been associated with levels of the hormones, progesterone and dehydroepiandrosterone sulphate (DHEAS) (37). The maternal birth weight-raising allele was associated with lower hormone levels.
The variants at HMGA2 and L3MBTL3 have been associated with adult height (40). At HMGA2, and possibly also at L3MBTL3, the association with birth weight is through the fetal allele, not the maternal allele, so these associations with adult height are relevant for offspring, not mother. Associations at HMGA2 were additionally observed with other growth and development phenotypes: infant length (41), infant head circumference (42) and primary or permanent tooth eruption (43,44).
To identify biological pathways underlying maternal regulation of birth weight, we performed gene-set enrichment analysis using Meta-Analysis Gene-set EnrichmeNT of variant Associations (MAGENTA) (45). Seven pathways reached false discovery rate (FDR) < 0.05, including three involved in the metabolism of xenobiotics (Supplementary Material, Table S11).
Discussion
In this study, we have identified variants in the maternal genome at 10 loci that are robustly associated with offspring birth weight. Five of the identified associations are independent of those reported in previous fetal GWAS of birth weight (8), bringing the total of known independent common variant associations with birth weight to 65. Because maternal and fetal genotype are correlated (r ¼ 0.5), loci identified in GWAS of birth weight to date could either represent effects of the maternal genotype, acting via the intrauterine environment, or direct effects of the fetal genotype, or a mixture of the two (Fig. 1). Our analyses, and those of 58 previously reported loci (46), suggest that although the majority of the 65 known associations indicate direct effects of the fetal genotype, at least 7 associations from the current study [those at MTNR1B, EBF1, ACTL9, KCNAB1, GCK, TCF7L2 and CYP3A7, of which the first 3 were initially identified in fetal GWAS (8)] indicate maternal intrauterine effects.
The index SNP, rs45446698, at the CYP3A7 locus, is an expression quantitative trait locus (eQTL) for CYP3A7 in adrenal gland tissue (47). The CYP3A7 gene is part of the cytochrome P450 family 3 subfamily A gene cluster, which encodes enzymes responsible for the metabolism of multiple and diverse endogenous and exogenous molecules (48), and SNP rs45446698 tags a haplotype of seven highly correlated variants in the CYP3A7 promoter, known as the CYP3A7*1C allele (49,50). The CYP3A7 gene is predominantly expressed in fetal development, but CYP3A7*1C results in expression in adult carriers (50,51). The CYP3A7*1C allele, and correlated SNPs, have been associated with circulating levels of DHEAS, progesterone and 2-hydroxylation pathway estrogen metabolites (37,52,53). There were no associations between offspring birth weight and maternal SNPs at each of nine loci (independent of CYP3A7) that are also known to influence levels of DHEAS or progesterone (37,54) (data not shown), suggesting that neither DHEAS nor progesterone levels per se are likely to explain the association with birth weight. Because CYP3A enzymes metabolize a diverse range of substrates, there are many possible mechanisms by which maternal CYP3A7*1C might be associated with birth weight. In our conditional analysis, we observed weak evidence of an independent association with the fetal allele at this locus in the opposite direction to that of the maternal allele. Further analyses in larger samples will be required to confirm this and to investigate possible mechanisms underlying this association. However, the association at this locus, together with the results of the gene-set enrichment analysis, which highlighted pathways involved in xenobiotic metabolism, suggests that it is a key avenue for future research into fetal outcomes.
The birth weight-raising maternal alleles at the identified loci (MTNR1B, GCK and TCF7L2) are strongly associated with higher fasting glucose and Type 2 diabetes in non-pregnant adults (17,18), and with glycemic traits and gestational diabetes mellitus in pregnant women (55)(56)(57). The association between raised maternal glucose and higher offspring birth weight is the result of higher fetal insulin secretion in response to increased placental transfer of glucose (58). Our results confirm previous maternal candidate gene associations with birth weight at TCF7L2 and GCK (55,59,60) and demonstrate the key role of maternal glucose levels in influencing offspring birth weight (14,61). Notably, the Type 2 diabetes risk allele at each of these three loci was not associated with birth weight independently of the maternal allele when present in the fetus. This is contrary to what has been seen at other Type 2 diabetes loci such as ADCY5 and CDKAL1, where risk alleles in the fetus were associated with lower birth weight (8). Table 1. The colour of each dot indicates the maternal genetic association P-value for birth weight, adjusted for the fetal genetic association: red, P < 0.0001; orange, 0.0001 P < 0.001; yellow, 0.001 P < 0.05. However, there is an additional low-frequency fetal variant at the GCK locus, which is independent of the glucose-raising maternal variant associated with higher birth weight in the current study (8). Taken together with the known effects on birth weight of both maternal and fetal rare heterozygous GCK mutations (13), a complex picture of allelic variation relevant to fetal growth is emerging at this locus.
The association with birth weight at HMGA2 was previously identified in a fetal GWAS of birth weight (same index SNP) (8), and our analyses showed that the maternal SNP in our study was probably capturing a direct effect of the SNP in the fetus on skeletal growth, given previous associations with infant length, head circumference and adult height (40)(41)(42). The L3MBTL3 locus identified in our study is also a known height locus, and the associated variant was correlated (r 2 ¼ 0.13) with an SNP associated with birth weight in the previous fetal GWAS (8). It was less clear from our analyses whether the association at L3MBTL3 originated from the maternal or fetal genotype. However, analyses of maternal height alleles transmitted to offspring versus those not transmitted to offspring suggest that the majority of the association between maternal height and offspring birth weight are due to direct effects of fetal inherited alleles (62).
Our exploration of known associations at the remaining four loci indicated a number of potentially relevant maternal traits that could influence birth weight via the intrauterine environment, including higher blood pressure (associations at SH2B3 and suggestive associations at EBF1, both between the blood pressure raising maternal allele and lower offspring birth weight), which has been causally associated with lower birth weight in Mendelian randomization analyses (14), and immune function (associations at SH2B3 and ACTL9). However, further studies are needed to elucidate the mechanisms at these loci and at KCNAB1, which showed no previous associations with other traits.
We observed weak evidence of heterogeneity of effect sizes between the EGG Consortium and UK Biobank components of our meta-analysis at the KCNAB1 and EBF1 loci, which led us to investigate possible explanations. A key difference was that birth weight was adjusted for duration of gestation in the majority of EGG studies, whereas the duration of gestation was unavailable in the UK Biobank. This raised the possibility that birth weight associations at KCNAB1 and EBF1 might arise from a primary effect on gestational duration, i.e. these loci could be primarily influencing the timing of delivery, rather than fetal growth. It is of course possible that the heterogeneity indicated false positive associations in the UK Biobank dataset that were not replicated in the EGG dataset. However, directionally consistent evidence of association with gestational duration and preterm birth in a recently published GWAS (P < 5 3 10 À8 at EBF1; P < 10 À3 at KCNAB1) suggests that this is unlikely.
There were some limitations to our study. First, the birth weight of first child was self-reported by mothers in the UK Biobank study, and so was likely subject to more error variation and potential bias than measured birth weight. However, maternal reports of offspring birth weight have been shown to be accurate (63,64), and we showed that the birth weight of first child variable was associated with maternal smoking, height, BMI and socio-economic position in the expected directions. A second limitation of our study was that by performing a maternal GWAS of birth weight that does not account for the fetal genotype, the analysis was biased against identifying loci at which the fetal genotype exerts opposing effects. Proof-of-principle that such loci exist is demonstrated by the effects on birth weight of rare mutations in the GCK gene, which act in opposite directions when present in either mother or fetus, but result in normal birth weight if both mother and fetus inherit the mutation (13). Our analysis conditional on fetal genotype at the 10 loci using a novel method (46) had greatly increased power to resolve maternal versus fetal effects compared with previous analyses in limited numbers of motherchild pairs (8). Although it is not yet computationally feasible to run such an analysis genome-wide, future studies will benefit from considering maternal and fetal genotype simultaneously at the discovery stage and are thereby likely to uncover further loci.
In conclusion, we have identified 10 maternal genetic loci associated with offspring birth weight, 5 of which were not previously identified in fetal GWAS of birth weight, and at least 7 of which represent maternal intrauterine effects. Collectively, the identified associations highlight key roles for maternal glucose and cytochrome P450 activity and potential roles for maternal blood pressure and immune function. Future genetic, mechanistic and causal analyses will be required to characterize such intrauterine effects, leading to greater understanding of the maternal determinants of fetal growth, with the goal of reducing the morbidity and mortality associated with low and high birth weights. (70); the Netherlands Twin Register (n ¼ 707) (71); the QIMR study of adult twins (n ¼ 892) (72); the Twins UK study (TwinsUK, n ¼ 1603) (73).
EGG Consortium discovery studies: genotyping, imputation and GWAS analysis
Genotypes in each study were obtained through high-density SNP arrays and up to $2.5 million autosomal SNPs were imputed to HapMap Phase II. Study protocol was approved at each study centre by the local ethics committee and written informed consent had been obtained from all participants and/or their parent(s) or legal guardians. Study descriptions and basic characteristics of samples in the discovery phase are presented in Supplementary Material, Table S1.
Within each study, we converted offspring birth weight (BW, g) to a z-score [(BW value -mean(BW))/standard deviation(BW)] to allow comparison of data across studies. We excluded multiple births, stillbirths, congenital anomalies (where known) and births before 37 weeks of gestation (where known). We assessed the association between each SNP and offspring birth weight using linear regression of the birth weight z-score against maternal genotype (additive genetic model), with sex and gestational duration as covariables (gestational duration was unavailable in the QIMR study, which contributed 4.5% of EGG participants). Ancestry principal components were included as covariables where necessary in the individual studies. Genomewide association analyses were conducted using PLINK (74), SNPTEST (75), Mach2qtl (76) or Beagle (77) (see Supplementary Material, Table S1).
Genome-wide meta-analysis of 11 EGG Consortium discovery studies
Before meta-analysis, SNPs with a minor allele frequency (MAF) < 0.01 and poorly imputed SNPs [info < 0.8 (PLINK), r2hat < 0.3 (MACH or Beagle) or proper_info < 0.4 (SNPTEST)] were excluded. To adjust for inflation in test statistics generated in each cohort, genomic control (78) was applied once to each individual study (see Supplementary Material, Table S1 for k values in each study). Data annotation, exchange and storage were facilitated by the SIMBioMS platform (79). Quality control of individual study results and fixed-effects inverse variance meta-analyses were undertaken by two meta-analysts in parallel at different study centres using the software package METAL (2009-10-10 release) (80). We obtained association statistics for a total of 2 422 657 SNPs in the meta-analysis for which at least 7 of the 11 studies were included. The genomic control inflation factor, k, in the overall meta-analysis was 1.007.
Follow-up of 18 SNPs in 13 additional EGG Consortium studies
We selected 15 SNPs that surpassed a P-value threshold of P < 1 3 10 À5 for follow-up in additional, independent studies. Of these, one SNP (rs11020124) was in LD (r 2 ¼ 0.63, 1000 Genomes Pilot 1 data) with SNP rs10830963 at the MTNR1B locus known to be associated with fasting glucose and Type 2 diabetes (81). We assumed that these represented the same association signal. Given its robust association with maternal glycemic traits likely to impact on offspring birth weight, we took only rs10830963 forward for follow-up at this locus. We identified three further SNPs at loci with robust evidence (P < 5 3 10 À8 ) of association with other phenotypes, and therefore higher prior odds of association with birth weight: rs2971669 near GCK (r 2 ¼ 0.73 with rs4607517 associated with fasting glucose) (60); rs204928 in LMO1 (r 2 ¼ 0.90 with rs110419 associated with neuroblastoma) (82) and rs7972086 in RAD51AP1 (r 2 ¼ 0.27 with rs2970818 associated with serum phosphorus concentration) (83). We took forward SNPs rs4607517, rs204928 and rs7972086 for follow-up at these loci, giving a total of 18 SNPs to be examined in additional studies.
The descriptions, genotyping details and basic phenotypic characteristics of the follow-up studies are presented in Supplementary Material, Table S2. Of a total of 13 follow up studies (n ¼ 18 319 individuals), 9 studies (n ¼ 15 288) provided custom genotyping of between 4 and 18 SNPs, whereas 4 studies (n ¼ 3031 individuals) had in silico genome-wide or exome-wide SNP genotypes available. Where SNPs were imputed, we included only those with quality scores (r2hat or proper_info) >0.8. We excluded directly genotyped SNPs showing evidence of deviation from Hardy-Weinberg Equilibrium at P < 0.0028 (Bonferroni corrected for 18 tests). Where genotypes were unavailable for the index SNP, we used r 2 > 0.8 proxies (see Supplementary Material, Table S12).
Preparation, quality control and genetic analysis in UK Biobank samples UK Biobank data were available for 502 655 participants, of whom 273 463 were women (84), and of these women, 216 811 reported the birth weight of their first child (in pounds) either at the baseline or follow-up assessment visit. We converted pounds to kg (multiplying by 0.45) for use in our analyses. No information was available on gestational duration or offspring sex. A total of n ¼ 64 072 women with offspring birth weight data available also had genotype data available in the May 2015 data release. Women identified as not of British descent (n ¼ 9681) were excluded from the analysis along with those reporting offspring birth weights of <2.5 or > 4.5 kg (n ¼ 5479). 'British descent' was defined as individuals who both self-identified as white British and were confirmed as ancestrally Caucasian using principal components analyses (http://biobank.ctsu.ox.ac. uk; date last accessed August 2, 2017). A total of 1976 of the women were asked to repeat the questionnaire at a follow-up assessment and therefore had two reports of birth weight of first child. Those with values differing by !1 lb (0.45 kg) were excluded (n ¼ 280). This resulted in n ¼ 48 632 women with both genotype data and a valid offspring birth weight value, which was z-score transformed for analysis (Supplementary Material, Table S3). UK Biobank carried out stringent quality control of the GWAS genotype scaffold prior to imputation up to a reference panel of a combined 1000 Genomes Project Consortium and UK10K Project Consortium. We tested for association with birth weight of first child using a linear mixed model implemented in BOLT-LMM (85) to account for cryptic population structure and relatedness. Genotyping array was included as a binary covariate in the regression model. Total chip heritability (i.e. the variance explained by all autosomal polymorphic genotyped SNPs passing quality control) was calculated using restricted maximum likelihood implemented in BOLT-LMM (85). We additionally analysed the association between birth weight of first child and directly genotyped SNPs on the X chromosome in 45 445 unrelated women identified by UK Biobank as white British. We excluded SNPs with evidence of deviation from Hardy-Weinberg equilibrium (P < 1 3 10 À6 ), MAF < 0.01 or overall missing rate > 0.015, resulting in 17 352 SNPs for analysis in PLINK v.1.07, with the first 5 ancestry principal components as covariates.
In both the full UK Biobank sample and our refined sample, birth weight of first child was associated with mother's smoking status, maternal BMI and maternal height in the expected directions (Supplementary Material, Table S3).
Overall meta-analysis of discovery and follow-up samples A flowchart of the overall study design is presented in Supplementary Material, Figure S1. We performed inverse variance, fixed-effects meta-analysis of the association between each SNP and birth weight z-score in up to 25 discovery and follow-up studies combined (maximum total n ¼ 86 577 women; 8 723 755 SNPs with MAF ! 0.01 plus 17 352 X-chromosome SNPs in 45 445 women) using METAL (80). To check for population substructure or relatedness that was not adequately accounted for in the analysis, we examined the intercept value from univariate LD score regression (15).
Approximate conditional analysis
At each of the identified loci, we looked for the presence of multiple distinct association signals in the region 1 Mb up-and down-stream from the lead SNP through approximate conditional analysis. Conditional and joint analysis in the analysis program, genome-wide complex trait analysis (86) was applied to identify secondary signals that attained genome-wide significance (P < 5 3 10 À8 ) using a sample of 10 000 individuals selected at random from the UK Biobank to approximate patterns of LD between variants in these regions.
Candidate gene search
To search for candidate genes at the four loci not already covered by the previous fetal GWAS of birth weight (8), we identified the nearest gene, searched PubMed for relevant information on genes within 300 kb of the index SNP, and queried the index SNP for eQTL or proxy SNPs (r 2 > 0.8) reported from GTEx v4, GEUVADIS, and 11 other studies using Haploreg v4.1 (http://archive.broadinstitute.org/mammals/haploreg/hap loreg.php; date last accessed August 2, 2017).
Estimating maternal and fetal genetic effects at the identified loci
Because of the small number of cohorts with both maternal and offspring genotype data available to conduct conditional analysis, we developed a novel method using SEM to estimate the conditional maternal and fetal genetic effects on birth weight, which we subsequently applied to the maternal and offspring birth weight data in the UK Biobank. SEM is a flexible multivariable statistical approach that allows investigators to model the covariance between an observed set of variables (i.e. here an individual's genotype, their birth weight and their offspring's birth weight) as a function of several latent unobserved variables (i.e. here the genotype of the individual's mother and the genotype of their offspring). The full details of the SEM method for estimating the conditional fetal and maternal effects are described elsewhere (46). Briefly, as seen in Figure 3, we fitted a structural equation model to three observed variables from the UK Biobank study; the participant's own self-reported birth weight, the birth weight of the first child reported by the women and the genotype of the participants. Our model included two latent variables; one for the individual's mother (i.e. grandmaternal genotype) and one for the genotype of the participant's offspring. We know these latent variables are correlated on average 50% with the individual's own genotype, hence the path coefficient between each of the latent variables and the observed genotype was set to 0.5. Our model also included residual error terms for the participant's own birth weight and the birth weight of their first child, a covariance parameter to quantify similarity between the error terms, and a variance parameter to model variation in the observed genotype. Using this model, we were able to simultaneously estimate the effect of maternal and fetal genotypes on offspring birth weight.
To fit the SEM, we used OpenMx (87) in R (version 3.3.2) (88) with the raw UK Biobank data, and the P-value for the fetal and maternal paths was calculated using a Wald test. We fitted a second SEM without the child and maternal path to conduct a 2 degree of freedom test for the effect of the SNP on birth weight.
Genotype data from the UK Biobank May 2015 release was used for analysis. We included 57 711 participants who reported their own birth weight and 45 436 women who reported the birth weight of their first child, giving a total of 78 674 unique individuals in the analysis (24 473 women had both their own and their offspring's birth weight). Individuals who were not of 'British descent' (as defined earlier), or were related to others in the sample, or who were part of multiple births, were excluded. The birth weight of offspring phenotype was prepared as described earlier, whereas own birth weight was prepared as described previously (8). The included sample was smaller than that used previously to fit the same structural equation model to a different set of SNPs in the UK Biobank (46), because of a narrower definition of ethnicity and a slightly narrower offspring birth weight range. The narrower definitions were chosen here to match closely the sample analysed in the main GWAS of the current study. We adjusted the individuals' own birth weight for sex, and both birth weight measures for the 12 genetically determined principal components and genotyping batch before creating z-scores for analysis.
We analysed up to 8705 mother-child pairs from 4 studies with both maternal and fetal genotypes available [ALSPAC, Exeter Family Study of Childhood Health, HAPO (non-GWAS) and DNBC-PTBCTRLS]. We used linear regression to test the association between birth weight z-score and maternal genotype conditional on fetal genotype and vice versa (also adjusting analyses for sex and gestational duration). We combined the results from the individual studies using inverse variance metaanalysis with fixed effects. We performed a further metaanalysis to combine the overall estimates with those from the SEM using UK Biobank data.
Look-ups in published GWAS and NHGRI GWAS catalog
We looked up associations between the 10 identified loci and various anthropometric and cardiometabolic traits in available GWAS result sets. The traits and sources are presented in Supplementary Material, Table S10. Where the index SNPs at KCNAB1, EBF1 and CYP3A7 were unavailable, we used proxies (r 2 ¼ 0.99, 1.00 and 0.41, respectively). Because GWAS summary statistics for blood pressure were not publicly available, we used the UK Biobank May 2015 genetic data release and tested associations between the SNPs and systolic and diastolic blood pressure (SBP and DBP) in 127 968 and 127 776 British descent participants, respectively. Two blood pressure readings were taken approximately 5 min apart using an automated Omron blood pressure monitor. Two valid measurements were available for most participants, and the average was taken. Individuals were excluded if the two readings differed by more than 4.56 SD (1 SD was equal to 19.7 and 13.1 mmHg for SBP and DBP, respectively), and blood pressure measurements more than 4.56 SD away from the mean were excluded. We accounted for blood pressure medication use by adding 15 mmHg to the SBP measure and 10 mmHg to the DBP measure in those reporting regular use of any antihypertensive. Blood pressure was adjusted for age, sex and centre location and then inverse normalized before analysis.
We additionally queried the NHGRI-EBI catalog of published GWAS (http://www.ebi.ac.uk/gwas/home, last accessed 2 August 2017) for associations P < 5 3 10 À8 between any additional traits or diseases and SNPs within 500 kb of, and in LD with, the index SNP at each locus.
Gene set enrichment analysis
We used MAGENTA to test for pathway-based associations using summary statistics from the overall meta-analysis (45). The software mapped each gene to the SNP with the lowest P value within a 110 kb upstream and 40 kb downstream window. The P value (representing a gene score) was corrected for confounding factors such as gene size, SNP density and LD-related properties in a regression model. Genes within the HLA-region were excluded. Genes were then ranked by their adjusted gene scores. The observed number of gene scores in a given pathway with a ranked score above a given threshold (95th and 75 th percentiles) was calculated and this statistic was compared with 1 000 000 randomly permuted pathways of the same size. This generated an empirical P value for each pathway, and we considered pathways reaching FDR < 0.05 to be of interest. The 3230 biological pathways tested were from the BIOCARTA, Gene Ontology, Ingenuity, KEGG, PANTHER and REACTOME databases, with a small number of additional custom pathway.
Supplementary Material
Supplementary Material is available at HMG online. Summary statistics from the meta-analysis are available at http://eggconsortium.org/. | 8,810.2 | 2018-01-03T00:00:00.000 | [
"Biology",
"Medicine"
] |
Mid-infrared computational temporal ghost imaging
Ghost imaging in the time domain allows for reconstructing fast temporal objects using a slow photodetector. The technique involves correlating random or pre-programmed probing temporal intensity patterns with the integrated signal measured after modulation by the temporal object. However, the implementation of temporal ghost imaging necessitates ultrafast detectors or modulators for measuring or pre-programming the probing intensity patterns, which are not available in all spectral regions especially in the mid-infrared range. Here, we demonstrate a frequency downconversion temporal ghost imaging scheme that enables to extend the operation regime to arbitrary wavelengths regions where fast modulators and detectors are not available. The approach modulates a signal with temporal intensity patterns in the near-infrared and transfers the patterns to an idler via difference-frequency generation in a nonlinear crystal at a wavelength where the temporal object can be retrieved. As a proof-of-concept, we demonstrate computational temporal ghost imaging in the mid-infrared with operating wavelength that can be tuned from 3.2 to 4.3 μm. The scheme is flexible and can be extended to other regimes. Our results introduce new possibilities for scan-free pump-probe imaging and the study of ultrafast dynamics in spectral regions where ultrafast modulation or detection is challenging such as the mid-infrared and THz regions.
Introduction
Ghost imaging originally emerged in the spatial domain as a correlation technique for imaging objects.The image is obtained by correlating the intensity of a reference beam which does not interact with the object itself but has its recorded spatial intensity distribution with the intensity of a test beam that illuminates the object and is measured by a single-pixel detector.This approach can offer significant advantages, including e.g.robustness against noise and distortions and enhanced security [1][2][3][4][5] .The probing patterns used in the reference beam can either be random and captured with a high-resolution detector or pre-programmed for computational ghost imaging [6][7][8][9] .
Single-pixel imaging has been demonstrated in various wavelength regimes, and very recently at the single-photon level in the mid-infrared region using frequency upconversion 10 .
In the past few years, the concept of ghost imaging has expanded beyond the spatial domain, into the temporal 11 and spectral domains 12,13 .Temporal ghost imaging (TGI) utilizes random temporal intensity fluctuations from a chaotic light source such as e.g.pseudo-thermal light or multimode laser as a reference probe signal.This signal is then modulated by a temporal object, and the object is subsequently reconstructed from the correlation between the random intensity fluctuations and the integrated intensity after the temporal object measured by a slow detector.
Enhanced TGI schemes have also been developed including differential TGI 14 and magnified TGI 15 to improve the signal-to-noise ratio and temporal resolution of TGI, or Fourier TGI 16 that can provide additional spectral information on the temporal object.TGI has generated considerable interest across various applications, including secure communication, underwater communication, and quantum device characterization [17][18][19][20] .Furthermore, similarly to computational ghost imaging in the spatial domain, computational TGI can be implemented by utilizing ultrafast modulators to pre-program temporal patterns rather than utilizing random temporal intensity fluctuations [21][22][23] .It is even possible to utilize the finite time-varying response of slow detectors to perform one-time readout TGI, enabling the use of very slow optoelectronic detection devices including lightemitting diodes or solar cells 24,25 .However, the absence of suitable instrumentation, such as ultrafast mid-infrared electro-optic modulators for pre-programming temporal patterns at mid-infrared light sources, has been a bottleneck in the direct implementation of computational TGI in the mid-infrared.
When compared to direct detection methods, TGI for retrieving ultrafast temporal objects offers several important advantages.One key benefit is that TGI does not require to precisely resolve the temporal profile of the probing signal after interacting with the object, making the technique intrinsically insensitive to temporal distortion effects that may occur during propagation after the temporal object 11 .Additionally, because the measurement is based on integrated intensity data that can be collected with low-bandwidth photodetector that typically exhibit high sensitivity, TGI enables the retrieval of temporal objects at substantially reduced optical power levels.This feature can prove to be particularly advantageous in scenarios characterized by significant transmission losses or light scattering for example.
The original demonstrations of TGI systems have operated in the near-infrared region, leveraging mature near-infrared low-coherence fiber lasers, telecom ultrafast modulators, and detectors 11,14,15,22,23,26 .This has hindered the exploitation of the full potential of TGI.To extend the wavelength regime of TGI, a two-color scheme that employs second-harmonic generation has been recently introduced 27 .In this approach, the probing intensity fluctuations generated from a multimode quasi-continuous-wave (CW) laser source are converted in a nonlinear crystal to the second harmonic wavelength where they can be measured in real-time with a fast detector.This technique, however, requires a laser source with random intensity fluctuations at the wavelength of the temporal object as well as a large number of measurement realizations.In principle, one could employ pre-programmed patterns instead of random intensity fluctuations, but this would require ultrafast intensity modulators at the wavelength of the temporal object, which may not be available.A particular region of importance where TGI has yet to be realized due to the lack of suitable detectors and fast modulators is the mid-infrared, and expanding the operational range of TGI to the mid-infrared region holds particular promise for applications in e.g.free-space communication 28 and ultrafast pump-probe experiments in areas such as all-optical modulation 29 and semiconductor carrier lifetime measurements.
In this paper, we fill this gap and introduce the concept of frequency downconversion TGI that enables the experimental realization of computational TGI in the mid-infrared.Specifically, instead of directly pre-programming temporal patterns at mid-infrared wavelengths, the approach modulates pre-programmed temporal patterns at near-infrared wavelengths using a conventional telecom modulator and, subsequently, these modulated patterns are transferred to a mid-infrared idler via difference-frequency generation (DFG) using a temporally stable continuous-wave (CW) near-infrared pump light source.The approach is generic and can be implemented to other wavelengths regions where fast modulators and/or detectors are not available.As a proof-ofconcept demonstration, we implement computational TGI in the 3.2 to 4.3 μm range using only one mid-infrared slow detector to image a temporal object generated from the on/off keying transmission of an acousto-optic intensity modulator (AOM).We anticipate that the proposed concept of frequency downconversion-based ghost imaging can open up new possibilities for achieving ultrafast computational imaging at wavelengths where conventional pre-programmed modulation techniques are challenging to apply, particularly in the mid-infrared and terahertz regions [30][31][32] .
Results
Experimental Setup.A schematic of the experimental setup is illustrated in Fig. 1 (see also Materials and methods for details).Light from a CW diode operating at 1542 nm is amplified by a commercial erbium-doped fiber amplifier (EDFA) with a maximum power of 200 mW.The 1542 nm CW light is temporally modulated by a 1.5 μm AOM (AOM1), which is driven by the preprogrammed patterns generated from an arbitrary waveform generator (AWG).The modulated 1542 nm light and a 4 W home-built ytterbium-doped fiber laser 33 tuned at 1060 nm used as a strong pump are collimated with fiber collimators (FC), and the two collimated beams are spatially multiplexed by a dichroic mirror.The ytterbium-doped fiber laser layout and spectral characteristics are provided in Supplementary note S1.The overlapping beams are then collinearly focused with a lens of 75 mm focal length into a periodically-poled LiNbO3 crystal (PPLN, HC Photonics) with antireflection coatings at pump, signal and idler wavelengths for DFG process that converts the modulated 1542 nm signal to the idler at 3.4 μm.The idler is then collimated by a CaF2 lens, and an antireflection-coated Germanium filter is used to block the residual near-infrared beams.The output power of the idler is about 100 μW and the preprogrammed temporal patterns modulating the 1542 nm signal are effectively transferred to the idler light through the DFG process.
To demonstrate proof-of-concept mid-infrared computational TGI, the transmission of a mid-infrared AOM (AOM2) driven by a bit sequence produced by the AWG is used as the temporal object to be retrieved.The mid-infrared idler light transmitted through AOM2 is detected by a slow photodetector (Thorlabs, PDAVJ5, 2.7-5.0 µm, 1 MHz bandwidth maximum) that cannot resolve the high-speed bit sequence and recorded by a real-time oscilloscope (RS, RTO2024, 2 GHz bandwidth) triggered by the AWG.The temporal object is then recovered from the correlation operation between the intensity recorded by the slow detector and the preprogrammed patterns generated by the AWG.Preprogrammed probing temporal patterns generated from the arbitrary waveform generator (AWG) are applied to an acousto-optic modulator operating in the 1.5 μm range (AOM1) to modulate the 1542 nm laser.The probing patterns are transferred to an idler wavelength at 3.4 μm using downconversion in a periodically poled lithium niobate crystal (PPLN) with a strong 1060 nm CW pump.The 3.4 μm idler light is transmitted through the temporal object generated by acousto-optic modulator operating in the 3 μm range (AOM2) and detected by a mid-infrared slow detector (MIR slow detector).The temporal intensity from the slow detector and the preprogrammed temporal probing patterns are recorded by the oscilloscope (OSC) and the temporal object is retrieved from their correlation.FC, fiber collimator; DM, dichroic mirror (HR@1135-1600 nm, HT@1060 nm).
Frequency downconversion of pre-programmed temporal patterns.The core of mid-infrared computational TGI based on frequency downconversion is to transfer the preprogrammed temporal probing patterns from the near-to mid-infrared spectral region.When using DFG as the nonlinear conversion mechanism, the time-dependent intensity of the generated idler Iidler These results confirm that pre-programmed probing patterns can be successfully transferred to the idler at 3.4 μm by DFG, and that the idler carries all the necessary information to perform computational TGI in the mid-infrared.The quality of the retrieved ghost image as a function of the number of probing temporal patterns is investigated in Fig. 4. The speed of the temporal object is set to 625 kbps to allow for comparison of the retrieved ghost images (cyan solid lines) with direct detection (blue dashed lines).As can be expected, the signal-to-noise ratio of the TGI increases with the number of probing patterns, and it is found that the temporal object can be already resolved with about 160 distinct patterns.
Improvement in the accuracy of the retrieved ghost image for an increasing number of realizations can be quantified by evaluating the peak signal-to-noise ratio (PSNR) between the directly measured and reconstructed temporal sequences 22 Mid-infrared computational TGI with Hadamard patterns.In order to reduce the number of probing patterns to reconstruct a given temporal object, we apply an orthogonal matrix of preprogrammed temporal patterns that modulate the signal at 1542 nm.Each probing pattern Hk (t) (k respectively, along with a direct measurement of the temporal object for comparison.We can see how the computational TGI scheme with orthogonal patterns can retrieve the temporal object and this with a significant reduced number of realizations as compared to randomly selected patterns. The PSNR of the reconstructed ghost imaging obtained with 32 Hadamard patterns in Fig. 5 (b) is 18.91 dB, which is higher than that of the reconstructed ghost imaging obtained with 250 random patterns.Therefore, the proposed mid-infrared computational TGI could lead to significantly faster reconstruction of the temporal object in mid-infrared as compared to using a two-color detection TGI scheme utilizing random temporal intensity fluctuations.
We also experimentally investigated the influence of the mid-infrared photodetector bandwidth on the retrieval of the temporal object as shown in Supplementary note S3.The comparison is performed for a 5 Mbps temporal object using a mid-infrared photodetector in the test arm with 200 kHz, 500 kHz and 1 MHz bandwidth.We can see that, even for a photodetector bandwidth 25 times lower than the speed of temporal object, the temporal object is still very well retrieved.Note that it has been shown at near-infrared wavelengths that computational TGI enables to reconstruct a temporal signal with a 50 ns time scale using a detector with a bandwidth as low as 1 kHz bandwidth 22 .One can anticipate that this should also be possible with our frequency downconversion based computational TGI scheme, clearly providing new possibilities to detect ultrafast signal in mid-infrared region by using commercially available mid-infrared photodetector with MHz bandwidth.
Temporal resolution.The temporal resolution of our mid-infrared computational TGI is determined by the minimum temporal duration of the pre-programmed temporal patterns modulated on the 1542 nm light, which is 0.1 μs in our proof-of-concept demonstration limited by the modulation bandwidth of the AOM1.In principle, one could use lithium niobate electro-optic modulators with ultrahigh modulation bandwidth up to 40 GHz available at telecom wavelengths to increase the temporal resolution up to 25 ps.Moreover, very recently, the programmable generation of near-infrared pulses with temporal structure down to tens of femtosecond was demonstrated using a digital micromirror device and femtosecond light source 34 .Combining frequency downconversion with such ultra-fine temporal patterns in the near-infrared 34 , one could further increase the temporal resolution of mid-infrared computational TGI to the sub-ps level.
Extension to other mid-infrared wavelengths.While in the previous section we have focused on the particular case of computational TGI at 3.4 μm, in principle the frequency downconversion scheme is flexible and can be extended to image temporal objects at other wavelengths.To experimentally demonstrate this, we tuned the pump wavelengths and corresponding phase matching conditions to generate a tunable idler from 3.2 to 4.3 μm and performed computational TGI with Hadamard patterns in this range.Figure 6(a) shows the spectrum of the generated idler light by tuning the pump wavelength and a fixed signal wavelength at 1542 nm.Depending on the target idler wavelength, the pump is the ytterbium-doped fiber laser with tunable range in 1040-1090 nm or a random Raman fiber laser tunable from 1110 to 1150 nm range (see Supplementary note S1 for the lasers layout and spectral characteristics).The period and temperature of PPLN crystal are adjusted to fulfil the phase-matching conditions (see Materials and methods section for details).Results of computational TGI with Hadamard patterns are show in Fig. 6(b) for a temporal object at different mid-infrared wavelengths.One can see that, independently of the operating wavelength, the computational TGI successfully retrieve the temporal object over the full 3.2 to 4.3 μm investigated wavelength range (essentially limited by the transparency window of the PPLN crystal).To further extend the operating wavelength region beyond 5 μm, one may employ other non-oxide nonlinear crystals, such as ZnGeP2 35 , orientation-patterned gallium phosphide (OP-GaP) 36 , and BaGa4Se7 (BGSe) 33 .For example, DFG of 1.5 μm light modulated by the pre-programmed temporal patterns and a 2 μm tunable CW fiber laser as the pump should allow for performing computational TGI in the 6-8 μm region, while DFG with 1.3 μm tunable CW fiber laser would enable computational TGI at 10 μm 35 .
Discussion
In conclusion, we have introduced the concept frequency downconversion-TGI whereby temporal patterns modulating a CW laser signal are downconverted in a nonlinear crystal to an idler that interacts with a temporal object to be characterized.The wide availability of tunable lasers in the near-infrared 37,38 allows for flexible and versatile operation of the downconversion TGI scheme, enabling to extend TGI to wavelength regimes where there is a lack of fast detectors and modulators.Using this approach, we have experimentally demonstrated computational TGI in the wavelength range from 3.2 to 4.3 μm.We have also demonstrated computational downconversion TGI where one uses orthogonal patterns to reduce the number of distinct probing measurements.
To highlight the benefits of our approach, a side-by-side comparison with existing TGI methods is provided in Supplementary note S4.Compared to previously reported TGI schemes, the proposed frequency downconversion-computational TGI enables imaging temporal object in the mid-infrared region by using commercially available telecom light sources and modulator and very low bandwidth mid-infrared detectors.It should also be noted that instead of using a slow midinfrared detector that records the integrated intensity after the temporal object, upconversion detection after the temporal object could be applied to perform bucket detection at visible or nearinfrared region with a slow silicon or InGaAs detector.
The temporal resolution of the downconversion computational TGI is determined by the temporal resolution of the preprogrammed temporal patterns modulated on the 1.5 μm light.Using a 1.5 μm telecom electro-optic modulator with modulation bandwidth up to 40 GHz, one can increase the speed of the temporal object up to tens of Gbps at mid-infrared, and we therefore anticipate the proposed mid-infrared computational TGI scheme to provide new possibilities to characterize advanced high-speed mid-infrared intensity modulators 39,40 and enable implementing high-speed free-space optical communications in the 3-5 μm and 8-14 μm atmospheric transmission windows 28,41 even in the presence of atmospheric turbulence 42 (see Supplementary note S5 for a possible schematic of high-frequency transmission and secure communication in the mid-infrared region using frequency downconversion computational TGI).Furthermore, applying recently developed near-infrared programmable temporally structured pulses 34 in the frequency downconversion process, computational TGI with temporal resolution down to the sub-ps level could lead to a new generation of scan-free pump-probe ghost imaging for the study of ultrafast dynamics in the mid-infrared spectral region such as e.g.ultrafast all-optical modulation signal and semiconductor carrier lifetime measurements 29 .
Finally, we emphasize that the concept of frequency downconversion ghost imaging is generic, and it can also be applied in the spatial and spectral domains, which could unlock new possibilities to realize e.g.single-pixel imaging and spectroscopy in spectral region where preprogrammed modulation is difficult to apply such as the mid-infrared and THz regions 31,32.For example, computational spectral domain ghost imaging has been demonstrated in the near-infrared region using a programmable spectral filter 43,44 enabling to perform spectroscopy in weak light condition and strong turbulence 44 .With frequency downconversion, one can transfer the preprogrammed spectral patterns to the idler wave in mid-infrared and perform computational ghost spectroscopy in the mid-infrared region, opening up new perspectives for remote sensing in industrial, biological or security applications as many molecules display fundamental vibrational absorptions in midinfrared spectral region 45 .
Materials and methods
Probing patterns generation: 250 realizations of randomly chosen binary patterns or 32-order Hadamard matrix patterns are generated on a computer and used to drive the AWG (RIGOL DG4062, 60 MHz bandwidth, 2 channels).The output of AWG is divided into two paths, one is connected to the oscilloscope (R&S RTO2024, 2 GHz bandwidth) and another is connected to the RF driver of AOM1 (Gooch & Housego Fiber Q) to modulate the transmission of AOM1 and thus the temporal intensity of the 1542 nm signal.
Temporal object generation: The temporal object S(t) consisting of a test bit sequence with duration T is sent to the AWG that drives the transmission of AOM2 and modulates the idler intensity at the mid-infrared wavelength.AOM2 has a transmission above 95% and diffraction efficiency exceeding 80% in the 3.3-3.7 μm range, with a rise time of 140 nm/mm.In our experiments, the beam diameter of the mid-infrared light on AOM2 is designed as 1 mm.After interacting with the temporal object, the mid-infrared idler light is detected by a mid-infrared photodetector and the output of mid-infrared detector is recorded in the oscilloscope as PDk (t).
1542 nm light temporal waveform measurement: 10% of the modulated 1542 nm signal light after AOM1 is tapped out using a 1:9 beam splitter and detected with a near-infrared photodetector (Daheng Optics, DH-GDT-D002N, 100 MHz bandwidth).
Direct detection: For the direct detection measurements, AOM1 is removed, and the mid-infrared idler light after the temporal object is directly detected with the mid-infrared photodetector.
Mid-infrared idler light characterization: The spectrum of the mid-infrared idler light is measured with a grating-scanning monochromator (Zolix Omni-λ500i) coupled to a lock-in amplifier (SRS, SR830) and a liquid nitrogen mercury cadmium telluride detector (Judson, DMCT16-De01).The power of the idler light is measured by a power meter (Ophir, 3A, power range: 10 μW-3 W @ 0.19-19 μm).
Data processing: The pre-programmed temporal patterns Rk (t) and the waveforms of mid-infrared detector PDk (t) measured by the oscilloscope are saved and processed offline.In the offline processing, the time-integrated output of the mid-infrared detector = ∫ (′ )′ is used to reconstruct the temporal object by computing the second-order correlation between Rk (t) and Bk calculated over the probing ensemble of realizations.The source code for reconstructing the temporal object is provided in Supplementary Note S2.
Peak signal-to-noise ratio: The accuracy of the retrieved temporal object can be evaluated from the peak signal-to-noise ratio (PSNR) between the directly measured and reconstructed object: , where G and D are the retrieved and directly measured temporal object, respectively, and K is the number of points in the temporal object sequence.MAX is the peak value directly measured in the temporal sequence.
The PPLN temperature and period applied to phase match DFG at different mid-infrared wavelengths are given in Table 1.
Fig. 1
Fig. 1 Experimental setup for computational temporal ghost imaging based on frequency downconversion.
(t) is proportional to () ∝ () × () where Ipump(t) and Isignal(t) represent the time-dependent intensity of the strong pump at 1060 nm and signal at 1542 nm, respectively.Since the temporal intensity of the 1542 nm signal is modulated by the preprogrammed pattern and the intensity of the 1060 nm pump is continuous and stable over time, one expects that the temporal intensity profile of the idler follows that of the pre-programmed probing patterns modulating the signal beam.To confirm this, we first performed a direct comparison between the time-resolved intensity profile of the 1542 nm signal modulated by a sequence of pre-programmed but randomly selected binary patterns and that of the 3.4 μm idler after the PPLN crystal.Due to the limited bandwidth (1 MHz) of the mid-infrared photodetector used for the comparison, the speed of the AOM1 that modulates the 1542 nm signal was set to 625 kbps.Three examples of bit sequences recorded over a time window of 51.2 μs time at 1542 nm (red dash line) and the corresponding downconverted idler temporal profile at 3.4 μm (black solid line) are shown in Fig. 2(a).One can see how the temporal modulation of the idler at 3.4 μm nearly perfectly follows that of the signal at 1542 nm.This near-perfect correspondence is further confirmed in Fig. 2(b) showing the time-to-time intensity cross-correlation map between the bit sequence modulating the 1542 nm signal and the recorded modulation at 3.4 μm calculated over 250 consecutive 0-1 random probing sequences.
Fig. 2
Fig. 2 (a) Comparison of time-resolved intensity profiles of 1542 nm light modulated by the pre-programmed randomly selected binary patterns (red) and 3.4 μm idler temporal intensity after the PPLN crystal (black).(b) Timeto-time intensity fluctuations cross-correlation between the 1542 nm signal and 3.4 μm idler calculated over 250 temporal windows containing different probing patterns.
Figure 3 (
Figure 3(a) shows the experimental result obtained by correlating the binary probing
Fig. 3
Fig. 3 Mid-infrared TGI results using pre-programmed, randomly selected probing patterns.(a) Experimental ghost image (cyan solid line) of a temporal object with a modulation speed of 625 kbps retrieved from 250 distinct temporal probing patterns.(b) Experimental ghost image of a temporal object with a modulation speed of 5 Mbps.In (a) and (b) the blue dashed line corresponds to direct detection with a 1 MHz bandwidth mid-infrared photodetector.(c) and (d) shows additional TGI results of other examples of 5 Mbps temporal objects.The orange and purple dashed lines represent the corresponding bit sequences measured at the output of the AWG.
Fig. 4
Fig. 4 Comparisons of retrieved ghost images (cyan solid line) and direct detection result (blue dashed lines) for an increasing number of probing patterns.The speed of the temporal object was set to 625 kbps.
(t) are transferred to the idler at 3.4 μm and, after interacting with the temporal object, the integrated intensities Bko and Bke are recorded by the slow detector for the k th pattern of Hko (t) and Hke (t), respectively.The temporal object is retrieved from the secondorder correlation between Hk (t) and Bk calculated over the 32 realizations where Bk = Bko-Bke.
Fig. 5
Fig. 5 (a) Temporal waveforms measured at 3.4 μm after the PPLN crystal, confirming that the Hadamard patterns modulating the 1542 nm signal can be successfully transfer to the idler at 3.4 μm through frequency downconversion.The modulation speed of the AOM1 is set as 625 kbps.(b) Experimental result of computational TGI obtained from
Fig. 6 .
Fig. 6. (a).Idler spectrum generated by DFG of tunable fiber laser pump and 1542 nm signal light in the PPLN crystal.(b) Experimental temporal ghost image obtained from 32 realizations of Hadamard patterns for a 5 Mbps temporal object at selected mid-infrared wavelengths as indicated.
derived from the row of a 32-order Hadamard matrix whose elements are either +1 or −1.As temporal intensity cannot be negative, we employ two distinct matrices of probing patterns Hko (t) where the −1 elements in Hk (t) are substituted with 0 (see Supplementary Note S2), and Hke (t) which is the opposite pattern of Hko (t) (i.e. the 1 elements in Hko (t) are replaced with 0 and the 0 elements in Hko (t) are replaced with 1) such that Hk (t) = Hko (t)-Hke (t).The signal at 1542 nm is first modulated by the temporal patterns Hko (t) and subsequently by the temporal patterns Hke (t).Hko (t) and Hke
Table 1
Phase-matching conditions to achieve efficient DFG in PPLN at mid-infrared wavelengths. | 5,840.8 | 2023-07-17T00:00:00.000 | [
"Physics"
] |
Collective Refraction of a Beam of Electrons at a Plasma-gas Interface
In a recent Brief Comment, the results of an experiment to measure the refraction of a particle beam were reported [P. Muggli et al., Nature 411, 43 (2001)]. The refraction takes place at a passive interface between a plasma and a gas. Here the full paper on which that Comment is based is presented. A theoretical model extends the results presented previously [T. The effective Snell's law is shown to be nonlinear, and the transients at the head of the beam are described. 3D particle-in-cell simulations are performed for parameters corresponding to the experiment. Additionally, the experiment to measure the refraction and internal reflection at the Stanford Linear Accelerator Center is described. The refraction of light at an interface is as familiar as rainbows and " bent " pencils in a glass of water. The refrac-tion of charged particles at an interface between two media, on the other hand, is not commonly considered. Take for example the case of a 30 GeV electron incident on water. The refraction takes the form of a small amount of random scattering [1,2]— an rms scattering angle of 20 mrad after 1 mm. In this paper we report the collective refraction of a 30 GeV beam of electrons at a plasma͞gas interface that is orders of magnitude larger than would be expected from single electron considerations and is unidirectional. Although the density of plasma nuclei is 10 7 times less than that of the water example above, the collective response of the plasma produces a deflection of the electron beam of the order of 1 mrad. The electron beam exiting the plasma is bent away from the normal to the interface in analogy with light exiting from a higher index medium. To understand the physical picture of this collective refraction mechanism, consider the geometry shown in Fig. 1. A dense beam of electrons is incident from the plasma side on a planar boundary between a medium of ionized plasma gas and unionized gas. For simplicity, we neglect the small effect of Coulomb scattering by the gas and treat the boundary as between plasma and vacuum. We consider the case of dense beams or underdense plasmas such that the beam's density n b is greater than the plasma density n o. While the beam is fully in the plasma, the space charge at the head of the beam repels the plasma electrons out to a …
plasma nuclei is ten million times less than that of the water example above, the collective response of the plasma produces a deflection of the electron beam of the order of one milliradian.The electron beam exiting the plasma is bent away from the normal to the interface in analogy with light exiting from a higher index medium.
To understand the physical picture of this collective refraction mechanism, consider the geometry shown in Fig. 1.A dense beam of electrons is incident from the plasma side on a planar boundary between a medium of ionized plasma gas and un-ionized gas.For simplicity we neglect the small effect of Coulomb scattering by the gas and treat the boundary as between plasma and vacuum.We consider the case of dense beams or underdense plasmas such that the beam's density n b is greater than the plasma density n o .While the beam is fully in the plasma, the space charge at the head of the beam repels the plasma electrons out to a radius r c [3].The remaining plasma ions constitute a positive charge channel through which the latter part of the beam travels.The ions provide a net focusing force on the beam [4].When the beam nears the plasma boundary, the ion channel becomes asymmetric producing a deflecting force in addition to the focusing force.This asymmetric plasma lensing gives rise to the bending of the beam path at the interface.The bending of the beam by the collective effect of the (passive) medium at the boundary is the particle analog to refraction of photons at a dielectric boundary [5].
To estimate the order of magnitude of this deflection, consider the simple case of the beam at the edge of a sharp plasma boundary (Fig. 1b) of density n o .The beam of radius r b and density n b has a positive ion charge column on one side of radius r c =α(n b /n o ) 1/2 r b , where α is 1 for long beams [3] and approximately 2 for beams of length on the order of the plasma wavelength [6] (due to a resonant overshoot of the expelled electrons).For beams shorter than the plasma wavelength (λ p ) there is not enough time for the channel to reach its full extent and α can be shown to scale as , where σ z is the Gaussian bunch length and c/ω p is the plasma skin depth (=λ p /2π =c / 4πn o e 2 / m ).At the edge of the column is a layer of electrons with a total charge equal and opposite to that in the ion column; and on the other side of the beam there is no charge [7].The nearby positive charge will attract the beam toward the center of the plasma.The electric field at the beam is easily estimated for this picture from Coulomb's law applied to a cylinder with the cross-section [ 8 ] shown in Fig. 1b, yielding The impulse on the beam is found by multiplying this force by the time that the beam is within a channel radius of the edge.The time spent near the edges is 2r c /c sinφ, where φ is the angle between the beam and plasma boundary.Dividing by the particles' parallel momentum γmc gives a scaling law for the deflection angle θ valid for φ greater than θ: where N / 2π σ z is the charge per unit length of the beam, and r e is the classical electron radius.
Note that for long beams compared to the plasma skin depth (or equivalently, high plasma densities) such that α is constant, the dependence of θ on plasma density cancels out.This is because higher density, although giving a stronger deflection force, gives a narrower channel and hence a shorter time for the impulse.
We note that the impulse approximation used in obtaining Eq. ( 2) breaks down at small incident angles φ such that the deflection θ is on the order of φ.In this case, the beam can be internally reflected.From the simulations, when φ is less than θ above, the beam is deflected just enough to skim along the interface.That is for small angles φ.
The simple analytic model above was tested with the electron beam from the Stanford Linear Accelerator Center (SLAC) linac at the Final Focus Test Facility.The experimental setup has been described in Ref. [9].The plasma was created by photoionization of a column of lithium vapor by an ArF laser (193nm).The plasma density was 1-2x10 14 cm -3 with a cross section of approximately 2.5 mm × 2.1 mm and length of 1.4 m.The beam consisted of 1.9x10 10 electrons at 28.5 GeV in a Gaussian bunch of length σ z =.7mm and spot size σ x ~ σ y ~40 µm.The electron beam traversed a thin glass pellicle located 57 cm before the plasma, and overlapped with the ionizing laser beam that reflected off the same pellicle.The angle between the electron bunch initial trajectory and the laser beam φ (in Eq. 2) was remotely adjusted using the pellicle tip-tilt angle, and was monitored by measuring the deflection of a reference HeNe laser.The electron beam propagated over a distance of about 12 m after the plasma and its shape and transverse position at that location were measured using a Cerenkov radiator (1mm thick aerogel) and an imaging system.
The transverse position was also monitored independently at several other locations downstream of the plasma, including by a beam position monitor located from the plasma exit 4.3m.Sample results are shown in Figs. 2 and 3.
To compare to the experiments as well as to provide further insight into the physical mechanisms involved in the refraction we performed fully self-consistent, electromagnetic particlein-cell simulations in three dimensions [10].Fig. 2c shows a snapshot of the real space of the beam and plasma electron density (blue) from a simulation.The head of the beam has emerged undeflected from the plasma at this time, but the tail portion has been deflected toward the plasma and is near the plasma boundary.This results in a characteristic splitting of the beam downstream into two as seen in Figs.2b and 2d.
The apparent break-up of the beam into bunchlets can be understood in the following way.
There is a focusing force from the plasma that increases from head to tail as the plasma responds to the beam [6].The head is not focused, but as the plasma responds, the ion column forms producing focusing, and the first waist that forms has separated the head from the next bunchlet.This bunchlet is separated from the remainder of the beam by another waist.Note that once plasma blowout occurs, the remainder of the beam has the same focusing and deflection angle.
Figure 3 is a plot of beam deflection (θ) measured with the beam position monitor versus angle between the laser and the beam (φ).The solid curve is the prediction from the impulse model.
For incident angles φ less than 1.3 mrad, the beam appears to be internally reflected in agreement with Eqs. ( 2) and (3).
The simulations and experimental results presented here show that it is possible to refract or even reflect a particle beam from a dilute plasma gas.Remarkably, for a 28.5 GeV beam, the collective effects of a plasma are strong enough to "bounce" the beam off of an interface one million times less dense than air.It is possible to ascribe to the plasma an effective index of refraction and corresponding Snell's law for this collective phenomenon.However, the refractive index is angle-dependent and intrinsically nonlinear, and we leave it for a future paper.
The refraction and total internal reflection of light obviously have many significant applications such as the guiding of light pulses in optical fibers.It is interesting to contemplate the possibility of a similar use of internal reflection of particles to guide electricity in "vapor wires" that are rapidly created or reconfigured by shining laser beams through a gas.One can imagine fast optical kickers replacing magnetic kickers or even compact magnetless storage rings.For such applications it will probably be necessary to use an auxiliary laser or particle beam (which may be of much lower energy) to pre-form the ion channel and deflect all of a trailing particle beam.As an example, consider a gas that is ionized to a density of 10 16 cm -3 by a laser that propagates at an angle of a few milliradians to the path of a focused electron beam.For beam parameters typical of SLAC, the deflection force (Eq.2) is equivalent to a 50 kG dipole magnet, and the turn-on time is approximately ω p -1 = 200 femtoseconds. We Work supported by USDoE #DE-FG03-92ER40745, DE-AC03-76SF00515, #DE-FG03-98DP00211, #DE-FG03-92ER40727, NSF #ECS-9632735, NSF #DMS-9722121.
Fig. 1 .Fig. 2 .
Fig. 1.Schematic of refraction mechanism: a) side view and b) front view of beam and plasma illustrating how asymmetric blowout creates a net deflection force.
Fig. 3 .
Fig. 3. Experimentally measured electron beam deflection θ as a function of angle between laser and e-beam (i.e., angle of incidence φ).Solid curve is the impulse model from Eqs. (2) and (3) with α=1.4 for the bunch length and plasma density corresponding to this run (0.7 mm and 1x10 14 cm -3 , respectively).Since the beam deflection measured by the beam position monitor is a weighted would like to acknowledge the godfather of this collaboration, John M. Dawson.We thank Luis O. Silva for useful discussions and our collaborators on prior work leading to this experiment: W.P. Leemans, P. Catravas, E. Esarey, S. Chattopadhyay, P. Volfbeyn, and D.Whittum; as well as S. Rokni and T. Kosteroglou.We thank Phil Muntz of USC for generously sharing his laser and Dr. Peter Tsou of JPL for providing the aerogel. | 2,842 | 2001-09-28T00:00:00.000 | [
"Physics"
] |
Geometric bloch vector solution to minimum-error discriminations of mixed qubit states
We investigate minimum-error (ME) discrimination for mixed qubit states using a geometric approach. By analyzing positive operator-valued measure (POVM) solutions and introducing Lagrange operator (cid:2) , we develop a four-step structured instruction to find (cid:2) for N mixed qubit states. Our method covers optimal solutions for two, three, and four mixed qubit states, including a novel result for four qubit states. We introduce geometric-based POVM classes and non-decomposable subsets for constructing optimal solutions, enabling us to find all possible answers for the general problem of minimum-error discrimination for N mixed qubit states with arbitrary a priori probabilities.
I. INTRODUCTION
Distinguishing between quantum objects is one of the most fundamental tasks in information theory.Because of its particular importance in quantum communication [1,2] and quantum cryptography [3] , discriminating among these objects plays a very important role in quantum information protocols.
The quantum state discrimination problem has been the subject of many researches during recent decades [4].The starting point of these efforts date back to 1978 when Helstrom addressed this issue in his famous book [2].He studied the case of two states and obtained the minimumerror probability in the framework of quantum detection and estimation theory [5].Since then, based on the different approaches on the issue (error-free protocols with a possibility of failure and protocols with minimum-error probability) many developments have been achieved [6][7][8][9][10][11][12][13][14][15][16][17][18][19].Finding which one is the most suitable is highly dependent on the purpose of the process.
The extension to the general case of N possible states {ρ i } N i=1 with associated a priori probabilities {p i } N i=1 is not a straightforward problem.For these problems, however, there are necessary and sufficient conditions on the optimal measurement operators π i known as the Helstrom conditions [2,20,21] where Γ = i p i ρ i π i is a positive Hermitian operator known as Lagrange operator.In fact, these two conditions are not independent and the second condition can be obtained from the first condition [20,21].So, the main condition which is both necessary and sufficient condition for an optimal measurement is Eq.(1).However, we can still use Eq. ( 2) in our procedure.It is also of particular importance to note that all minimum-error measurements that give the optimal probability define a unique Lagrange operator [22].
The Helstrom conditions can be used to check the optimality of a candidate measurement, however, they cannot be used to construct the optimal measurement for a general problem.They can be useful for the problem with some symmetries among states which can be a guide for guessing the form of measurement in such a problem [23][24][25][26][27][28].
Although, for a while, only problems with certain symmetries seemed solvable, in recent years there have been successes in solving problems by employing the conditions (1) and (2).Particularly, for the qubit states, the Bloch representation provides a useful tool to solve the problem of ME discrimination of qubit states.For example, Hunter in 2004 presented a complete solution for pure qubit states with equal a priori probabilities [29].Samsonov in 2009 applied the necessary and sufficient conditions for indicating an algorithmic solution for N pure qubit states [30].Deconinck and Terhal in 2010 used the discrimination duality theorem and the Bloch sphere representation for a geometric and analytic representation of the optimal guessing strategies among qubit states [31].Bae in 2013 represented a geometric formulation for a case with equal priori probabilities in a situation where quantum state geometry is clear [32,33], and in the same year a complete analysis for three mixed states of qubit systems was done by Ha et al. [34].Weir et al. in 2017 used the Helstrom conditions constructively and analytically for solving a problem with any number of qubit states with arbitrary priori probabilities [35] with giving the central role in their approach to the inverse of Γ instead of Γ itself.Later, they used their method to find optimal strategies for trine states [36].
In this paper, we reconsider the problem of ME discrimination of N mixed qubit states with arbitrary a priori probabilities using a different geometric approach.In 2010, Deconinck and Terhal indicated a general algorithm to find the Lagrange operator Γ by using the geometric properties of the Bloch sphere.Our main purpose in this paper is to extent their work and give an analytical method in detail to find all solutions of a general mixed qubit states problem, as well as studying some properties of these solutions.
Our starting point is the Helstrom conditions.Employing these conditions and the Bloch representation for qubit states, one can find all possible ME POVM answers of a given problem with just knowing the Lagrange operator Γ.Generally, we solve the problem first for arbitrary priori probabilities, then, as a special case, for equal a priori probabilities, compare this method with existed results [32].For equal a priori probabilities minimal sphere, circumsphere, plays an important role to find Γ.Furthermore, for a general problem with arbitrary a priori probabilities, we establish a general structured instruction to obtain Γ, with some practical tips involving the polytope of states we introduce two classes of unchanged guessing probability and unchanged measurement operators.This instruction will be used to find Γ for a general ME problem with two, three, and four qubits.A direct solution of the last case of four-qubit states is completely new.Equipped with these tools, we are able to find all possible ME discrimination measurements of the problem.By introducing non-decomposable POVM answers of the problem an alternative way to obtain a general optimal answer is achievable by considering a convex combination of all these non-decomposable POVM sets.
This paper is organized as following: In Sec.II, we review the problem of ME discrimination in a general way by reformulating the problem for qubit states using the Bloch vector representation and represent a way to find all optimal answers of a problem using the Lagrange operator Γ by constructing a four-step instruction.Next, in Sec.III we solve a general problem of mixed qubit states with arbitrary priori probabilities for two, three, and four qubits cases.As an example, for three states we resolve the case of trine states to compare it with the known results in [36].Also, another illustration for the case of four qubit states is helpful.In Sec IV we consider some cases of N > 4 including N = 5 and N = 6 states and indicate regions chracterizing by first non-decomposable answer in different colors.We also revisit the special case of N states with equal a priori probabilities.Finally, the last section includes the summary and conclusions.
II. QUANTUM STATE DISCRIMINATION
Suppose that we are given N states ρ i , i = 1, • • • , N , with priori probabilities p i .Our task is to discriminate among these states by performing the optimal measure-ment with minimum probability of error.Such measurement needs to have N outcomes corresponding, respectively, to each of the N states ρ i .A general measurement can be described by a positive operator-valued measure (POVM) [37][38][39], which is defined by a set of operators {π j } satisfying Each of the possible measurement outcomes j is characterized by the corresponding POVM element π j .The probability of observing outcome "j" when the state of system is "i" is Then, the probability of making a correct guess when the given set is {p i , ρ i } N i=1 will be In a minimum-error discrimination approach the aim is to seek the optimal measurement that gives rise to the minimum average probability of error occurred during the process or, equivalently, to the maximum probability of making a correct guess given by P guess = max{P corr }.This is achievable, as we mentioned previously, if and only if the POVM elements satisfy the Helstrom conditions (1) and (2).
To continue, let us reformulate conditions (1) and (2) by summing the second one over j which results in where we have defined ρi = p i ρ i .From these equations ( 6) and ( 7) (and its Hermitian conjugate π i (Γ − ρi ) = 0 ∀i), in the case that Γ − ρi is not a full rank operator, one can easily see that supp{Γ− ρi } ⊥ supp{π i }, or equivalently, supp{Γ− ρi } ⊆ ker{π i }, where supp{•} and ker{•} denotes the support and kernel of an operator, respectively, or in other words, equations ( 6) and (7) imply that both determinants det{Γ − ρi }, det{π i } have to be zero .So far, everything is quite general and no reference has been made to qubit states.In what follows we consider, however, the qubit cases.
A. Discrimination of qubit states
Solving a general problem for minimum-error (ME) discrimination is not an easy task and there is no analytical solution in general.However, there are some approaches for qubit states [29-32, 34, 35].In this work, we consider the problem of the most general case of qubit states using a geometric approach.We use the Bloch vector representation to write ρ i and Γ as Where v i = (v xi , v yi , v zi ) is the Bloch vector corresponding to the state ρ i , σ = (σ x , σ y , σ z ) is a vector constructed by Pauli matrices.Moreover, γ 0 = Tr(Γ) = P guess and γ = (γ x , γ y , γ z ).Non-negativity of ρ i requires 0 ≤ |v i | ≤ 1.The lower and upper bounds are achieved for the extreme cases of maximally mixed and pure states, respectively, so |v i | can be regarded as a kind of purity.Using Eqs. ( 8) and ( 9), one can write where we have defined the sub-normalized Bloch vector ṽi = p i v i associated with the state ρ i and its priori probability p i .But in this paper, for simplicity, we call both vectors v i and ṽi Bloch vectors.It follows from inequality (6) that So, this equation is equal to the main Helstrom condition, Eq. ( 1), and from now on, we will check optimality of our answers by Eq. (11).
To continue, if the inequality strictly holds, i.e. positive definiteness of the determinant of Γ− ρi , inevitably corresponding measurement operators π i have to be zero.But if equality holds, i.e. zero determinant of Γ − ρi , then π i does not have to be zero.So, in the case of nontrivial answer we obtain the following useful formula Regarding γ 0 = P guess , this relation implies that the distance between two vectors γ and ṽi is equal to the difference between two probabilities, i.e. the guessing probability and the priori probability p i associated with state ρ i .The greater p i , the closer are two vectors ṽi and γ.Keeping in mind that Eq. ( 12) does not generally hold for all states.For the sake of simplicity, without loss of generality, we consider that the set of {ρ i } N i=1 and corresponding priori probabilities {p i } N i=1 are arranged in such a way that the first {ρ i } M i=1 are those states that the rank of (Γ − ρi ) is less than 2, so it means that these states are detectable in optimal case and the other states {ρ i } N i=M+1 are states that for them (Γ − ρi ) is a full rank operator, so there is no way to detect them in an optimal way, i.e. π i = 0 for i = M + 1, • • • , N .With this terminology and using the fact that det(π i ) = 0 when Γ − ρi = 0 or full rank, it turns out that the measurement operators π i 's for i = 1, • • • , M must be rank-one and therefore proportional to projectors.As a qubit projector is characterized by its unit vector ni on the Bloch sphere, we have for i = 1, • • • , M , where α i 's are real and range from zero to one, α i = Tr(π i ).The completeness relation (3) requires the following conditions on the parameters α i 's where 0 ≤ α i ≤ 1.On the other hand, inserting Eqs.(10) and (13) in the Helstrom condition (7), and by using Eq. ( 12) we find the unit vector ni as Obviously, knowing vector γ is equivalent to know all the unit vectors ni corresponding to the nonzero measurements π i .
Corollary 1.As a result of Eqs.(14), γ has to be confined in the convex polytope of the points ṽi 's.
Corollary 2. It can be concluded from Eqs. (12) to (14) that by translation of all vectors ṽi 's, with unchanged p i 's, by a fixed vector inside the Bloch sphere the POVM answers will be unchanged, and the only thing that matters is relative locations of ṽi 's.So, if all of ṽi 's move with the same displacement vector a (ṽ i → ṽi +a), we expect the same answer.In other words, a set of {p i , ρ ′ i } has the same answer of measurement operators and guessing probability as the initial set {p i , ρ i }, where This class is characterized by an unchanged guessing probability.
From the above discussion it follows that in order to characterize the optimal measurements, we have to know both the vector γ and the real parameters α i 's.If γ is known, then by using Eq. ( 14) it is an easy task to find all possible sets of parameters {α i }.Each set of {α i } provides a different answer for the optimal POVM.So, finding γ is our priority that exposes all possible answers.As Γ is determined with a scalar γ 0 and a vector γ, we need at most four states to determine Γ satisfying Eq. (12).
On the other hand, if we are given a set of ME discrimination POVM {π i }, we are able to find the Lagrange operator Γ (or equivalently γ) with the help of relation Γ = i ρi π i .Then by using Eq. ( 14) we can obtain all possible optimal answers.
B. Types of states and measurements in an optimal strategy
Similar to [31] we distinguish the states in a general qubit state discrimination problem.We showed that γ has to be confined in the convex polytope of the points ṽi 's.So, based on different representation of γ the states in a discrimination problem can be considered as unguessable, nearly guessable, and guessable states [31].The states for which Γ − ρi is a full rank operator (inequality part of Eq. ( 11)) are called unguessable since their related POVM elements are always null.The nearly guessable states are the states that although for them the operator Γ − ρi is not full rank, they do not appear in any optimal measurement and therefor their related measurement operators are null, and finally guessable states are those states that satisfy Eq. ( 12) and their POVM elements are nonzero for some optimal measurements.
It is important to study under what conditions the optimal measurement of a discrimination problem is unique.First, based on the preceding discussion, we have the following proposition for a discrimination problem with N ≤ 4 Proposition 3. In case of N ≤ 4, the set of parameters {α i } satisfying Eq. ( 14) is unique if the states form a N − 1 simplex inside Bloch sphere.However, it is neither necessarily the case for N > 4, nor for N ≤ 4, where the states do not form a N − 1 simplex, in a sense that each possible set of {α i } leads to a complete solution for the optimal measurement operators.
Proof.The proof simply comes from the theorem 3.5.6 in [40].Based on this theorem, a point in an k-simplex with vertices x 0 , x 1 , • • • , x k can be written in a unique way of the vertices.So, in the problem of N ≤ 4 qubit state discrimination if these states form a simplex the convex combination for γ is unique.
Consequently, in the cases with non-unique solutions we may face with some problems with different POVM measurements that give us the same optimal guessing probability.Thus, one can think about the possibility of constructing different measurements by combining these measurements in a convex way.This fact motivates us to divide optimal measurements in two different families of measurements which we call them decomposable and non-decomposable measurements.
is an optimal measurement for some discrimination problems.According to (13) we are able to define the set of Bloch unit vectors E = {n 1 , n2 , • • • , nm nm nm } such that their convex polytope contains the Origin (Eq ( 14)).If one can find a subset E ′ ⊂ E in such a way that its related convex polytope still contains the Origin, then M is decomposable, otherwise, it is non-decomposable.Based on the three dimensional direction of ni ni ni 's, non-decomposable sets have at most four elements.
An exception for this definition is the existence of a measurement operator π j proportional to unit matrix 1, then, there is an one-element non-decomposable set If M is decomposable then there is at least one subset E ′ ⊂ E which its corresponding POVM, M ′ , is optimal for the same minimum-error discrimination problem {p i , ρ i } N i=1 .In other words, any subset E ′ ⊂ E with at most four elements of ni ni ni 's that contains origin in its convex polytope of its elements, and if we cannot discard any of its elements in a way that origin still be in the convex polytope of the remaining elements, then it is a non-decomposable subset.The proof for this proposition is straightforward according to the definition of decomposable and nondecomposable measurements.
C. Qubit states: Arbitrary priori probabilities Now, we are taking a step further by considering a general problem of a set of qubit states with arbitrary priori probabilities.Before we provide an instruction to find the answers of a general problem {p i , ρ i } N i=1 , consider that one is able to find the Lagrange operator Γ by using Eq.(12).Based on the previous discussion, we know that one of the answers of an ME problem is determined with at most four number of states ρ i 's satisfying Eq. ( 12), so one can test Step (i).-First, if there is a ρ j with the greatest priori probability (p j > p i , ∀i), it must be examined whether Γ is ρj or not.If Γ = ρj , we have from Eqs. ( 8) and ( 9) that γ 0 = p j and γ = ṽj .Then, to satisfy Helstrom condition, Eq. ( 11), it requires the following condition where p ji := p j − p i and dji := |ṽ j − ṽi |.
If the above condition holds for every i, the answer for POVM will be π i = δ ij 1 and P guess = p j .It is called no measurement strategy; simply guessing the state ρ j [41].
Step (ii).-If the above condition is not met, it has to be tested two qubit states together.But first we consider two qubit states ρ l and ρ m with arbitrary priori probabilities p l and p m ( p l ≥ p m ).By applying Eq. ( 12) for states ρ l , ρ m and then subtracting the results we get which defines a hyperbola with its foci at ṽl and ṽm .This hyperbola is characterized with parameters a, b, c 1.The hyperbola of Eq. ( 18), which only the left part of this hyperbola could be an allowed area for γ. (this hyperbola exists if Rlm be positive, Eq. ( 21)) which can be derived from Bloch vectors of states ρ l , ρ m and their corresponding probabilities p l , p m (see Fig. 1) as and It follows from Eq. ( 11) that only the left branch of the hyperbola could provide a candidate answer for γ.
In this case of two-state case the candidate γ is located at the distance of from point ṽl on the connecting line between ṽl and ṽm .As Rlm 's are distances, they are definitely positive.So, possible candidates for γ 0 and γ are and where the primes indicate that they are still candidate answers.Now, the question is: which two states to pick?
In this case we have the following proposition Proposition 5.In a case where a two element optimal POVM exists the answer can be obtained by considering two states which maximize γ ′ 0 lm .Proof.For simplicity, consider that γ ′ 012 is the maximum.Assume that there are states i and j with γ ′ 0ij < γ ′ 012 that satisfy Helstrom conditions.It means that then, writing inequality for l = 1, 2 and summing them gives the second inequality comes from the triangle inequality.So, we end up in γ ′ 0ij > γ ′ 012 which is in contradiction with our first assumption γ ′ 0ij < γ ′ 012 .
To continue, we should find two states which maximizes Eq. ( 22).Then test these two states using the following condition which comes from the Helstrom condition, Eq. ( 11).The cases i = l, m are trivial.If the Eq. ( 22) is maximized for more than a pair of states, we must test Helstrom condition for each of them.Note that ME discrimination might have some answers and what we need is just one of them to find γ.If the condition ( 27) is not met we proceed to the next step.
Step (iii).-Inthis step we consider three-state case.Let us label them by l, m, n. γ ′ is constructed from these three states by drawing three hyperbolas arising from each pair of states l, m, n (lm, ln, mn).The convex polytope in this case is a triangle and it is enough to just calculate the point that two hyperbolas meet (γ ′ ).This point must be inside the triangle.The related probability can be obtained from Eq. (38) .As before, for three states that maximizes this probability we must test Helstrom condition.The cases i = l, m, n are trivial.Likewise the second step, we might have more than one three-state case to maximize Eq. (38).In this case we must test the Helstrom condition for each of them.
Step (iv).-Ifwe still do not have the answer, we have to consider four-state case.This step must reveal the answer since we know that there is always a nondecomposable answer with maximum four number of states.In this case, although there are 4 2 = 6 hyperbolas, it is enough that three hyperbolas, with a common focus point ṽm , meet at one point inside the polytope to obtain γ ′ .The related probability can be obtained from Eq. (50).A four-state case that maximizes this probability must be states we are searching for to find γ.As before, this four-state case that maximizes Eq. (50) might not be unique.Deriving of equation (38) and Eq.(50) for steps (iii) and (iv) respectively will be discussed in next section.
By looking at the Bloch representation of Γ, Eq. ( 9), we see that there are just four unknowns to be determined, a scalar γ 0 and a three dimensional vector γ.To this purpose, one can easily find them by considering four qubit states with their corresponding Bloch vectors (See Eq. (12).So, we do not expect to have more than four states.Furthermore, as a typical ME discrimination problem has, at least, a non-decomposable answer, one can see that these four steps are enough to find γ.Although we introduced a way to find required states which lead to the Lagrange operator, the fourth step seems less probable, because we often expect that to have at least one non-decomposable POVM answer with less number of elements for a given probable.
At this point, it is useful to explicitly express some remarkable results: • According to the above instruction, equation (18) has a key role to find γ, which is unaffected as long as the relative distances between states dlm 's, and also priori probabilities p i 's are consistent.It suggests that by translation or rotation of the polytope, or equivalently all ṽi 's as a whole, with a fixed vector inside the Bloch sphere, P guess is consistent.This class is characterized by an unchanged guessing probability.
• Furthermore, by translation and rescaling of the polytope inside the Bloch sphere, with unaffected p i 's, measurement operators which are defined by Eq. ( 13) will also be unchanged.This class is also characterized by an unchanged measurement operators.
• Based on the mentioned instruction, to have a complete solution of a general qubit state discrimination we need to find all non-decomposable sets, of the problem (It can be simply done, after finding γ, by looking at the geometry of the ni 's; see the explanation after proposition (4)).Then, we are able to obtain all of the possible optimal POVM's using convex combinations of all non-decomposable sets, i.e. k i=1 β i M ′ i .Where k is the number of all non-decomposable sets, 0 ≤ β i ≤ 1, and k i=1 β i = 1 to satisfy completeness relation of a POVM measurement.For this purpose, in the next section, we will solve the problem for these cases analytically.Some of these results like the problem of two states discrimination have been known for a long time.
B. Three States
We now consider a general case of three arbitrary qubit states with priori probabilities p 1 ≥ p 2 ≥ p 3 .To proceed further, note that discrimination of an arbitrary set of three qubit states can be reduced to the discrimination of three qubit states, all embedded in x− z plane, defined by where θ and φ are defined in Fig. 2. For a proof see Appendix A. The corresponding Bloch vectors ṽi 's are given by ṽ1 = (0, ap 1 ), ṽ2 = (bp 2 sin θ, bp 2 cos θ), ṽ3 = (−cp 3 sin φ, cp 3 cos φ), where (x, z) is a simplified representation for (x, 0, z).With the assumption that p 1 is the greatest priori probability, first we have to check whether ρ1 is equal to Γ or not.So, with the help of Eq. ( 17), we need to check the following conditions Where |(x, z) t | = √ x 2 + z 2 .Obviously, if the above conditions are not met, we must calculate P guess from Eq. ( 22) for every two states and then check the Helstrom condition using Eq. ( 27).The explicit form of this conditions to detect one of the pairs of (ρ 1 , ρ 2 ), (ρ 1 , ρ 3 ) or (ρ 2 , ρ 3 ), respectively, are with the corresponding γ ′ ij = (x ′ ij , z ′ ij ) t for these states from Eq. ( 23) where p ij := p i − p j and Rlm is defined in Eq. ( 21).
For any specific problem with known p i , θ and φ, each condition of Eqs.(33) to (35) that is satisfied will be the only solution of the problem.Accordingly, the optimal POVM elements are given by π i = 1 2 (1 + ni • σ), (see Eq. ( 13)), where the corresponding unit vectors of the nonzero measurement elements can be written as Finally, if none of the above conditions were met, the solution for γ must be found while all three states are detectable.It means that three hyperbolas should meet at one point.Note that since three points are embedded in a plane it is enough to find the intersection between two hyperbolas in this plane.So, by using the properties of hyperbolas the guessing probability can be obtained as where ).( 40) To find the vecor γ, we can use the geometric of triangle in Fig. 3.For this purpose, by using the Gram-Schmidt method, we can use two edges of the triangle ṽ2 − ṽ1 and ṽ3 − ṽ1 to construct an orthonormal basis from them for the plain which the triangle lies in.If we show the orthonormal basis by k1 and k2 , then where Hence, γ can be written as So, POVM elements can be obtained using Eqs.( 13) and (15).
It is of the notice that we first altered the Bloch vectors with a rotation matrix, Appendix A, and a displacement.In case of ρ i and ρ j to be two detectable states, optimal operators are π i = ρ ni and π j = ρ −n i with ni = ṽi −ṽ j dij .So, the answers of the optimal operators {π i } are easily obtained with the vectors of the original problem.But, for the case in which three states are detectable, after obtaining ni 's we must rotate them back, with inverse of rotation matrix, to the original problem and then use the formula π i = ρ ni to get the optimal operators for the original problem.
Here, for more illustration of this case, let us consider discrimination among trine states, i.e. the qubit states associated with equidistant points on the surface of the Bloch sphere.They are defined by Then, based on the previous discussion, it is possible to rotate these states on the Bloch sphere to align them on x − z plane By writing the corresponding density matrices and comparing them with Eq. ( 29), we get a = b = c = 1 and θ = φ = 2π/3.
To continue, we first assume the case with equal priori probabilities.As the three states are pure and located on the surface of the Bloch sphere, finding the answer is straightforward.According to the discussion in Sec.II A (or more straightforward, later, from Appendix.C), the guessing probability is P guess = 2 3 and the corresponding POVM operators for the original problem are π i = 2 3 |ψ i ψ i |, known as the trine measurement [23].Next, we consider the case with arbitrary priori probabilities.We use the same parameterization of Weir et al. [36] for priori probabilities; p 1 = p + δ, p 2 = p − δ, and p 3 = 1 − 2p, where the assumption 0 < p 3 ≤ p 2 ≤ p 1 implies that 1 3 ≤ p ≤ 1 2 and 0 ≤ δ ≤ min{3p − 1, p}.Following our method of Sec.II C, one can see that the "no measurement strategy" cannot be optimal for γ 0 = p 1 and γ = p 1 ẑ.It is obvious by investigating that the condition p 1i ≥ d1i is not satisfied for every i.Equations ( 31) and (32) give the same result as well.So, we must have either two or three POVM measurement elements.For two-state case one can see that Eq. ( 22) is maximum for states ρ 1 and ρ 2 .In this case Eq. ( 33) gives where d12 = 3p 2 + δ 2 .Solving Eq. ( 46) for δ, gives us four roots, which the only valid values for In this region of two-state case the guessing probability is In the region where the POVM measurement has to be a three-element POVM, we can use Eq. ( 18) for three states to find γ = (γ x , γ y , γ z ).By using these equations and some algebraic calculations, the desired γ can be obtained as Using P guess = γ 0 = p i + |ṽ i − γ| or Eq. ( 38) the guessing probability can be calculated The equations ( 47) and (48), and this P guess are in complete agreement with [36] that was obtained with a different approach.
C. Four States
In this section we consider a general problem of four qubit states.According to the previous discussion, we can divide this problem into two cases: (i) When the four states form a two dimensional convex polytope, in this case according to Caratheodory theorem the vector γ can be written as a convex combination of at most three points, therefore the optimal measurement has at most three nonzero elements.In this case, we only need to solve the problem either by using the three steps instruction in the Sec.II C. (ii)When four states form a three dimensional polytope, i.e. a tetrahedron.Based on the type of states a four element optimal POVM may exist.If so, to find the guessing probability and optimal measurement we use the properties of tetrahedrons.Each tetrahedron is composed of four triangle faces, six edges, and four vertices.Based on the location of γ, the number of detectable states in an optimal way can be specified.If γ be on one vertex then an optimal answer is a no measurement strategy and the other three states are unguessable.If it lies on one edge then an optimal measurement will be a POVM with two non-zero elements, i.e. two guessable states.Furthermore, lying γ on one of the faces means that three states can be detected through an optimal measurement and one state will remain unguessable.Finally, if γ be an interior point of tetrahedron then all four states are guessable, i.e. six hyperbolas meet at a single point.To find the guessing probability in this situation, let us show the vector γ as an interior point in the tetrahedron ṽ1 ṽ2 ṽ3 ṽ4 , therefore the guessing probability can be written as (51) we end up with P guess = ζ 2 which we will later see is consistent with result of Corollary 7 for equal a priori probability states, as well as the result obtained in [32].For other cases, based on the values of α and β we can have different optimal answers including two-element and three-element measurements.Fig. 5, for different values of α and β, shows the simplest answer.In this case, since four vectors in (57) constitute a thetrahedron, all answers are unique answers.
Knowing the optimal answers for N ≤ 4, we have everything to solve the general problem of N qubits states.For N > 4 cases since the optimal answer will be fulfilled with a non-decomposable optimal POVM with at most four non-zero elements, we can use the guessing probabilities that were found for N ≤ 4 cases.For this purpose, the guessing probability of a problem of N states ,{p i , ρ i } N i=1 , can be rewritten as where S is a subset of {ρ i } N i=1 with the number of elements N S (N s ≤ 4).P S guess in this relation can be obtained by using equations ( 22), (38) and (51).
IV. N > 4 CASES
Now that we know how to find answers for the cases N = 2, 3 and 4, we have all we need to solve a general problem of N qubit states.To simplify this task and save our time, we wrote a simple code with the Mathematica.This code is based on our instruction and its primary task is searching for the first optimal answer including γ and P guess [42].
To start, we first consider an example of a five qubit states consisting of four equiprobable symmetric qubit states (57) (p 1 = p 2 = p 3 = p 4 = p), and one more state v 5 = ζ(sin θ cos φ, sin θ sin φ, cos θ) with a different prior probability δ = 1 − 4p.We want to analyze how this state disturbs the optimal answer of four symmetric states which we already know their optimal answer is a four-state case for small δ.For δ ≤ 0.2, one can observe that the only optimal answer can be obtained by a four-element measurement.So, the optimal measurement is still the same as the problem of four equiprobable symmetric qubit states with the same purity, i.e. γ = 0 and not disturbed by the new state.However, in this case the guessing probability decreases by increasing δ (P guess = 2pζ = ζ 1−δ 2 ).As δ exceeds this value (δ > 0.2) other cases will appear as well.Fig. 6 shows the situation for δ = 0.22 and p = 0.195.Based on the location of the state ρ 5 on the Bloch sphere all three cases can happen.As δ increases, the region of four-state case becomes smaller and finally disappears, i.e. γ can be revealed with either two or three-state cases.And for high values of δ, in most cases, a two-state case will give the optimal answer.
To continue, we first reconsider all the five states of the previous example.Then, we change its third and fifth states.The angles of state ρ 3 in spherical coordinate are {3π/4, arccos (−1/ √ 3)}.We replace its azimuthal angle with a variable ω.Then, we can set the polar angle of ρ 5 to π/3.Their prior probabilities are unchanged as before.So, there are three fixed states, with two rotating states on circles.Fig. 7 shows the answers for 0 < φ, ω < 2π.
For another example, we want to consider a more general case of six qubit states with no symmetries or equal The first qubit state is parametrized by two spherical angles φ and θ, so it is free to rotate on a sphere of radius 0.85.Fig. 8 shows the different regions for which the γ can be obtained.Based on this figure, in many cases a three-element measurement will fulfill the guessing probability and there is no need to go to the next step to find γ.Moreover, the region for four-element measurements are small.However, it should be noted that it does not mean there is no optimal four-element measurements for the rest of region where two-state and threestate are answers.For instance, for θ = 1.585948 and φ = 1.288376, there are three possible optimal measurements with unique γ = (0.070525, 0.057738, 0.048073) and P guess = 0.370574.Two of them which are nondecomposable answers can be obtained by considering two three-state sets of {ρ 1 , ρ 2 , ρ 3 } and {ρ 1 , ρ 3 , ρ 4 }.The third one which is decomposable can be obtained by four states {ρ 1 , ρ 2 , ρ 3 , ρ 4 }.We note that the optimal answer in green area can not be obtained either with two-element or three-element measurements.
As the last example, let us consider the special case of N qubit states ρ i with equal a priori probabilities i.e. p i = 1/N for i = 1, • • • , N .We show in appendix C in order to reach P guess , we have to choose a sphere with maximum number of states lying on.It is simply the minimal sphere covering all the v i 's (the same result was obtained in [33] with a different approach).We call it circumsphere, defined by {R, O}, where R = |v i − O| is its radius, and O is its circumcenter (Fig. 9).In view of this and the fact that 0 < R ≤ 1 and P guess = γ 0 , we get Moreover, Eq. ( 15) reduces to A more detailed discussion on the case of equal a priori probabilities can be found in the appendix C.
V. CONCLUSION
In this paper, we have revisited the problem of minimum-error discrimination for mixed qubit states.
For this aim, we employed the necessary and sufficient Helstrom condition in a constructive way to obtain the discrimination parameters for a typical problem of ME qubit state discrimination.Our tools in this way are the representation of qubit states in terms of the Bloch vectors.For the case of arbitrary priori probabilities each two qubit states construct a hyperbola and the desired γ will lie on one of its parts which is next to the more probable state.Using these tools, we introduce an instruction to find the Lagrange operator Γ.Then, with this Lagrange operator, we can find all optimal POVM measurements.
We also discuss some properties of the POVM answers involving the geometric of the polytope of qubit states inside the Bloch sphere, and introduce some classes of answers like the classes of unchanged guessing probability and unchanged measurement operators.
We show that for an optimal strategy, there might be some states that are not detectable i.e. their related POVM elements are zero.So, in the problem of ME discrimination of N qubits {ρ i } N i=1 some states might be undetectable.We indicate them as unguessable, nearly guessable, and guessable states, assuming that there are M number of guessable states in a typical optimal problem (1 ≤ M ≤ N ).
We show that every POVM set M , can be divided into a limit number of non-decomposable POVM subsets E ′ .Finding all of these subsets is an alternative way of constructing a general ME answer of the given problem.They might be also practical in a case that detecting some states ρ j 's are not interested, so we might be able to use those non-decomposable subsets that do not include nj nj nj 's.It is also more reasonable when preparing measurement operators are expensive.
To illustrate the proposed instruction, we need to know the solutions for N ≤ 4. So, we solved the problem for these cases.The case of two states has been known for long time (Helstrom formula).For the case of three qubit states, by applying some rotation and translation we show that the problem can be reduced to the problem of three qubit states in x − z plane and therefore we obtain a full analysis of the problem using our approach.Then, as an specific case, the problem of trine states with arbitrary priori probabilities is considered.We show that the results are in complete agreement with previous findings.Moreover, We also solve the case of four qubit states for the first time, using the geometry of tetrahedron and the intersection of 4 2 = 6 hyperbolas deriving from each two states.
And finally, we use with this instruction to solve some example for the cases with N ≥ 4 including examples of five and six qubit states with non-equal prior probabilities and the general case of N states with equal priori probabilities.In the later case , equal priori probabilities, finding Γ corresponds to finding a sphere with maximum number of states on it.
Appendix A: The Rotation Matrix for three qubit states Discrimination Any qubit state ρ i can be identified with a point inside the Bloch sphere using its Bloch vector v i .The same is true for multiplication of the state by its priori probability i.e. ρi = p i ρ i and ṽi = p i v i .Three states together with their priori probabilities represent three points inside the Bloch sphere.From geometry we know that three non-collinear points determine a plane and each plane can be described by its normal vector which is a vector orthogonal to the plane (i.e.orthogonal to every directional vector of the plane).Having the normal vector in hand, in the next step we can find the rotation matrix which rotates this vector to align it in the y-direction.This rotation matrix then rotates each ṽi in such a way that the y-component of all ṽi become equal.With a translation along y-axis, one can eliminate the y-components of these rotated vectors.Finally, with an additional rotation in x − z plane, we can rotate ṽi in a way that the state with largest priori probability be aligned in the z-direction.Since these rotations and the translation does not affect the relative distances and angles between states, the guessing probability to this new set of states is equal to the original one.The POVM's can be easily related as well, with a rotation, as was explained in the Sec.III B.
To obtain corresponding rotation matrix consider three points P 1 ,P 2 and P 3 in the Bloch sphere optimal measurement operators of the first set { 1 N , ρ i } N i=1 are still optimal answers for the new set { 1 N +K , ρ i } N +K i=1 because defined ni 's from the first problem with N states are still unchanged for the new problem, Eq. (C7).However, this is not the only answer of the new problem.(iii) The guessing probability of the new problem is given in terms of the guessing probability P guess = 1 N (1 + R) of the original one as P N ew guess = N N +K P guess , Eq. (C5).Corollary 8. Consider the case of N equiprobable qubit states with the same purity |v i | located on the circumsphere {R, O}.If any of the following statements be true then the other ones will also be true.In other words, these statements can be interchangeably used in this particular case.
(ii) The radius of the circumsphere is given by R = |v i |.
All implications are trivial and can be inferred by the results given above.A particular case is when there are N equiprobable pure states, |v i | = 1.It follows from the corollary that the guessing probability of N equiprobable pure qubit states with O at Origin is an example which achieves its maximum value P guess = 2/N .This is because the radius of this circumsphere can not be greater than one.
to reach the answer!But, with the following instruction we can find it very quicker.
FIG. 3 .
FIG. 3. Representation of three qubit states problem when a three elements non-decomposable POVM is optimal.
FIG. 5 .
FIG. 5.The answers of four qubit states with the same purity ζ and different prior probabilities depending on two parameters α and β.When α and β intends to zero the answer is obtained by considering all four states.By increasing α and β, γ can be revealed by either two-state or three-state case.
FIG. 6 .
FIG. 6.The answers for five qubit states, all with the same purity ζ.Four equiprobable symmetric states (p = 0.195) with one disturbing state (p5 = δ = 0.22) rotating on a sphere with radius ζ and angles {φ, θ}.In this case, three-state case in most cases reveals γ.
these three points is defined byax + by + cz + d = 0, (A2)where the coefficients a, b and c determine the components of normal vector n vector n is the unit vector orthogonal to every direction vector of the plane, it can be obtained by the following equation n = (P 2 − P 1 ) × (P 3 − P 2 ) |(P 2 − P 1 ) × (P 3 − P 2 )| .(A4) | 10,949.4 | 2021-08-27T00:00:00.000 | [
"Mathematics"
] |
Site-Specific 68Ga Radiolabeling of Trastuzumab Fab via Methionine for ImmunoPET Imaging
Bioconjugates of antibodies and their derivatives radiolabeled with β+-emitting radionuclides can be utilized for diagnostic PET imaging. Site-specific attachment of radioactive cargo to antibody delivery vectors provides homogeneous, well-defined immunoconjugates. Recent studies have demonstrated the utility of oxaziridine chemistry for site-specific labeling of methionine residues. Herein, we applied this approach to site-specifically radiolabel trastuzumab-derived Fab immunoconjugates with 68Ga, which can be used for in vivo PET imaging of HER2-positive breast cancer tumors. Initially, a reactive azide was introduced to a single solvent-accessible methionine residue in both the wild-type Fab and an engineered derivative containing methionine residue M74, utilizing the principles of oxaziridine chemistry. Subsequently, these conjugates were functionalized with a modified DFO chelator incorporating dibenzocyclooctyne. The resulting DFO-WT and DFO-M74 conjugates were radiolabeled with generator-produced [68Ga]Ga3+, to yield the novel PET radiotracers, [68Ga]Ga-DFO-WT and [68Ga]Ga-DFO-M74. In vitro and in vivo studies demonstrated that [68Ga]Ga-DFO-M74 exhibited a higher affinity for HER2 receptors. Biodistribution studies in mice bearing orthotopic HER2-positive breast tumors revealed a higher uptake of [68Ga]Ga-DFO-M74 in the tumor tissue, accompanied by rapid renal clearance, enabling clear delineation of tumors using PET imaging. Conversely, [68Ga]Ga-DFO-WT exhibited lower uptake and inferior image contrast compared to [68Ga]Ga-DFO-M74. Overall, the results demonstrate that the highly facile methionine-oxaziridine modification approach can be simply applied to the synthesis of stable and site-specifically modified radiolabeled antibody–chelator conjugates with favorable pharmacokinetics for PET imaging.
Materials and Instrumentation
Chemicals were purchased from Merck Chemicals Ltd or Fluorochem Ltd unless otherwise specified and used without further purification.Other solvents were purchased from VWR International and used without further purification.18.2 MΩ water was used to prepare all buffers and aqueous solutions.Oxazridine-N3 1 and DBCO-PEG4-DFO were prepared as previously described. 1,2Trastuzumab was obtained as the biosimilar Herzuma in solution (21 mg/mL) from the Pharmacy Department at Guy's and St. Thomas' NHS Trust, London.Fresh human serum was obtained from a healthy volunteer.PD-10 size exclusion columns were purchased from GE Healthcare UK Ltd.Zeba TM Spin Desalting columns, 7 kDa MWCO, 0.5 mL, 5 mL and 10 mL were purchased from Life Technologies Limited, UK.Amicon Ultra 0.5 mL centrifugal filters (10 kDa MWCO), OverExpress TM C43 (DE3) Chemically Competent Cells and Overnight Express TM Instant TB Medium were purchased from Merck Chemicals Ltd.Instant thin layer chromatography (iTLC-SG) were obtained from Varian Medical Systems UK, Ltd.NuPAGE TM 4 to 12 %, Bis-Tris, 1.0 mm mini protein gels were obtained from Life Technologies Limited, UK.We thank S. Elledge (University of California, San Francisco) for providing the trastuzumab M74 Fab expression vector.Protein A affinity chromatography and size exclusion chromatography were performed on an AKTA pure 25L purification system using an HiTrap Protein A HP 5 mL column (Cytiva) and HiLoad 16/600 Superdex 75 pg column (GE Healthcare), respectively.Protein concentration was determined using a Thermo Scientific Nanodrop One spectrophotometer.High resolution mass spectrometry data and intact protein mass spectrometry were recorded by Mass Spectrometry Service, Imperial College London using a Waters LCT Premier (ES-TOF) spectrometer.HPLC-MS analysis were conducted on a Waters 515 HPLC pump, 2998 photodiode array detector and 3100 mass detector using either a Waters XSelect CSH C18 Column, 130 Å, 5 μm, 4.6 mm x 100 mm column or a Waters XBridge BEH C18 Column, 130 Å, 5 μm, 4.6 mm x 100 mm column.Gamma counting was performed using a Wallac 1282 Compugamma Universal Gamma Counter.Peptide mapping analysis was performed by the BSRC Mass Spectrometry and Proteomics facility at the University of St Andrews.Analytical size exclusion radioHPLC traces were acquired using an Agilent 1260 series HPLC system with an in-line radioactivity detector (LabLogic Systems Limited 1"NaI/PMT Detector).A BioSep TM SEC-s2000 column (5 μM, 145 Å, 300 x 7.8 mm) was used with a mobile phase of PBS supplemented with sodium ethylenediamine tetraacetate (2 mM) at a flow rate of 1 mL/min.iTLC strips were visualised using a Lab Logic Dual Scan-RAM radio-TLC/HPLC Scanner.SDS PAGE gels were visualised using an Invitrogen iBrightFL1000 imaging system (bright view) and GE Healthcare Amersham Typhoon imager (autoradiography).
Preparation of Trastuzumab M74 Fab fragment
Trastuzumab Fab plasmid vector containing HC.M107L, LC.T74M mutations was kindly provided by Elledge et al. (University of California, San Francisco).The protein sequences of light chain and heavy chain are shown below.Fab fragment was expressed in OverExpress TM C43 (DE3) chemically competent cells (Merck) on a 2 L scale.C43 (DE3) cells were grown in Overnight Express TM Instant TB Medium (Merck) at 37 o C with shaking at 190 rpm until the optical density at 600 nm reached 0.75 after which the temperature was lowered to 30 o C and further incubated for 16 h.The cells were harvested by centrifugation (5000 g, 20 min), resuspended in 20 mM Phosphate buffer, and lysed by sonication (500 W, 20 % amplitude, 15 s on 45 s off for 10 min).Cell lysate was collected by centrifugation (38758 g, 30 min) and the supernatant was pass through a 0.45 μm syringe filter prior to purification.Purification: 1) Cell lysate was first applied to a 5 mL HiTrap Protein A HP column (binding buffer: Phosphate-buffered saline (PBS), elution buffer: 0.1 M sodium citrate buffer pH 3) on an AKTA pure 25L purification system followed by 2) size exclusion chromatography using a HiLoad 16/600 Superdex 75 pg column (GE Healthcare, UK) (Elution buffer: PBS).Samples for ESI-TOF MS analysis were prepared at 10 μM in 0.1 M ammonium acetate or in H2O (desalted using 0.5 mL Zeba 7-kDa desalting columns, 1500 g, 2 min).Samples were additionally analysed by SDS-PAGE.
Heavy Chain Sequence:
Preparation of Trastuzumab WT Fab fragment
Following a literature procedure. 3Trastuzumab was supplied as the biosimilar Herzuma in solution (21 mg/mL) from the Pharmacy Department at Guy's and St. Thomas' NHS Trust, London.Briefly, Trastuzumab IgG (15 mg, final concentration: 67 μM) was reacted with 1.7 % wt/wt of immobilised papain (250 μg/mL) for 22 h at 37 o C in a buffer containing 20 mM sodium phosphate monobasic, 10 mM disodium EDTA and 80 mM cysteine HCl (pH 7), cysteine HCl was added immediately before digestion.After digestion, Tris.HCl buffer was added (pH 7.5, 2 mL) and the mixture centrifuged (3200 g, 10 min).The supernatant containing the Fab and Fc fragments were collected.Purification was conducted using a 20 mL HIPrep Q HP anion exchange column on an AKTA Pure 25L purification system (binding buffer: 50 mM Tris-HCl pH 8 buffer, elution buffer: 1 M NaCl in 50 mM Tris HCl pH 8 buffer).Samples for ESI-TOF MS analysis were prepared at 10 μM in 0.1 M ammonium acetate or in H2O (desalted using 0.5 mL Zeba 7-kDa desalting columns, 1500 g, 2 min).Samples were additionally analysed by SDS-PAGE.
Peptide mapping analysis
Proteins were diluted to 10 μM with ammonium bicarbonate and digested with trypsin.The samples were analysed by nanoLCMSMS on a Sciex 5600 QTof mass spectrometer coupled with Sciex Eksigent 425 nanoLC.The LC was configured in trap elute format, with Waters acuity UPLC M class symmetry trap column 100 Å 5 μm 2 G 180 μm x 20 mm trap and Waters acquity UPLC Mclass HSS1 1.8 μm 75 μm x 150 mm column, both Waters.5μL of sample was injected onto the trap in 15 μL/min of loading buffer (0.05 % Trifluoroacetic acid in water), and run for 5 min.The trap was switched in line with the analytical column and the sample eluted at a gradient over 35 min (A = 100 % water with 0.1 % formic acid, B = 20 % water 80 % acetonitrile, 0.1 % formic acid, 2 % A to 1 min, linear to 40% A over 25 min, linear to 95 % A over 4 min, hold for 1 min, linear back to 2 % A, and re-equilibrate for 4 min).The flow from the column was sprayed directly into the nanospray orifice at a voltage of 2300 V positive ionisation.Mass spectrometry data was collected from 350 to 1250 m/z for the survey scan with the top 20 most intense peaks selected by Data Dependant Acquisition conditions for MSMS with collision induced dissociation (CID) fragmentation.Raw data was exported and extracted using ms convert (ProteoWizard).The data searched using Mascot search engine (MatrixScience) against an internally generated database of 6700 protein sequences.Settings were 20 ppm on the MS and 0.1Da on the MSMS data, with variable oxidation of methionine.For the identification of the Trastuzumab modification (C(6) H(9) N(5) O 167.1686 Da) was set as a variable modification on methionine residues.
Cell culture for HER2 in vitro binding studies
The human breast cancer cell lines HCC1954 (cultured in RPMI-1640) and MDA-MB-231 (cultured in DMEM low glucose (1 g/L)) were purchased from American Type Culture Collection.Growth media were supplemented with 10 % foetal bovine serum (FBS), 100 units/mL penicillin, 100 µg/mL streptomycin, and 2 mM ʟ-glutamine.Cell lines were harvested twice weekly using a formulation of 0.25 % trypsin/0.53mM EDTA in Phosphate Buffered Salt (PBS) Solution without calcium and magnesium, which was then neutralised with the appropriate medium containing FBS.Cells were maintained in a humidified chamber containing 5 % CO2 at 37 o C.
FigureFigure
Figure S7 ESI-deconvoluted mass spectrum of the reaction between trastuzumab N3-WT and DBCO-PEG4-DFO, to yield a mixture containing unmodified WT Fab and DFO-WT.
FigureFigureFigure
Figure S9 ESI-deconvoluted mass spectra of the reaction between trastuzumab WT Fab and 20 eq. of Ox-N3 1. Signals observed at 47566 m/z (labelled red) corresponds to unmodified WT Fab and 47732 m/z (labelled blue) corresponds to singly modified WT Fab conjugate and 47898 m/z (labelled green) corresponds to dual modified WT Fab conjugate.
Figure
Figure S12 SDS PAGE analysis (Left: bright view image, Right: autoradiography) of [ 68 Ga]Ga-DFO-M74 and [ 68 Ga]Ga-DFO-WT from crude labelling reactions (1 and 2) or after Zeba spin purification (3 and 4).Unchelated [ 68 Ga]Ga migrates to bottom of SDS-gel.The radioactivity signal from the radiolabelled conjugates were coincident with the stained protein bands corresponding to the M74 and WT Fab fragments. | 2,293.2 | 2023-09-26T00:00:00.000 | [
"Chemistry",
"Medicine"
] |
Quantum transport in graphene nanoribbon networks: complexity reduction by a network decimation algorithm
We study electronic quantum transport in graphene nanoribbon (GNR) networks on mesoscopic length scales. We focus on zigzag GNRs and investigate the conductance properties of statistical networks. To this end we use a density-functional-based tight-binding model to determine the electronic structure and quantum transport theory to calculate electronic transport properties. We then introduce a new efficient network decimation algorithm that reduces the complexity in generic three-diemnsional GNR networks. We compare our results to semi-classical calculations based on the nodal analysis approach and discuss the dependence of the conductance on network density and network size. We show that a nodal analysis model cannot reproduce the quantum transport results nor their dependence on model parameters well. Thus, solving the quantum network by our efficient approach is mandatory for accurate modelling the electron transport through GNR networks.
In this publication, we study GNR networks of different sizes and densities. We tackle the question how transferable quantum and (semi-)classical results are in the mesoscopic range where quantum effects may be relevant. For this, we perform quantum transport (QT) calculations as well as nodal analysis (NA) in comparison. This work is organized as follows: In section section 2 we first introduce the model system of the GNR network. Second, we give a brief overview about the general basic QT equations. Third, we present our new recursive network decimation scheme algorithm to efficiently calculate the key quantities. We then extend the linear complexity scaling of the standard recursive algorithm for linear chains to higher dimensional sparse networks. Finally, we present results of GNR networks for varying network density and size (network base area). In section 3 we first introduce the NA model adjusted for our GNR quantum network purpose. Then we present results of GNR networks with identical geometric parameters as for the QT calculations and compare the two approaches. We conclude that a single NA model cannot replicate all QT features, summarize our results and end with an outlook in section 4.
2 Quantum transport in nanoribbon networks
Model System
The model system we study are GNR networks, see figure 1, which are constructed as follows. A given number N of GNRs are randomly distributed within the network base area A by randomly selecting a point and a direction. Here, we focus on one type of nanoribbon: The networks consist of zigzag GNRs with a width of 3 unit cells (also called 6-zGNR), which equals 12 carbon atoms (corresponding to a width of 1.3 nm). We use this type of GNR because it is large enough to describe realistic devices yet sufficiently small to obtain fast results. Each GNR has a length of 20 unit cells, which equals 5 nm. They are passivated to avoid unphysical dangling bonds, i.e. a hydrogen passivation is The colour of the ribbons corresponds to their z-coordinate, ranging from red (lowest) to yellow (highest). Blue cells mark the semi-infinite electrodes where the alignment of the carbon atoms (black) and the attached hydrogen atoms is indicated.
used as this occurs during the fabrication process [26]. Thick strips in figure 1 represent a single GNR (color coded for its spatial position), each of its small sub-stripes marks one unit cell. Each new GNR is placed in the lowest possible layer. This means that if it would intersect with an GNR already placed, it is placed at the next higher layer with a vertical distance of 3.35 (red / orange / yellow in figure 1) as this corresponds to the distance between two graphene layers [27]. That means, the GNRs are assumed to be flat, and curvature effects are neglected. In reality, overhanging ends of stacked GNRs could bend. The resulting curvature would lead to a reduced in-plane transmission and thus a reduced conductance. This effect is in the range of up to 40% reduction, depending on the curvature [28]. Furthermore, additional connections between GNRs could occur, which would effectively lead to a larger network density ρ, thus counteracting the bending effect. Curvature effects are not included here, not least to keep the model simple.
At the edges of the network base area, protruding GNR unit cells are shifted to the opposite side of the network imposing periodic boundary conditions. For this system, the dimensionless network density is defined as with the area covered by a single nanoribbon Ω = 6.39 nm 2 (for 6-zGNRs) and network area A. For example, = 1.5 means that the GNRs could fill one and a half layers if aligned properly. Furthermore, one ribbon at the left and right network boundary is elongated infinitely to act as an electrode within the transport formalism (indicated in blue in figure 1).
Quantum transport theory
Electronic transport is described quantum mechanically by using quantum transport theory [29] in combination with an underlying electron structure theory. The conductance in the limit of a small bias can be computed from the transmission based on the Landauer-Büttiker formula [30] Here, G 0 = 2e 2 /h is the conductance quantum, T (E) the transmission function, and f (E) the Fermi distribution function. In order to calculate the transmission function, the system is divided into a finite channel (C) and two semi-periodic electrodes (L -left and R -right) as depicted in figure 2. Each part is described by a Hamiltonian matrix H L,C,R and coupling matrices τ LC,CR . The resulting channel Green's function is where S C is the channel overlap matrix, S LC,CR are the coupling matrices of the overlap, and Σ L = (τ CL − ES CL )G L (τ LC − ES LC ) and Σ R = (τ CR − ES CR )G R (τ RC − ES RC ) are the self energy corrections due to the coupling of C to the electrodes L and R. Finally, η is a small numerical value for improving convergence, which shifts the singularities at eigenenergies into the complex plane. The electrode surface Green's functions G L/R are calculated efficiently using the renormalization decimation algorithm (RDA), which is a fast recursive algorithm [31,32]. We use a value of η = 10 −5 for calculating G C and for G L,R via the RDA.
Finally, the transmission function is where describe the broadening of the original channel states due to the coupling to the electrodes.
The Hamiltonian and coupling matrices themselves are computed in advance by an electronic structure calculation. To treat the large network systems studied here, we use the so-called non-self-consistent DFTB model [33,34], which readily provides the distance-and orbital-dependent Hamiltonian and overlap matrix elements. For describing the organic molecules present here, the parameter set 3ob [35,36] is used. This set uses a sp 3 -basis appropriate for our purpose and has been verified against other organic molecules, especially sp 2 -hybridised ones, with a similar structure. The cutoff distance was chosen to be a CO = 3.7 Å. This value includes the 3rd nearest-neighbor interaction in plane but excludes the 4th one, implying rather small GNR unit cells and reducing the decimation calculation time to an optimum. In addition, it ensures the inclusion of 2nd nearest-neighbor interaction out of plane, i.e. between two GNRs in different layers.
Network Transport: Reducing the Complexity through the Network Decimation Scheme
Using the aforementioned methods, the conductance for an arbitrary system can in principle be obtained for a given, adequate parameter set. However, due to the matrix inversion in (3) the number of necessary operations scales cubic with the number of atoms in the system. This is alleviated by dividing the channel into smaller cells to calculate an effective channel Green's functionG C that is equivalent to G C in (3) but of lower dimension.
Proficient algorithms benefit from this subdivision and result in a linear scaling with the number of atoms. For example, the recursive Green's function formalism (RGF) [10,11] has been widely used for similar calculations of linear chains [16][17][18][19][20][21][22]. Two slightly different versions of the RGF, the renormalization decimation scheme (RDS) and forward iteration scheme (FIS) are utilized [37] and adopted to the networks studied here, as shown in figure 3.
These methods are combined in the network decimation scheme introduced here to reduce the complexity of the network incrementally by virtually decreasing the number of nodes that need to be considered. The starting point is a system where each GNR unit cell, cf. figure 1, corresponds to one node. The RDS is given by the equations The indices i, j, k refer to the respective nodes. τ i,j is the coupling matrix from node i to node j. The presence of a node causes a shift of all the neighbouring energy levels. Using RDS, this shift is applied to the Hamiltonian and coupling matrices of the surrounding nodes. Thus, after applying (5) node i is effectively eliminated as all its information is now stored in its neighbours and the connections between them. The FIS takes advantage of the fact that the overall Hamiltonian is sparse and that the electrodes only couple to a small part of the system. It is defined by which represents the influence of the electrodes and the neighbourhoods of the cells propagating through a linear chain of cells. In the endG C is obtained, which can be understood as an effective Green's function of the system.
As a network of cells as shown in figure 1 is far from constituting a linear chain, (5) and (6) are combined to obtaiñ G C . In order to minimise the number of computer operations, they are applied as follows: We start the decimation scheme at the outermost edges, where the nodes have only one neighbour (see circled cells in figure 3a). These cells are successively decimated using RDS, until only clusters and connections between them remain (see figure 3b). Then, connections between clusters are treated, where nodes only have two edges each. Subsequently, the RDS is applied to the clusters of many interconnected cells, where the nodes with the fewest edges are decimated one at a time (see figure 3c). In the end, only the cells connected to the electrodes remain present (see figure 3d), as they can't be decimated using RDS. In this final step, FIS is used and G C is obtained. We implemented this QT algorithm in python using numpy and scipy. For further details we would like to refer the interested reader to [38].
This hierarchic decimation procedure works especially well for weakly connected networks, i.e. networks with few overlapping areas compared to the total network. The standard recursive procedure, which is often used for linear chains, would provide a subdivision in transport direction only. For linear chains this results in an O(N ) scaling with the number of atoms N . For 2D and 3D networks, it yields a O(N 2 ) and O(N 7/3 ), respectively, scaling, because only one dimension with N 1/d atoms scales linear and the other dimensions with N (d−1)/d atoms scale cubical due to the matrix inversion. The decimation scheme presented here treats this more efficiently as all directions are subject to decimation. In the limit of weakly connected networks, this implies a vanishing fraction of overlapping areas (node clusters in figure 3) compared to the original network, and the algorithm scales like O(N ) independent of the network dimension. The complexity scaling is summarised in table 1.
However, such a O(N ) scaling is not achievable for dense, realistic systems in general. Clusters with a high degree of interconnections scale with O(N 2 ) and contain typically between 50 to 90% of node sites. Nevertheless, in comparison to the standard procedure with a N 2 scaling for 2D systems, a scaling between N and N 2 can be achieved using the novel network decimation scheme, which allows one to treat larger systems by quantum transport methods, as the one we discuss here. Of course, in the limit of denser networks, the linear scaling breaks down. In the extreme limit where the whole network is one cluster, a quadratic scaling O(N 2 ) is obtained, which is the same as for 2D systems treated with the standard algorithm of linear chains. For 3D systems, a slightly better scaling may be achievable, but this is out of the scope of our work as we focused on quasi 2D networks with few layers.
Results
For each set of geometric parameters (GNR type, area, number of GNRs) 500 random and percolating networks were generated, for which the QT calculations are performed and statistically analysed. Percolating or non-percolating networks could arise due to the finite interaction distance given by the cutoff distance a CO .
We first state that the usual percolation behaviour is observed: Figure 4 shows the percolation probability as a function of the network density for different network sizes (base areas A denoted by colour). There is a minimal network density below which the percolation probability is exactly zero because the cumulative length of the GNRs is smaller than the side length of the network base area. At the percolation threshold network density the percolation probability increases markedly and tends towards one for large densities. The overall trend can be approximately described by a logistic function fit (red line in figure 4) [39]. A finite-size effect can be seen as larger network base areas result in steeper curves, that is, sharper transitions. However, percolation aspects are not the focus of this work, and we refer the reader to [40,41], while we focus on percolating network ensembles in the following. From an engineering point of view, non-percolating networks would be "broken devices" and not relevant for applications. Figure 5 shows the average conductance of percolating networks as a function of the network density for various network sizes (base area A denoted by colour as before). For each data set (i.e., for a fixed base area A), two different regimes can cleraly be distinguished: (A) The "linear chain regime" for low densities < 1. Here, the networks are weakly connected and possess few, often only one transport path. They thus behave like an effectively linear chain with few perturbations. In the limit of a network consisting only of one GNR covering the whole system (not included in figure 5), this behaviour tends towards the ballistic transport regime with constant conductance G = 2G 0 . In a macroscopic picture for larger network sizes, this can be interpreted as a series circuit, where the conductance decreases roughly like G ∼ 1/N with N being the number of tunnelling regions between any two GNRs contribution to the chain. N increases with increasing effective chain length eff , which in turn increases with density as long as only one chain exists: N ∼ eff ∼ . Thus, the conductance decreases with increasing density G ∼ 1/ , as can be seen in figure 5, especially for small base areas A.
(B) The "interference dominated diffusive regime" for high densities > 1. Here, the networks are highly connected, i.e. a significant amount of the networks contributes to percolation and many transport paths are present. An increase of the network density results in further increasing the number of connections and transport paths, thereby increasing the conductance. In a macroscopic picture this can be interpreted as a parallel circuit, where the conductance increases roughly like G ∼ M with M being the number of parallel GNR paths. M increases with increasing number of layers or system height H, which in turn increases with increasing network density , leading to M ∼ H ∼ . Thus, the conductance increases with increasing density G ∼ , as can be seen in figure 5 for all base areas A. A linear regression for ≥ 2.5, i.e. the range where virtually all networks are percolating, has been done (solid lines in figure 5), yielding a slope ∂G ∂ = 2.55 · 10 −3 G 0 . Both trends (A) and (B) can be seen best for the smaller network base areas, e.g. the system with a base area of 14 × 6 nm 2 . In between both regimes (A) and (B), a transition region must be present and is realised as a plateau-like minimum around ≈ 1.
In addition, the conductance for a fixed density depends on the network size (base area A indicated by colours as in figure 5). To this end we increased the area A = W L in width W and length L, while fixing the aspect ratio W/L = const. In the "linear chain regime" (A) increasing A leads also to an increased effective chain length eff and thus a decreased conductance, as can be seen in figure 5. In the "interference dominated diffusion regime" (B) increasing A leads on the one hand to an increased number of parallel paths due to the increased width W , which will increase the conductance G ∼ W . On the other hand the increased length requires more tunnelling events between two GNRs. But when the system gets very large the scattering events in the diffusive regime of 1D systems lead to enhanced interference and strong localisation effects, which will decrease the conductance exponentially with length L. In the mesoscopic range of the networks discussed here, the strong localisation regime may not be fully reached yet (as shown for similar purely 1D systems [20][21][22]). In summary, we get G ∼ H W/e L , which explains the increasing conductance with increasing ∼ H and the decreasing conductance with equally increasing W and L (i.e. increasing A).
Comparison with semi-classical transport and nodal analysis
In this chapter we will address the question to what extent our results for GNR networks could be reproduced based on a classical transport approach. Quantum-classical correspondence has been established as a useful tool in mesoscopic physics. The classical approach neglects all interference effects, and comparison with the quantum transport results allows one thus to classify their importance and a deeper characterisation of the system properties.
Model System
In order to approach the GNR networks in section 2 using classical transport based on nodal analysis (NA), the networks are represented by layers of two-dimensional (2D) polygons (see figure 6). Each polygon has a width and length corresponding to their atomic (quantum) counterpart, which is roughly 1.3 × 5 nm 2 in the 6-zGNR case. These stripes are now randomly placed on the base area A = LW , starting in the lowest layer. Analogous to section 2.1, overhanging stripes are shifted to the opposing side of the base area.
Two stripes are deemed interacting, if they are in adjacent layers and overlap in the x, y plane. The centre of the overlapping region determines the x, y coordinates of the two resulting nodes, one on each ribbon taking part in the interaction. These coordinates are then used to order the emerging nodes of a ribbon, so that each node is connected to its nearest neighbours. This way the network consisting of ribbons is represented as nodes and edges, which are then investigated using classical NA. Using Kirchhoff's current law, an arbitrary circuit consisting of ohmic resistances can be expressed aŝ where the conductance matrixĜ represents the connections between the nodes, the vector ϕ indicates the potential at each node and I contains the net currents in each node. The currents are all zero due to Kirchhoff's first law, with the exception of two contacts where the current is inserted into or extracted from the system (see blue markers in figure 6). G must be chosen to best reflect the results obtained for the quantum system in section 2. Thus, the following definition is used: nodes not connected .
The choice for the j = k and the unconnected node case is motivated by the NA definition. Additionally, one needs to distinguish between intra-ribbon connections and inter-ribbon connections. The conductance between two nodes on the same ribbon is assumed to be G intra = 2G 0 , which corresponds to the conductance of an ideal GNR with ballistic transport. For two nodes on separate ribbonŝ is chosen, effectively describing electron tunnelling. This conductance is assumed proportional to the overlapping area A ij = A ji of the ribbons between two nodes i and j (see figure 6). The factor g t is to be optimized as to yield the best agreement with the quantum transport results obtained in the previous section 2. Figure 7 depicts the percolation probability of the NA approach for different network base areas. The qualitative behaviour is the same as described in section 2.4. Quantitatively there are some small differences in the threshold density and slope in comparison to the QT results that can be related to the differences between the models used. Figure 8 shows the conductance as a function of the network density for different network base areas (denoted by colour), calculated with the NA approach. Analogous to the QT results in section 2.4, two regimes can be found for each data set (fixed base area).
Results
(A) The "linear chain regime" for low densities < 1. The general qualitative trends are similar to the ones of the QT results in figure 5: Weakly connected networks in the limit of only one chain yield a conductance G ∼ 1/L ∼ 1/ that decreases with increasing length and density, which in a macroscopic interpretation corresponds to a series circuit. The decrease with density and base area A can be seen in the diagram. The effect is less pronounced than for the quantum results. Nevertheless, the limit of only one lead spanning the whole system G = 2G 0 is the same (not shown).
(B) The "ohmic regime" for high densities > 1. Also here, the trend for fixed base area is similar to the quantum transport calculations. Highly connected networks with many transport paths yield an increasing conductance G ∼ H ∼ with increasing height and density, as can be seen in the diagram. In a macroscopic interpretation this corresponds to a parallel circuit. In contrast to the quantum treatment before, however, the base area A dependence is now different: Although the conductance increases with width W as before, the length dependence differs because interference effects and thus an exponential conductance reduction with increasing length cannot be captured in NA calculations. Thus, a (only) roughly inverse length dependence G ∼ 1/L is found. In total, we find the Ohmic behaviour G ∼ HW/L. As we kept the aspect ratio W/L constant for all A, the conductance G depends only on height H and not on area A. This can nicely be seen in figure 8 as all curves roughly fall together. The solid line represents a linear regression in this ohmic regime for ≥ 2.5 where all networks are percolating. The corresponding derivative yields the slope for the specific parameter g t = 0.1 G 0 /nm 2 . As the derivative depends on g t , it could be adjusted such that the NA value equals the QT value, however, only for this specific value g t . To this end one could perform multiple NA calculations with many different g t and choosing the best fitting g t by minimising the difference between the QT and NA conductances. Nevertheless, this g t will be specific for the set of geometric parameters chosen. A generic value of g t , that brings the NA calculations in agreement with QT calculations for arbitrary geometries, cannot be found. A comparison of the density dependence of the conductance G( ) in the quantum transport (QT) and nodal analysis (NA) model reveals a semi-quantitative agreement in the behaviour for low (linear chain regime). However, for high characteristic deviations are visible, the most striking being that the conductance G does not depend on the network size (or base area A) in the NA model due to the ohmic behavior. In contrast, quantum interference effects depending on the network size A change this behavior in the QT model and result in an A dependence that cannot be reproduced with the NA model. However, the NA parameter g t can be tuned such that a best fit to the QT case is obtained for a (single) given network of base area A. Finally, we briefly discuss the distribution of the conductance within a statistical analysis. This is shown in figure 9 for QT (blue) and NA (orange) within a) the low and b) the high density regime. Comparing both regimes, the distribution is much broader for low than for high . This is due to the fact that in the linear chain regime (low ) the conductance depends sensitively on the specific tunnelling geometry. For the large amount of geometries where the GNRs are barely touching each other the conductance is fully determined by the corresponding tunnelling distance. For the NA there is a hard cutoff and thus no such long tail is present.
In the high density regime, many transport paths exist within the network. Consequently the electrons can choose between different pathways to avoid tunnelling-restricted geometries, which leads to a suppression of the low-conductance tail of the distribution in figure 9b) in comparison to figure 9a). However, the QT calculations allow for strong localization effects as discussed before. Thus, specific geometries show an exponentially suppressed conductance even in the high situation. This explains the low conductance contributions for the QT model (blue) in figure 9b).
Summary and Outlook
The network decimation scheme introduced in this paper represents an efficient decimation scheme for quantum transport, which enables the treatment of comparably large quantum networks, that otherwise would not be accessible using standard recursive procedures or even direct inversion. The conductances of GNR networks with varying density were calculated using this novel algorithm and compared with results from nodal analysis as a classical transport method. Although these two approaches may be able to yield similar results for specific geometries in the largedensity regime for ≥ 2.5 by parameter tuning, single parameterization of a nodal analysis model cannot generically reproduce the QT results with all features for different geometric parameters. For low densities quantum effects such as tunnelling and interference dominate and yield an increased conductance, which is only partly accessible by semi-classical nodal analysis. The dependence on the network size (base area A) for high observed for QT cannot be reproduced by NA. In turn, the numerically and conceptually cheap nodal analysis can be expected to yield qualitatively reliable results for specific GNR networks with given, fixed geometric parameters and for ≥ 2.5. In summary, the efficient network decimation algorithm introduced here is both necessary to accurately calculate the conductivity of GNR-networks and allows one to extent full QT calculation to large systems that were previously only accessable by less correct semiclassical methods.
The work presented here will be the basis for manifold future investigations ranging from accessing extended parameter regimes (aspect ratio of the network, GNR width, or GNR type) to other material systems (carbon nanotubes or other types of nanowires), and is not restricted to carbon. The extension to other networks, for example the simulation of porous materials, is possible. Eventually, special geometries can be investigated, such as periodically repeating structures, bent/curved structures or multi-terminal devices. | 6,350.4 | 2022-12-14T00:00:00.000 | [
"Physics"
] |
Resonance production at SPS energies : CERES and NA 49
We present results on resonance production by the NA49 and CERES collaborations. The measurement of the differential yields and spectral distributions of the K∗(892), Δ(1232), ρ, φ and Λ(1520) resonances from their leptonic and hadronic decay channels at different C.M.S. energies and for various colliding systems allows us to study in-medium modifications of the resonance mass, width and yield and constrains the properties of the hadronic phase. For K∗(892)0, a strong system size dependence of the yield relative to kaon production is found. The production of the Δ(1232) resonance is consistent with thermal model expectations. φ meson spectra and yields reconstructed in the leptonic and hadronic decay channels are in agreement. Low-mass dilepton spectra indicate significant regeneration of the ρ meson and a strong modification of the ρ spectral function.
Introduction
Lattice QCD calculations predict a transition from confined hadronic matter to a chirally symmetric state of deconfined quarks and gluons at an energy density around 0.7 GeV/fm 3 [1] and a transition temperature of about 150-170 MeV [2].Such conditions are believed to be reached in ultra-relativistic nucleus-nucleus collisions, where a transient state of high temperature and extreme energy density is created [3].The system expands and cools and evolves into a hadron gas which finally decouples into the observed hadrons.Resonances created prior to chemical freeze-out (no more inelastic collisions) probe the evolution of the fireball to break-up at kinetic freeze-out (no more elastic collisions).They may interact with the medium in which they are produced and experience modifications of their mass, decay width and branching ratio relative to the 'vacuum' values measured in e + e − collisions [4][5][6].Short-lived hadronic resonances with a lifetime similar to the fireball lifetime or smaller, are expected to decay and regenerate inside the medium: several generations probe different conditions of medium density and temperature.Furthermore, hadronic charged decay products may rescatter in the hadronic fireball stage [7], and thus their momenta do not allow to reconstruct this state in an invariant mass analysis.The interaction depends on the cross-section of the decay product with each particle in the fireball, the speed of each decay product relative to a typical fireball particle and, in particular, nuclear density.An analysis of the apparent yield relative to the expectation at chemical freeze-out constrains the properties of the hadronic phase.
There are various experimental parameters that can be varied.Via the choice of the resonance under study, the particle mass, quantum numbers (notably the strangeness content) and lifetime are determined.Some resonances can be reconstructed in hadronic and leptonic decay channels.Since leptons leave the fireball without further interactions and are not subject to rescattering, a comparison of resonance yields in the leptonic and hadronic decay channel can be used to study the rescattering effects.For the same reason, resonances reconstructed via leptonic decays probe the entire fireball history, whereas in the hadronic decay channel one might be more sensitive to resonances formed at a late stage of the fireball evolution with reduced density and rescattering probability (surface bias).Hence, in the leptonic decay channel a higher sensitivity to modifications of the mass and width in the early stage of the fireball is expected.Variation of the collision energy, and the size of the colliding nuclei allows to study different fireball sizes 1 and compare different freeze-out densities and temperatures.
In heavy-ion collisions enhanced strangeness production is found relative to p-p collisions.The enhancement was predicted to arise from gluon fragmentation into quark-antiquark pairs which is believed to have significantly lower threshold than strange-antistrange hadron pair production channels [10].Statistical hadron gas models have been successfully employed to describe the measured particle yields at various collision energies [11][12][13].In this hadron gas picture, enhanced production of strange particles in collisions of large nuclei arises as a consequence of the increased reaction volume, relaxing the constraints of local strangeness conservation.The comparison of yields of strangeness carrying resonances in nucleus-nucleus and p-p collisions needs to gauge a possible modification of the yield after chemical freeze-out against the expected strangeness enhancement.
In these proceedings, we give an overview of resonance production measured by the NA49 and the CERES experiments at the CERN SPS.In Sec. 2, we give a short description of the NA49 and CERES experiments and explain common features of the experimental signal extraction and correction procedures.In Sec. 3, results on K * (892) 0 and K * (892) 0 production measured via the hadronic decay channel in central Pb-Pb, Si-Si, C-C and inelastic p-p at 158 AGeV ( √ s NN = 17 GeV) are summarized.
In Sec. 4 we report measurements of Δ ++ production in in Pb-Au collisions at 158 AGeV by the CERES experiment.CERES results on dilepton production in the invariant mass region of the ρ and below for Pb-Au collisions at 158 AGeV are presented in Sec. 5.The φ meson was reconstructed by CERES simultaneously in the hadronic and leptonic decay channel in Pb-Au collisions at 158 AGeV, and the NA49 collaboration has studied the beam energy dependence of φ production at SPS.These results are discussed in Sec. 6.We conclude with a summary and discussion in Sec. 7.
Experimental setup. Resonance reconstruction.
The NA49 experimental apparatus [14] at CERN is based on a fixed-target hadron spectrometer using heavy-ion beams from SPS accelerator.It consists of four large-volume time projection chambers (TPCs) for charged-particle tracking, two of which (VTPC) operate inside the magnetic field of two superconducting dipole magnets providing an excellent momentum measurement.Two larger main time projection chambers (MTPCs) are placed downstream, outside of the field.Charged-particle tracks are reconstructed from the charge deposited along the particle trajectories in the TPCs using a global tracking scheme which combines track segments from the same physical particle detected in different TPCs.The typical momentum resolution in Pb-Pb collisions is σ(p)/p 2 = (0.3 − 7) × 10 −4 (GeV/c) −1 depending on track length.The interaction vertex is determined using the reconstructed tracks.In p-p collisions [15], additional information on the trajectory of the projectile from proportional chambers (BPDs) in the beam line was used for the vertex fit.Particle identification is based on measurements of the specific energy loss in the detector gas (dE/dx) of the TPCs The particle identification capabilities close to mid-rapidity are enhanced by a time-of-flight (TOF) scintillator system behind the MTPCs.
The CERES experiment [16][17][18][19] was conceived as a di-electron spectrometer.The Pb ions from the SPS impinge on a segmented Au target.The interaction vertex is reconstructed using charged particle track segments from two silicon drift detectors (SDD).Electrons are identified by their ring signature in two ring imaging Cherenkov (RICH) detectors, which are blind to hadrons below p ∼4.5 GeV/c.The original experimental setup was upgraded by a downstream radial drift Time Projection Chamber (TPC) [19] in a magnetic field.The TPC allows to reconstruct hadrons over the full momentum range via particle tracking and improves the momentum resolution of the spectrometer (allowing e.g. to reconstruct the φ meson with a mass resolution of Δm/m=3.8%).The TPC also provides additional electron identification via measurement of the specific energy loss dE/dx.The spectrometer provides full azimuthal acceptance in the pseudorapidity range 2.1 < η < 2.65.
Experimentally, in all cases considered in these proceedings, resonances are reconstructed via their decay into two charged particles.Since the decay daughters are not distinguishable from the other tracks in the heavy-ion environment, the signal is extracted on a statistical basis.All particle pairs of opposite charge are considered.In the invariant mass region of the signal, physically uncorrelated pairs contribute to the combinatorial background.The shape and level of this background can be determined experimentally constructing the invariant mass distribution from a priori uncorrelated pairs: combining like-sign pairs with equal charge2 , or combining tracks from different events.In the latter case, the mixed event background has to be normalized correctly to reproduce the level of background under the signal.After subtraction of the uncorrelated background contribution, typically a small residual background due to remaining correlations is observed.The signal is usually extracted from a combined fit to the signal plus background invariant mass distribution 3 .
Unless mentioned otherwise, resonance yields and spectra are corrected for the branching ratio of the decay channel considered, detection efficiency, geometrical detector acceptance and in-flight decays, as well as vertex reconstruction efficiency in the case of p-p collisions.
Yields measured in nucleus-nucleus collisions can be compared to a reference (measured yields from elementary collisions or theoretical expectations) to judge potential effects of resonance regeneration or daughter rescattering.Here one remark is at order: in many cases, the resonance yield can not be measured down to zero momentum due to experimental constraints (geometrical acceptance, detection efficiency).In this case, the total yields are obtained extrapolating thermal fits to the data.Therefore, if a theory comparison on the level of differential spectra is not possible, conclusions based on total yields have to carefully consider the experimentally accessible phase space.On the part of the experiments, measured spectra are often compared to blast-wave fits [21], based on a simplified parametric hydrodynamical description of the particle emitting source.
Within the experimental uncertainties, no mass shift or modification of the width of the meson is observed in central Pb-Pb collisions.The transverse mass spectra yield inverse slope parameters (T=339 ±9 for K * (892) 0 and T=329 ±12 for K * (892) 0 ) much larger than for kaons, but closer to the higher mass φ meson.Small deviations of the transverse mass differential yield from a purely exponential shape might be interpreted as signature of a momentum dependent attenuation of the K * in the fireball, but are also reproduced by a blast wave fit.
In the left panel of Fig. 1 [22], we present the yield normalized to the number of wounded nucleons (participants, but excluding nucleons participating only in secondary interactions) in the four collision systems under study.This quantity seems to increase from p+p to C-C and Si-Si collisions and then decrease to central Pb-Pb, indicating possibly an interplay between strangeness enhancement in nucleus-nucleus collisions and the interaction of the K * (892) and its decay products in the produced fireball.In the ratios K * (892) / K + and K * (892) / K − the effect of strangeness enhancement should approximately cancel, since kaons and K * (892) contain the same valence quarks.The strong system size dependence observed in these quantities, shown in the right hand panel of Fig. 1, indicates a sizeable effect of interactions in the fireball with destruction dominating regeneration.
Δ(1232) production measured by the CERES experiment
We present preliminary results on Δ(1232) ++ production [23] in Pb-Au collisions at 158 AGeV from the CERES collaboration.The resonance is reconstructed in the pπ + channel.The proton and pion trajectories are reconstructed in the TPC and SDD detectors within 2.1< η <2.7.Protons are identified using the TPC dE/dx information.For background rejection, a minimum transverse momentum of both daughters (p p t >0.1 GeV/c, p π t >0.15 GeV/c) are required and the pair opening angle is restricted to (0.05< Θ <0.46).The uncorrelated background contribution is estimated from mixed events and subtracted.An example for the invariant mass distribution for one bin in p t is shown in Fig. 2 (left panel).Experimental effects introduce a slight bias on the values of the reconstructed mass (∼50 MeV) and width (∼20 MeV), which can be reproduced by detector simulations using the nominal values as an input.Within experimental uncertainties, resonance mass and width are consistent with the PDG [24] values.
The raw yields are corrected for efficiency and acceptance.A dedicated investigation of the systematic uncertainties for this specific analysis was not carried out.The acceptance and efficiency correction factors and the signal extraction procedure are similar to the case of the φ meson reconstructed in the K + K − decay mode [20] described in Sec. 6. Therefore we estimate the systematic error to be of the order of 12%.
The transverse momentum spectrum in the rapidity interval 2.0< y Δ <2.4 is shown in the right panel in Fig. 2. The distribution is well described by a thermal fit with a slope of 318±36 MeV.We find that the spectrum can also be well reproduced by a blast-wave fit.The total yield of Δ(1232) ++ is dN/dy =5.4±0.91(stat)±0.65(syst) for the 7% most central events.Using a negative hadron multiplicity N h − (2.0<y π <2.4) of 66.4±0.8 as in [20], the measured ratio Δ ++ /h − = 34.5±5.8(stat)±4.1(syst) •10 −3 is consistent with the thermal model predictions Δ ++ /h − = 47.6•10−3 [12] within statistical and systematic uncertainties.In the momentum range covered by the measurement, no indication for resonance absorption or regeneration or rescattering is found.
Low-mass electron-positron pairs from CERES
The CERES collaboration has measured e + e − pair production at SPS for various collision systems and beam energies [25][26][27][28].In these proceedings, we focus on results for central (σ/σ geo =7%) Pb-Au collisions at 158 AGeV [29].Electron legs are reconstructed in the pseudorapidity range 2.1< η <2.65.To suppress background from conversion electrons, a minimum single track transverse momentum p t > 0.2 GeV/c and a minimum pair opening angle of 35 mrad are required.The e + e − signal pair yield is corrected for electron reconstruction efficiency and normalized to the average charged particle multiplicity N ch .
In Fig. 3(a), the e + e − invariant mass distribution is compared with the 'hadronic cocktail', which comprises the yield from hadronic decays in A-A collisions after chemical freeze-out [28].In the mass region 0.2< m ee <1.1 GeV/c 2 , the data are enhanced over the cocktail by a factor 2.45±0.21(stat)±0.35(syst) ± 0.58(decays).The improved mass resolution of the spectrometer after the upgrade with a radial TPC [19] provides access to the resonance structure in the ρ/ω and φ region (see Sec. 6).
In Fig. 3(b), the data are compared with model calculations incorporating enhanced dilepton production via thermal pion annihilation and a realistic space-time evolution [30].The calculated dilepton yield was filtered by the CERES acceptance and folded with the experimental resolution.Temperature and baryon density dependent modifications of the ρ-spectral function have been taken into account: the dropping mass scenario which assumes a shift of the in-medium ρ mass [4,31], and the broadening scenario where the ρ spectral function is smeared due to coupling to the hadronic medium [6,32] (see also Fig. 4).The calculations for both spectral functions describe the enhancement reasonably well for masses below 0.7 GeV/c 2 .In the resonance region, however, there is a notable difference between the calculations.In particular, in the mass region between the ω and the φ, the data clearly favor the broadening scenario over the dropping mass scenario.
In order to exhibit the shape of the in-medium contribution, we subtract the hadronic cocktail (excluding the ρ meson) from the data (Fig. 4).The vacuum ρ-decay contribution to the data ('cocktail ρ') is completely negligible compared to the measurements, indicating significant ρ resonance regeneration in the fireball.The excess data exhibit a very broad structure reaching low masses and exceed the vacuum ρ contribution by a factor 10.6±1.3.The data are compared to model calculations.Yield and spectral shape are well described by the broadening scenario but are not consistent with a dropping ρ mass: while the dropping mass calculation yields a rather narrow distribution, peaked at around 0.5 GeV/c 2 , the measured excess is spread over a significantly wider mass range.
The measurement of the ρ spectral function in the di-electron channel provides access to very low invariant masses.In this mass regime, a particular mechanism contributes strongly to the di-electron yield: the strong coupling of the ρ to baryons in the hot and dense medium via "Rhosobar" excitations (ρ →BN −1 ).The importance of this mechanism is demonstrated in Fig. 4(b), where the data are The same data compared to calculations including a dropping ρ mass (dashed) and a broadened ρ-spectra function (long-dashed). Figure from [29].compared to in-medium hadronic spectral function calculations with and without baryon-induced interactions.The calculation omitting baryon effects falls short of the data for masses below 0.5 GeV/c 2 , while the inclusion of baryon interactions describes the low-mass yield very well, providing strong evidence that the observed modifications of the ρ-spectral function are foremost due to interactions with the dense baryonic medium.
φ meson production measured by CERES and NA49
The electron and hadron reconstruction capabilities of the CERES experiment allow for simultaneous reconstruction of the leptonic (e + e − ) and charged kaon (K + K − ) decay modes of the φ meson in the 7% most central Pb-Au collisions [20].To study the φ meson in the K + K − decay mode, charged tracks are reconstructed in the TPC and combined to pairs.For the di-electron decay channel, electrons are identified using the RICH detectors and the dE/dx signal in the TPC.In the dilepton channel, the ρ meson could extend into the invariant mass range of the φ if its spectral function is modified in the medium.This physical background, along with a contribution due to QGP radiation, is estimated from theoretical models [6,33,34] and subtracted.
In Fig. 5, left panel, the φ meson yield for both decay modes as a function of transverse momentum are presented.The dilepton data are scaled to the acceptance of the hadronic channel.The φ meson yields and inverse slope parameters (T=273±9(stat)±10(syst) MeV) obtained in both decay modes agree within the experimental uncertainties.It should be noted that because of the long lifetime of the φ meson (τ = 47 fm) only a fraction decays inside the fireball.Thus, only a fraction of the φ mesons can be expected to be influenced by the surrounding medium.
In the NA49 experiment, the φ meson was reconstructed in the K + K − channel in central Pb-Pb collisions at 20, 30, 40, 80 and 158 AGeV/c, using charged Kaons identified in the MTPC [35], for rapidity intervals varying from 0<y<1.0 (158 AGeV/c) to 0<y<1.8 (20 AGeV/c).At all energies, the width and mass of the φ meson is consistent with the free-particles values.No indication for a mass shift or broadening is observed.The transverse momentum spectra obtained for the five beam energies are shown in Fig. 5, right panel.The spectrum obtained for 158 AGeV is compared to the results from the CERES experiment, after scaling the CERES data to account for small differences in acceptance and centrality.The results agree within errors, as do the total yields.In all cases, the data are well described by thermal fits, showing no sign of decay daughter rescattering.The measured yields for all energies can be described by a statistical hadron gas model with strangeness undersaturation, indicating consistently that the abundances of φ mesons are unchanged with respect to the values at chemical freeze-out.[20].Right panel: φ transverse mass spectra from NA49 for different collision energies [35].
Summary and discussion
Production of the K * , Δ, ρ, and φ resonances in nucleus-nucleus collisions at SPS energies was studied by the NA49 and CERES collaborations.The results from NA49 for K*(892), φ, and preliminary results for Λ(1520) [36] are summarized in Fig. 6 from [22].The measured yields relative to the expectation from a Hadron Gas Model [13] are plotted versus the respective lifetimes (3.91, 12.7 and 46.5 fm/c).The suppression with respect to the model predictions seems to get stronger with decreasing lifetime of the resonance.This suggests that a large part of the reduction of the K * yield may be caused by rescattering of its decay daughters during the hadronic stage of the fireball and implies that this stage lasts for a time at least comparable to the lifetime of the resonance.From twopion interferometry, the total lifetime of the system to kinetic freeze-out is estimated to about 6 fm/c [9] to 8 fm/c [8] with a duration of emission of 2-3 fm/c.Despite its large width / short lifetime (1.7 fm/c), no significant modification of the Δ resonance yield was observed by the CERES collaboration in the measured momentum range.It is interesting to note that similar observations were made at in Au-Au collisions at √ s=200 GeV [37] (despite the different fireball composition at both energies [38]).The authors interpret a small rise of the ratio of Δ ++ /p yields for more central collisions as a hint for resonance regeneration and the dependence of regeneration and rescattering.
Fig. 3 .
Fig. 3. (Color Online) (a) Invariant e + e − mass spectrum compared to the expectation from hadronic decays.(b)The same data compared to calculations including a dropping ρ mass (dashed) and a broadened ρ-spectra function (long-dashed). Figure from[29].
Fig. 4 .
Fig. 4. (Color Online)e + e − pair yield after subtraction of the hadronic cocktail.The broadening scenario is compared to a calculation assuming a density dependent dropping ρ mass (a) and to a broadening scenario excluding baryon effects (b). Figure from[29] Fig. 4. (Color Online)e + e − pair yield after subtraction of the hadronic cocktail.The broadening scenario is compared to a calculation assuming a density dependent dropping ρ mass (a) and to a broadening scenario excluding baryon effects (b). Figure from[29]
Fig. 5 .
Fig. 5. (Color Online) φ meson reconstruction in CERES and NA49.Left panel: transverse momentum spectrum of φ mesons reconstructed by the CERES experiment in the e + e − decay mode (circles) and in the K + K − decay channels (triangles)[20].Right panel: φ transverse mass spectra from NA49 for different collision energies[35].
Fig. 6 .
Fig.6.(Color Online) Ratio of measured yields in central Pb-Pb collisions to the statistical hadron gas model prediction for K * (892), φ meson, and preliminary measurements for Λ(1520) versus the lifetime τ of the resonance state.Figure from[22]. | 4,993.8 | 2012-11-01T00:00:00.000 | [
"Physics"
] |
Tracing armed conflicts with diachronic word embedding models
Recent studies have shown that word embedding models can be used to trace time-related (diachronic) semantic shifts in particular words. In this paper, we evaluate some of these approaches on the new task of predicting the dynamics of global armed conflicts on a year-to-year basis, using a dataset from the conflict research field as the gold standard and the Gigaword news corpus as the training data. The results show that much work still remains in extracting ‘cultural’ semantic shifts from diachronic word embedding models. At the same time, we present a new task complete with an evaluation set and introduce the ‘anchor words’ method which outperforms previous approaches on this set.
Introduction
Several recent studies have investigated how distributional word embeddings can be used for modeling language change, and particularly lexical semantic shifts. This includes tracing perspective change through time, usually for periods equal to centuries or decades; see (Hamilton et al., 2016b) among others. One of the main problems in these studies is the lack of proper ground truth resources describing the degree and direction of semantic change for particular words. Unfortunately, there is no such manually compiled compendium of all the semantic shifts that English words underwent in the last two centuries. The problem is even more severe for studies using more fine-grained time units spanning days or years, rather than decades, like in (Kulkarni et al., 2015) or (Kutuzov and Kuzmenko, 2016): When trying to uncover subtle changes of perspective (for example, 'Trump' moving towards being associated with 'president' rather than 'millionaire'), it is difficult to find gold standard annotations for rigorous evaluation of the proposed methods.
In this paper, we make use of a social science dataset which to the best of our knowledge has not been introduced in the NLP field before. This dataset is described in section 3 and comprises a manually annotated history of armed conflicts starting from 1946 up to now. Together with word embedding models trained on temporal slices of the Gigaword news corpus (Parker et al., 2011), this allows us to properly evaluate several methods for tracing semantic shifts. We monitor changes in the local semantic neighborhoods of country names, applying it to the downstream task of predicting changes in the state of conflict for 52 countries at the year-level. This is essentially a classification task with 3 classes: 1. Nothing has changed in the country conflict state year-to-year (class 'stable'); 2. Armed conflicts have escalated in the country year-to-year (class 'war'); 3. Armed conflicts have calmed down in the country year-to-year (class 'peace').
The results of this evaluation provide some insights into the performance of current semantic shift detection techniques and describe the best combinations of hyperparameters. We also propose the 'anchor words' method and show that it outperforms previous approaches when applied to this classification task.
Related work
Significant results have already been achieved in employing word embeddings to study diachronic language change. Hamilton et al. (2016a) proposed an important distinction between cultural shifts and linguistic drifts. They showed that global embedding-based measures, like comparing the similarities of words to all other words in the lexicon in (Eger and Mehler, 2016), are sensitive to regular processes of linguistic drift, while local measures (comparing restricted lists of nearest associates) are a better fit for more irregular cultural shifts in word meaning. We here follow this latter path, because our downstream task (detecting armed conflicts dynamics from semantic representations of country names) certainly presupposes cultural shifts in the associations for these country names (not a real change of dictionary meaning). Additionally, local neighborhood measures of change are more sensitive to nouns, which makes them even better for our purpose.
It is important to note that in (Hamilton et al., 2016b) and other previous work on the subject, proper names were mostly filtered out: their authors were interested in more global semantic shifts for common nouns. In contrast to this, for the practical task of monitoring news streams, we here make proper names (countries and other toponyms) our main target. We are mostly interested in what is happening to this or that named entity, not in whether there were subtle changes in the meaning of some common noun. Another difference between the previous work and ours is that our time span is much smaller: not decades but years.
Data description
In this section we provide some background on the conflict dataset that forms the basis of our experiments, and the modifications we have applied to extract the gold standard to evaluate diachronic embeddings models.
The UCDP/PRIO Armed Conflict Dataset 1 maintained by the Uppsala Conflict Data Program 2 and the Peace Research Institute Oslo 3 is a manually annotated geographical and temporal dataset with information on armed conflicts, in the time period from 1946 to the present (Gleditsch et al., 2002). It encodes both internal and external conflicts, where at least one party is the govern-ment of a state. The Armed Conflict Dataset is widely used in conflict research; thus, this can be the beginning of a fruitful collaboration between social scientists and computational linguists.
The collection of the dataset started in the mid-1980s under the name Conflict Data Project, but has since then evolved constantly. In the autumn of 2003 the amount of work on conflict data collection led to a change in the name of the project and it was thus turned into the Uppsala Conflict Data Program.
An essential notion in the UCDP project is that of armed conflict, defined as 'a contested incompatibility concerning government and/or territory where the use of armed force between 2 parties results in at least 25 battle-related deaths' (Sundberg and Melander, 2013). Note that armed force here means the use of arms in order to promote the parties general position in the conflict, resulting in deaths. In turn, arms means any material means, e.g. manufactured weapons but also sticks, stones, fire, water etc. Organized actor can mean a government of an independent state, or a formally or informally organized group according to UCDP criteria [Ibid.].
The subset of the data that we employ is the UCDP Conflict Termination dataset. 4 It contains entries on starting and ending dates of about 2000 conflicts. We limited ourselves to the conflicts taking place between 1994 and 2010. We omitted the conflicts where both sides were governments (about 2% of the entries), for example, the 1998 conflict between India and Pakistan in Kashmir. The reason for this is that with these entries, distributional models have a hard time telling the name of the state (conflict actor) from the name of the territory (conflict location). Thus, we analyzed only the conflicts between a government and an insurgent armed group of some kind (these conflicts constitute the majority of the UCDP dataset anyway).
Another group of the omitted conflicts is where at least one of the sides was mentioned in the full Gigaword less than 100 times. The rationale for this decision was that these conflicts have too little contextual coverage in the corpus for our models to learn meaningful representations for them. These cases constitute about 1% of the entries.
In total, the resulting test set mentions 52 unique locations and 673 unique armed conflicts. It also includes the UCDP intensity level of the conflict in the current year: 493 conflicts are tagged with the intensity level 1 (between 25 and 999 battlerelated deaths), and 180 conflicts with the intensity level 2 (at least 1,000 battle-related deaths). For location-year pairs with no records in the UCDP dataset we assign the tag 0, indicating that there were no armed conflicts in this location at that time.
We then represented this data as a set of data points equal to the differences (δ) between the location's conflict state in the current year and in the previous year, 832 points in total (52 locations × 16 years). If there were several conflicts in the location in this particular year, we used the average of their intensities. As an example, for Congo, the However, for practical reasons it is more useful to predict a human-interpretable class of the conflict state change, rather than a scalar value. A version of this test set was produced where δ values were transformed to classes: The 'shifting' classes War and Peace constitute 10% and 11% of the data points respectively. Thus, they are minority classes and we are mostly interested in how good the evaluated models are in predicting them. Below we describe the evaluated approaches.
Evaluated approaches
For training distributional word embedding models, we employed the Continuous Bag-of-Words 5 algorithm proposed in (Mikolov et al., 2013), as implemented in the Gensim toolkit (Řehůřek and Sojka, 2010). This was chosen because it allows us to straightforwardly update the models incrementally with new data, unlike, for example, 5 Continuous Skipgram showed comparable but slightly worse results, thus we report only those for CBOW.
Representing time in the models
As we are dealing with temporal data, we experiment with different methods for representing chronological information in word embedding models. All Gigaword texts are annotated with publishing date, so it is trivial to compile yearly corpora starting from 1994. Then, we trained three sets of word embedding models, differing in the way they represent time: 1. yearly models, each trained from scratch on the corpora containing news texts from a particular year only (dubbed separate hereafter); 2. yearly models trained from scratch on the texts from the particular year and all the previous years (cumulative hereafter);
incrementally trained models (incremental).
The last type is most interesting: here we actually 'update' one and the same model with new data, expanding the vocabulary if needed. Our hypothesis was that this can help coping with the inherently stochastic nature of predictive distributional models. However, this turned out to be not entirely true (see Section 5).
Detecting and quantifying semantic shifts
Once the sets of models are there, one can detect semantic shifts in a given query word w q (in our case, always a location name), with two major existing approaches: 1. align two models (current and previous year, M cur and M prev ) using the orthogonal Procrustes transformation, and then measure cosine similarity between the w q vectors in both models, as proposed in (Hamilton et al., 2016b); 2. alternatively, define a set of anchor words related to the semantic categories we are interested in, and then measure the 'drift' of w q towards or away from these 'anchors' in M cur compared against M prev . This is the method we propose in this paper.
The first approach outputs one value of cosine similarity for each data point, representing the degree of the semantic shift, but not its direction. In contrast, the anchor words method can potentially provide information about the exact direction of the shift. This can be quantified in two ways: 1. for each anchor, calculate its cosine similarity against w q in M cur and M prev (dubbed Sim hereafter); 2. as above, but instead of using the cosine, find the position of each anchor in the models' vocabulary sorted by similarity to w q ; we normalize by the size of the vocabulary so that rank 1 means the the anchor is the most similar word to w q while rank 0 means it is the least similar (we dub this approach Rank).
The selection of anchor words is further described in Section 5, but for now note that both methods produce two vectors R prev and R cur , corresponding to the models M cur and M prev . Their size is equal to the number of the anchor words, and each component of these vectors represents the relation of w q to a particular anchor word in a particular time period.
To compute the differences between these vectors, one can either: 1. calculate the cosine distance between these 'second-order vectors', as described in (Hamilton et al., 2016a); we dub this SimDist or RankDist, depending on whether Sim or Rank was used; 2. element-wise subtract R prev from R cur to get the idea of whether w q drifted towards or away from the anchors; we dub this SimSub or RankSub.
In the first case, the output is again one value, and in the second case it is the vector of diachronic differences, with the size equal to the number of the anchor words. These 'features' can then be fed into any classifier algorithm.
Results
To predict the actual 'direction' of the semantic shift (whether armed conflicts are escalating in the location or vice versa), one needs to perform classification into 3 classes: war, peace and stable.
To evaluate the approaches described in Section 4, we need a set of anchor words strongly related to the topic of armed conflicts. For this we adopted the list of search strings used within UCDP to filter the news texts for subsequent manual coding (Croicu and Sundberg, 2015): kill, die, injury, dead, death, wound, massacre. Additionally, an expanded version of this list was created, where every initial anchor word is accompanied with its 5 nearest associates (belonging to the same part of speech) in the CBOW model trained on the full Gigaword. This resulted in a set of 26 words (some nearest associates overlap). The classification itself was done using a onevs-rest SVM (Boser et al., 1992) with balanced class weights. The features used were either the cosine distance between R prev and R cur (in the case of SimDist and RankDist) or the result of R cur − R prev (in the case of SimSub and RankSub). In the first case we have only one feature, while in the second case the number of features depends on the number of the anchor words.
The results for CBOW, evaluated with 10-fold stratified cross-validation, are presented in Table 1 in the form of macro-averaged F1.
The labels for approaches are the same as in section 4. Procrustes is our baseline: it does not use any anchor words, only the cosine distances between w q in aligned models.
Overall, one can see that more words in the anchor sets is beneficial, and using R cur − R prev (Sub) is almost always better than cos( R cur , R prev ) (Dist). As for the using of either cosine similarities (Sim) or ranks (Rank) as R values, there does not seem to be a clear winner.
We also tried to concatenate similarities and ranks to produce the feature vector of size 52. However, this did not improve the classifier performance. It is interesting that the best results are shown by the separate models: at least for this particular task, it does not make sense to employ schemes of updating the models with new data or concatenating new corpora with the previous ones. It seems that the models trained from scratch on yearly corpora are more 'focused' on the events happening in this particular year, and thus are more useful.
Note that for the Procrustes alignment baseline it is vice versa: separate models are the worst choice for alignment, probably because they are too different from each other (each initialized independently and with independent collection of training texts). Anyway, the anchor words approach outperforms the Procrustes alignment baseline in all types of models. Hamilton et al. (2016b) report almost perfect accuracy for the Procrustes transformation when detecting the direction of semantic change (for example, the meaning of the word 'gay' moving away from 'happy' and towards 'homosexual'). However, our task and data is different: the time periods are much more granular and we attempt to detect subtle associative drifts (often pendulum-like) rather than full-scale shifts of the meaning. Table 2 provides the detailed per-class performance of the best model (separate CBOW with the expanded word list, using differences in anchor ranks as features). In parenthesis, we give the performance values for the stratified random guess baseline. Detecting stability breaks seems to be more difficult than detecting the 'no changes' state. The performance for the 'war' and 'peace' minority classes is far from ideal. However, it is significantly better than chance.
Conclusion
In this paper, we evaluated several approaches for extracting diachronic semantic shifts from word embedding models trained on texts from differ-ent time periods. We have focused on time spans equal to one year, using the Gigaword news collection as the training corpus. As the gold standard for testing, we adapted a dataset from the field of conflict research provided by the UCDP and containing manually annotated data about the dates of armed conflicts starting and ending all over the world. Thus, we applied diachronic word embedding models to the task of predicting the events of conflicts escalating or calming down in 52 geographical locations, spanning over 16 years (1994)(1995)(1996)(1997)(1998)(1999)(2000)(2001)(2002)(2003)(2004)(2005)(2006)(2007)(2008)(2009)(2010) 6 .
The conclusion is that tracing actual real-world events by detecting 'cultural' semantic shifts in distributional semantic models is a difficult task, and much work is still to be done here. The approaches proposed in the previous work -mainly for large-scale shifts observed over decades or even centuries -are not very successful in this more fine-grained task. Our proposed 'anchor words' method outperforms them by large margin, but its performance is still not entirely satisfactory, achieving a macro F1 measure of 0.36 on the task of ternary classification ('stable', 'escalating', 'calming down').
We plan to further study ways to improve the performance of diachronic word embedding models in the area of armed conflicts and other types of events. If successful, these techniques can be used to semi-automate the labor-intensive process of manually annotating the social science data, as well as to mine news text streams for emerging events and trends. It can also be interesting to trace differences in diachronic representations relative to the source of the training texts (for example, the NYT newspaper against the Xinhua news agency). | 4,170.6 | 2017-08-01T00:00:00.000 | [
"Computer Science",
"Linguistics",
"Political Science"
] |
The 3-Stage Optimization of K-Out-Of-N Redundant IRM With Multiple Constraints
In the present scenario Reliability plays a key role for solving complex executive problems. This paper addresses a redundant Integrate Reliability Model (IRM) optimization for the k-out-of-n configuration system with multiple constraints. Generally the reliability of a system is treated as function of cost, but in many real life situations other considera- tions apart from conventional cost constraint like weight, volume, size, space etc., play vital role in optimizing the system reliability. Quite a few IRM’s are reported with cost constraint only in optimizing the system reliability. As the literature informs that few authors mentioned IRM’S with Redundancy and this paper focuses a novel method of optimizing a Redundant IRM with multiple constraints to encounter the hidden impact of additional constraints apart from the cost constraint while the system is optimized by considering the k-out-of-n configuration system.
INTRODUCTION
The paper is focused to design and to optimize an Integrated Reliability Model for Redundant system with multiple constraints for the k-out-of-n configuration system as a beginning in the mentioned area of the research and initiated optimizing the system reliability. Integrated Reliability Model (IRM)refers to the determination of the number of components (x j ), component reliabilities (r j ), Stage reliabilities (R j ) and the system reliability (R s ) where in the problem considers both the unknowns that is the components reliabilities and the number of components in each stage for the given cost constraint to maximize the system reliability. So far in literature the integrated reliability models are optimized using cost constraint alone where there is an established truth between cost and reliability.
This prompted the author to present a piece of novel aspect of Reliability Optimization through modeling by considering an Integrated Reliability Model for a Redundant System by treating Weight and Volume as additional constraints apart from the conventional Cost constraint to optimize the System Reliability, to negotiate the hidden impact of the additional constraints like Weight and Volume for the k-out-of-n configuration reliability model.
MATHEMATICAL MODEL
The objective function and the constraints of the model maximize (1) subject to the constraints non-negative restriction that x j is an integer and r j , R j >0
MATHEMATICAL FUNCTION
To establish the mathematical model, the most commonly used function is considered for the purpose of reliability design and analysis. The proposed mathematical function (5) Where aj, b j are constants.
System reliability for the given function (6) The number of components at each stage X j is given through the relation The problem under consideration is maximize (8) subject to the constraints (9) Where λ1, λ 2 , λ 3 are Lagrangean multipliers.
CASE PROBLEM:
To derive the optimum component reliability(r j ), stage reliability(R j ), number of components in each stage(x j ) and the system reliability(R s ) not to exceed system cost Rs.1000, weight of the system 1500 kg and volume of the system 2000 cm 3 .
CONSTANTS:
Stage f j g j h j r j,min r j,max 1 0.9 0.5 0.2 0.5 0.99 0.9 0.5 0.2 0.5 0.99 3 0.9 0.5 0. 6. HEURISTIC METHOD The Lagrangean multipliers method gives a solution to arrive at an optimal design quickly rather than sophisticated algorithms. This is of course done at the cost of treating the number of components in each stage (x j ) as real. This disad-
ReseaRch PaPeR
vantage can be overcome, by the heuristic approach. Heuristic methods, in most cases employ experimentation and trial-and-error techniques. A heuristic method is particularly used to come rapidly to a solution that is reasonably close to the best possible answer, or 'optimal solution'.
SENSITIVITY ANALYSIS
It is observed that when the input data of constraints is increased by 10% there is only a 4.09% increase in system reliability. When the input data is decreased 10%, there is only an 8.3% decrease in system reliability. When one factor is varied, keeping all the other factors unchanged, the variation in the system reliability is as shown in the following 4.09% increase The analysis confirms that the volume factor is more sensitive to input data than are cost and weight.
DYNAMIC PROGRAMMING
The heuristic approach commonly provides a workable solution which is approximate one. To validate the established redundant reliability system and to obtain the much needed integer solution the Dynamic Programming method is applied. The Lagrangean Method can be used as the input for the Dynamic Programming Approach, in order to determine the stage Reliabilities, System Reliabilities, Stage Cost and the System Cost. The Dynamic Programming Approach provides flexibility in determining the number of components in each stage; Stage Reliabilities and the System Reliability for the given System Cost. As per the procedure the parameter values derived from the Lagrangean are given as inputs for the Dynamic Programming Approach to obtain the integer solution.
CONCLUSIONS
The integrated reliability models for redundant systems with multiple constraints for the k-out-of-n configuration system is established for the commonly used mathematical function using Lagrangean method approach where component reliabilities (r j ) and the number of components (x j ) in each stage are treated as unknowns. The system reliability (R s ) is maximized for the given cost, weight and volume by determining the component reliabilities(r j ) and the number of components required for each stage( x j ). The Lagrangean Multiplier Method provide a real valued solution, the Heuristic approach is considered for analysis purpose which provided a near optimum solution wherein the values of component reliabilities ( r j ) are taken as input to carry out heuristic analysis. The analysis of Heuristic approach results in gaining a solution which ought to be an approximate one even after its validation and to derive the much needed scientific integer solutions for the defined problem, the Dynamic Programming approach is applied. The advantage of Dynamic Programming is that the number of components required for each stage (x j ) directly gives an integer value along with the other values of the parameters, which is very convenient for practical implementation for the real life problems. | 1,426.6 | 2011-10-01T00:00:00.000 | [
"Engineering"
] |
Effect of mechanical damage and wound healing on the viscoelastic properties of stems of flax cultivars (Linum usitatissimum L. cv. Eden and cv. Drakkar)
As plant fibres are increasingly used in technical textiles and their composites, underlying principles of wound healing in living plant fibres are relevant to product quality, and provide inspiration for biomimetic healing in synthetic materials. In this work, two Linum usitatissimum cultivars differing in their stem mechanical properties, cv. Eden (stems resistant to lodging) and cv. Drakkar (with more flexible stems), were grown without wound or with stems previously wounded with a cut parallel or transversal to the stem. To investigate wound healing efficiency, growth traits, stem biomechanics with Dynamic Mechanical Analysis and anatomy were analysed after 25-day recovery. Longitudinal incisions formed open wounds while transversal incisions generated stem growth restoring the whole cross-section but not the original stem organisation. In the case of transversal wound healing, all the bast fibre bundles in the perturbed area became lignified and pulled apart by parenchyma cells growth. Both Linum cultivars showed a healing efficiency from 79% to 95% with higher scores for transversal healing. Morphological and anatomical modifications of Linum were related to mechanical properties and healing ability. Alongside with an increased understanding of wound healing in plants, our results highlight their possible impact on textile quality and fibre yield.
Introduction
From a material scientist's point of view, stems of non-woody plants can be considered as fibre-reinforced composites, which are made up of unlignified collenchyma fibres or lignified sclerechyma fibres embedded in a matrix of lignified or unlignified parenchymatous tissues. Having in mind that the capability to repair external and internal damage at different length PLOS two plants per pot and were grown in 12-cm diameter and 7-cm deep pots filled with compost. Watering regime was set-up daily to ensure non-limited water supply during growth. All plants were cultivated outside in the Botanic Garden of Freiburg, Germany. At 69 DAS, each plant was marked at 5 cm from the stem apex. A third of all the plants were kept as control without wound, a third was wounded with a cut parallel to the fibres ( Fig 1A) and the last third wounded with a cut perpendicular to the fibres (Fig 1B). Wounds were performed below the 5 cm-mark with a razor blade (Derby Extra Double Edge Razor Blade) where wound depth and length were controlled with a blade holder to ensure a reproducible wound size (around 400μm deep and 3-cm long). Depth was selected in order to damage the sclerenchyma fibres distributed in the stem periphery. Plants were then left to recover until the mechanical measurement at 95 DAS.
Measurement of morphological traits and mechanical properties
Plant height was measured prior to mechanical tests at 95 DAS (just before flowering stage). 3-cm long segments of Linum stems were taken from below the mark previously made at 69 DAS containing either no wound for the controls, a longitudinal wound for parallel damage to the fibres or a transversal wound for damage perpendicular to the fibres (Fig 1). The maximum sample length fitting the DMA test was 30 mm, therefore samples of 30 mm were used to perform the mechanical analysis. The resulting length between the clamps was 13-16 mm. In the case of a transversal wound, the wound was placed in the middle and the sample cut 15 mm below and above the wound to obtain 30 mm segment (Fig 2A). In the case of a longitudinal wound, the middle of the wound was used as reference point and the sample cut 15 mm below and above this point to obtain 30 mm segment ( Fig 2B). Diameters were measured at four points (every 1 cm) along each segment and along two perpendicular directions. In the case of a wounded segment, one of the directions was along the wound. The cross-sectional area was calculated on basis of the measured diameters, whereby the control plants have an almost perfect round shape. The healed cross-sections have an elliptical shape by means of the open wound of longitudinal cuts or the overflowing cell building of transversal cuts. Because the DMA software uses area as a perfect circle to calculate the mechanical parameters, an "equivalent circle diameter" was calculated for the controls and the wounded stems. Dynamic mechanical tests (TA Instruments DMA Q800 Dynamic Mechanical Analyzer) were carried out on the segments, in tensile configuration and strain sweep mode, at 25-28˚C, with an excitation frequency of 1 Hz and a pre-load force of 0.01 N. The two ends of each segment were tightened in the fixed and the moveable clamp, using a screwdriver Snap-on USA QDriver3 to ensure a constant fixation torque of 34 N cm for all specimens. The fixation torque was chosen so that the segments were held firmly in the clamps, but not excessively squeezed at the ends. The initial length of the specimens (length between the clamps) was 13-16 mm, and the amplitude of deformation was increased from 1 to 17 μm, corresponding to strain of 0.006% to 0.13%. The strain was calculated as the ratio between the amplitude of deformation and the initial length of the specimen. In this dynamic oscillation test method the raw signals of force, amplitude of deformation and phase angle were measured. These parameters were used to calculate the storage stiffness (K'), which corresponds to the in-phase force response of the material to the imposed sinusoidal deformation, and loss stiffness (K''), which corresponds to the out-of-phase force response of the material. Tan Delta is then calculated as the ratio of K'' to K'. Finally, the following relationship between the stiffness in tension and the tensile elastic modulus is used: where K is the stiffness in tension (N m -1 ), E the tensile modulus (MPa), A the sample crosssectional area (m 2 ), L sample length (m). This analysis provides an average tensile modulus of the stem, considering that it is a hierarchically organised composite material composed of several phases (tissues, cells, fibres, cell walls), which scale is below the overall scale of the samples. The tensile storage and loss moduli are obtained as follows: and where E', E'' are respectively the Storage and Loss Modulus (MPa), and K s ', K s '' the measured Storage and Loss Stiffness of the sample (N m -1 ). The cross-sectional area A, to be introduced in Eqs 2 and 3 to obtain the Storage and Loss Moduli (MPa) was calculated using the diameters of the segments measured as described above. We analysed in this study both Stiffness, which is a direct measurement from the DMA force sensors, and the Moduli, which are all calculated including the variability of the diameters, but which are often considered in mechanical analysis.
The number of replications for the cv. Drakkar were n = 18 for the control stems, n = 16 for the stems with a longitudinal wound and n = 16 for the stems with a transversal wound. The number of replications for the cv. Eden were n = 18 for the control stems, n = 18 for the stems with a longitudinal wound and n = 18 for the stems with a transversal wound (raw data see S1 Table).
Anatomical studies
Each tested segment was fixed in a Glutaraldehyde solution and embedded in resin. Crosssections of segments were prepared with diamond-blade microtome at thicknesses of 3 μm and 1 μm and mounted on microscope slides. Thin sections of 1 μm were stained with Acridine Orange highlighting lignified tissues in yellow to green. Sections of 3 μm were stained with an overview-staining comprising a mix of Safranin, Acide Yellow, and Methylene Blue showing lignified tissues in red. Microscopic pictures were performed with an Olympus BX 61. A Zeiss filter set FITC (Excitation 495 nm, Emission 517 nm) was used for the fluorescent staining. Cross sections of approximatively 0.3 mm from fresh material were sectioned by hand with a razor blade. In order to double check lignified and non-lignified tissues, two staining were used: Fuchsin-Chrysoidin-Astrablue (FCA) highlighting lignified tissues in red-pink; and hydrochloric acid (10%) and phloroglucinol (3%) in 95% ethanol highlighting specifically lignified tissues in red. FCA pictures were performed with an Olympus BX 61 and phloroglucinol pictures with a D50 Nikkon camera and AF-S Micro Nikkor 60 mm lens.
Statistical analysis
Mechanical properties (i.e. Storage Modulus, Stiffness and Tan Delta) were analysed in relation to strain amplitude (0.005 to 0.12%), wound type (Control, Longitudinal and Transversal) and Linum usitatissimum cultivar (cv. Eden and Drakkar). Morphological traits (i.e. plant height and diameter) were only investigated in relation to wound type and Linum usitatissimum cultivar (see S1 Fig in Supporting Information). Because we measured repeated mechanical properties (through strain sweep mode) for each stem, we carried out the analysis using linear mixed-effects models using the function lmer (lme4 package) in the statistical software R [29] following the model-building methods for repeated measures and longitudinal data [30,31]. Linear mixed-effects models were also indicated to analyse morphological traits especially to take biological variability into account (random factors).
We were mainly interested in the effect of wound type and cultivar on the mechanical properties after healing. Therefore "wound type" and "cultivar" were treated as fixed effects. The parameters "pots" and "samples" were treated as random effects where "samples" are nested into "pots" as each pot contained two plants. The parameter "Strain", calculated from the strain amplitude, was treated as a fixed effect nested into "samples" (Fig 3). We obtained full models where the fixed effect "Strain" was nested into random effects (see S1 Fig in Supporting Information). Both random slopes and random intercepts were included in the full models to take into account any variability at the level of wound type, Linum cultivar and Strain. In other words, we allowed the response (e.g. mechanical properties) to vary differently to each wound type, or Linum cultivar or Strain in order to reflect the biological variability in plants. Only significant parameters were retained for the final models (see S1 Fig in Supporting Information). The significance of both random and fixed effects was judged by performing likelihood ratio tests and comparing AIC (Akaike Information Criterion) and BIC (Schwarz/Bayesian Information Criterion) values between models. The relevance of random parameters was also tested by comparing models without random effects (linear model) and with random effects (linear mixed-effects model) with likelihood ratio tests performed with parametric bootstrap approach [31]. When necessary, optimizer options were used to solve convergence problems with the optmix package. Furthermore, the function mstep (lmerTest package) was used to perform a final check of the significant parameters to be retained. All estimates were taken from the final model in each case and used to draw final models on the raw data. Finally, the function lsmeans was used to assess the significant difference between "wound type" and "cultivar" (Post Hoc tests) and to obtain the true means (least square means) with their 95% confidence intervals. Least squares means were calculated from the final models, therefore took into account all factors (both fixed and random effects). The variables Storage Modulus, Stiffness versus Strain were all log-transformed to meet the expected relationship on log-log scale. However, means (least squares means) are presented on the original scale with their 95% confidence interval (CI). The relationship of Tan Delta versus Strain was not log-transformed as the transformation did not improve significantly the fitting. Plant Height and Diameter were also not log-transformed. The injured stems also differed considerably from the controls in terms of tissue organisation and histology. Linum usitatissimum control stems consist of an epidermis, a cortex, bundles of bast fibres (sclerenchyma fibres), phloem, cambium, xylem and pith (Drakkar: Fig In the middle of the wound, where wound edges are completely apart, main changes are the severe loss of a major part of the cross section and the development of a protective barrier (suberin and lignin deposition) on the wound edges and inside along the pith (Drakkar: Fig 4K-4N, Eden: Fig 5K-5N). Tissue organisation remains remarkably the same in the remaining stem tissues. Again, some of the bast fibre bundles bordering the wound are also lignified ( Fig 5I and 5J).
Stem morphology and anatomy
In the case of transversal wound healing, tissue organisation is profoundly perturbed with a larger region of the stem being affected (Drakkar: Fig 4P-4S, Eden: Fig 5P-5S). The injured tissues (i.e. cortex, bast fibres, phloem and part of the xylem) show a complete closing of the wound thus restoring the whole cross section but not the original stem organisation. Wound healing stimulated a ligno-suberized deposition in the epidermis and part of the cortex forming a protective barrier. All the bast fibre bundles in the perturbed part remained cut up without reconnection of the fibre ends but became lignified and pulled apart by parenchyma cells growth while the undamaged bundles remained non-lignified with a regular distribution in the periphery. The phloem and cambial tissues are greatly disturbed and a pronounced formation of parenchyma cells and narrows vessels occurs. Some non-lignified narrow vessels are observed within the xylem could possibly be remains of former cambium before incision. Vascular vessels of the phloem and xylem present an organisation parallel to the incision instead of a classic radial orientation for the xylem. Pith is also affected with some cells showing lignification (Fig 6D and 6E).
Calculating the model parameters and fitting models
Storage modulus. The relationship between Storage Modulus and Strain was significantly negative for both cultivars and all wound types (Fig 7; Table 1a). Strain significantly affected (Tables 2a and 3).
The type of wound also significantly affected the Storage Modulus (χ 2 (4) = 35.50, p < 0.0001). The average Storage Modulus decreased with both longitudinal and transversal wound compared to control stems without wound (Table 2a). Transversal wound healing recovered the Storage Modulus most efficiently, closely approaching the control properties (t (108.16) = 0.94, p = 0.62 from Tukey Post hoc tests), whereas in stems with longitudinal wound recovery of Storage Modulus was significantly lower than controls (t(108.16) = 3.91, p = 0.0005) and significantly lower than in the stems with transversal wounds (t(108.16) = -2.93, p = 0.011). Stems with longitudinal and transversal wounds showed lower mechanical properties by around 21.0% and 5.5% respectively compared to control stems (Table 3). Thus wound-healing could not recover completely the Storage Modulus.
Interestingly, the Linum cultivar × wound type interaction was not significant (χ 2 (2) = 0.50, p = 0.78). It indicated that Linum cultivar and wound type were not inter-dependent and although both cultivars showed different properties, they both reacted the same way to the Wound healing of flax cultivars different wound types. This interaction was therefore removed from the final model. In contrast Linum cultivar × Strain (χ 2 (1) = 39.13, p < 0.0001) and wound type × Strain (χ 2 (2) = 30.21, p < 0.0001) interactions were significant and thus kept in the model. Furthermore, the Linum cultivar × wound × Strain interaction was not significant (χ 2 (2) = 3.31, p = 0.192) and therefore removed from the model. For the random effects, the pot effect was not significant (χ 2 (78) = 22.70, p = 1) and was therefore removed from the model. Also, having two stems per pot did not affect their properties. In contrast, the random effect of sample was highly significant (from bootstrapping approach, p < 0.0001) indicating the marked biological variability between Linum stems. Keeping random effect was essential to the model. The table of estimates from the final lmer model from Storage Modulus is shown in Table 1a and the data plotted together with the model fitting in Fig 7. Stiffness. The relationship between Stiffness and Strain was significantly negative for both cultivars and all wound types (Fig 8; Table 1b). Strain significantly affected Stiffness (χ 2 (4) = 385.58, p < 0.0001) indicating that Stiffness was decreasing with Strain in all cases.
The cultivar of Linum significantly affected Stiffness (χ 2 (2) = 39.99, p < 0.0001). Eden cultivar produced stems with higher Stiffness compared to Drakkar (t(108.16) = -5.111, p < 0.0001 from Tukey Post hoc tests). On average for Drakkar stems this mechanical property was lower by around 29.6% compared to Eden stems (Tables 2b and 3). The type of wound also significantly affected Stiffness (χ 2 (4) = 30.12, p < 0.0001). The average Stiffness decreased with both longitudinal wound and transversal wound compared to control stem without wound (Table 2b). Transversal wound healing recovered Stiffness the most efficiently approaching the control properties being not significantly lower (t(108.16) = 1.87, p = 0.153 from Tukey Post hoc tests), whereas Stiffness of stems with longitudinal wound recovery was significantly lower than controls (t(108.16) = 3.77, p = 0.0008). In contrast to Storage Modulus, Stiffness of stems with longitudinal wound was not significantly lower than those with transversal wound (t (108.16) = -1.88, p = 0.150). Longitudinal wounds and transversal wounds lead to a lower stiffness by around 27.0% and 14.4%, respectively, compared to control stems (Table 3). Thus wound-healing could not completely recover the stem Stiffness.
Interestingly, as seen for the Storage Modulus, the Linum cultivar × wound type interaction was not significant (χ 2 (2) = 0.86, p = 0.65). This interaction was therefore removed from the final model. In contrast Linum cultivar × Strain (χ 2 (1) = 39.01, p < 0.0001) and wound type × Strain (χ 2 (2) = 29.97, p < 0.0001) interactions were significant and thus kept in the model. Furthermore, the Linum cultivar × wound × Strain interaction was not significant (χ 2 (2) = 3.29, p = 0.193) and therefore removed from the model. As for Storage Modulus, the pot effect was not significant (χ 2 (78) = 15.46, p = 1) and the random effect of sample was highly significant (from bootstrapping approach, p < 0.0001). The table of estimates from the final lmer model from Stiffness is shown in Table 1b and the data plotted together with the model fitting in Fig 8. Tan Delta. The relationship between Tan Delta and Strain was significantly positive for both cultivars and all wound types (Fig 9, Table 1c). Strain significantly affected Tan Delta (χ2 (4) = 384.79, p < 0.0001) indicating that Tan Delta increased with greater Strain in all cases.
Cultivar of Linum significantly affected the Tan Delta (χ2(4) = 21.86, p < 0.001). Eden cultivar produced stems with higher Tan Delta compared to Drakkar (t(108.16) = -5.11, p < 0.0001 from Tukey Post hoc tests). For Drakkar stems this mechanical property was lower by around 3.8% compared to Eden stems (Tables 2c and 3). Type of wound significantly affected the Tan Delta (χ 2 (4 = 26.51, p < 0.0001). The average Tan Delta of stems with longitudinal and transversal wounds decreased compared to control stems without wound (Table 2c). Transversal wound healing recovered slightly better Tan Delta but was significantly lower (t(108.16) = 3.77, p < 0.001 from Tukey Post hoc tests). Longitudinal wound recovery was not significantly lower than controls (t(108.16) = 1.87, p = 0.15). In contrast to Storage Modulus, Tan Delta of stems with longitudinal wound was not significantly lower than those with transversal wound (t(108.16) = -1.88, p = 0.15). Stems with longitudinal and transversal wounds showed Tan Delta values lower by around 7.1% and 6.1% respectively compared to control stems (Table 3). Thus wound-healing could not recover completely Tan Delta. As for both Storage Modulus and Stiffness, the Linum cultivar × wound type interaction was not significant (χ 2 (2) = 1.59, p = 0.45), and this interaction was therefore removed from the final model. In contrast Linum cultivar × Strain (χ 2 (1) = 12.36, p < 0.001) and wound type × Strain (χ 2 (2) = 0.15, p = 0.015) were significant and thus kept in the model. Furthermore, the Linum cultivar × wound × Strain interaction was not significant (χ 2 (2) = 1.20, p = 0.55) and therefore removed from the model. For the random effects, the pot effect was not significant (χ 2 (78) = 20.15, p = 0.99) and the random effect of sample was highly significant (from bootstrapping approach, p < 0.0001) indicating the biological variability between Linum stems. Keeping random effect was essential to the model. The table of estimates from the final lmer model from Tan Delta is shown in Table 1c and the data plotted together with the model fitting in Fig 9. Plant height at 95 DAS. The relationship between plant height at 95 DAS and type of wound was significantly negative (Table 1d) for both stems with longitudinal and transversal wounds indicating that plant height decreased when damaged with wounds. The Linum cultivar × wound type interaction was highly significant (χ2(2) = 17057, p < 0.0001) indicating a strong inter-dependence of the two fixed factors in contrast to the mechanical properties. Plant height for Drakkar cultivar (Table 2d) was significantly lower with a longitudinal wound by around 12.7% (t(537.2) = 9.56, p < 0.0001 from Tukey Post hoc tests) and 4.5% with a transversal wound (t(310.4) = 3.50, p < 0.0001) compared to the control Drakkar plants. The two types of wounds differed significantly (t(305.9) = -6.20, p < 0.0001). Plant height for Eden cultivar (Table 2d) was also significantly lower with a longitudinal wound by around 18.5% (t (211.9) = 17.93, p < 0.0001 from Tukey Post hoc tests) and 8.2% with a transversal wound (t (232.0) = 7.97, p < 0.0001) compared to the control Eden plants. The two types of wounds differed significantly (t(222.9) = -9.96, p < 0.0001). Thus wounds affected plant height and healing could not recover the height at 95 DAS. For the random effects, the pot effect was significant (χ 2 (1) = 5136.9, p < 0.0001) and was therefore kept in the model nested into sample random effect. The random effect of sample was also significant (from bootstrapping approach, p < 0.0001) indicating the biological variability between Linum stems. Keeping both random effects was essential to the model. The table of estimates from the final lmer model from plant height is shown in Table 1d.
Diameter at 95 DAS. The relationship between tested segment diameter at 95 DAS and type of wound was mainly indicating a negative relationship apart from longitudinal wounding were the output was less significant and not negative (Table 1e). It suggested that the diameter response to wounding was not homogenous. The Linum cultivar × wound type interaction was highly significant (χ2(2) = 2334.8, p < 0.0001) indicating a strong inter-dependence of the two fixed factors in contrast to the mechanical properties. Diameter for Drakkar cultivar (Table 2e) (Table 2e) was significantly lower both with a longitudinal wound by around 9.8% (t(231.1) = -1.55, p < 0.0001 from Tukey Post hoc tests) and 7.4% by a transversal wound (t(209.8) = 2.59, p < 0.0001) compared to the control Eden plants. The two types of wounds differed significantly (t(147.3) = -2.20, p = 0.03). Thus tested segment diameters were mostly affected by wounds and in most of the cases, healing could not recover diameters at 95 DAS.
For the random effects, the pot effect was significant (χ 2 (1) = 7652.5, p < 0.0001) and was therefore kept in the model nested into sample random effect. The random effect of sample was also significant (from bootstrapping approach, p < 0.0001) indicating the biological variability between Linum stems. Keeping both random effects was essential to the model. The table of estimates from the final lmer model from stem diameter is shown in Table 1e.
Discussion
The present study describes significant differences in the mechanical, morphological and anatomical responses to mechanical damage of two Linum usitatissimum cultivars, one with stems resistant to lodging (cv. Eden) and one with more flexible stems (cv. Drakkar). Although mechanical properties could not be fully recovered after a 25-day healing period, both cultivars could show a healing efficiency from 79% up to 95% with higher scores in favour of transversal wound healing.
Effect of wound directions
An important aspect about the healing efficiency is the wound direction with respect to the fibres. Sclerenchyma fibre bundles of Linum usitatissimum are distributed in the periphery of the stem and present a unidirectional arrangement along the stem [28,33]. With this configuration, transversal wounds might cause greater alterations. They completely cut the fibres at right angles and might also cut the cambium at right angles to its plane of division [34], thus altering the vascular function if phloem and/or xylem are affected and markedly weaken the supporting functions of the fibres. The longitudinal wounds cause separation of the bundles while preserving the continuity of the fibres and consequently keeping more or less the supporting functions unaltered and injure fewer cells [34]. Therefore, we might expect the stems with longitudinal wounds to recover better. Interestingly, after the 25-day healing period, we observed that the stems with transversal wounds perform better in terms of mechanical properties recovery but also in terms of plant height.
At the morphological level, in the case of transversal wound healing, the injured tissues showed a complete closure thus restoring the whole cross section although not the original stem organisation. In the case of the longitudinal wound healing, injured tissues were not restored and showed structural changes in the bordering cells, where wound edges remain completely apart. An open wound might indeed disadvantage a stem with desiccation of internal tissues usually not exposed and a loss of mechanical support by the void created. In contrast, the transversal wounds responded by a local reinforcement of the wounded tissues by enhanced growth and marked histological changes including clearly visible lignification of fibre bundles.
The complete opening of the wound induced after a longitudinal incision might be partly attributed to an anisotropic distribution of (pre-)tension and compression present in the stem having a special arrangement of tissues with different mechanical properties. In the case of a transversal wound, this anisotropic distribution does not result in a permanent opening of the wound, because the upper part of the stem might constrain the wound to close down by applying its own mass. This contribution to bring the wound edges pressed together might facilitate the wound healing.
Furthermore, the incisions were performed during an early phase of development of Linum, where the plants were still building their fibres and were actively growing. The stem continued to increase in height and radius. This radial growth might have enhanced the opening of the longitudinal wounds by enhanced crack propagation and exposed internal tissues, thus maintaining a disadvantage in terms of recovery. The wound timing is therefore a critical element in the wound healing process and its analysis.
Mechanical damage affects fibres quality by triggering tissue lignification
Both Linum usitatissimum cultivars developed protective chemical barriers during wound healing. In the case of a longitudinal wound (Figs 4D and 5D), a protective boundary layer is observed along the open wound (suberin and lignin deposition) with a layer of parenchyma cells below (i.e. wound periderm). Some of the bast fibre bundles bordering the wound became also markedly lignified. In the case of the transversal wound (Figs 4I and 5I), a ligno-suberized deposition in the epidermis and part of the cortex formed a protective barrier. All the bast fibre bundles in the perturbed part were clearly lignified and pulled apart by parenchyma cells growth while the other bundles remained non-lignified with a regular distribution in the periphery.
While the deposition of ligno-suberin material around the wound epidermis has been reported in a previous study on wound response in flax [35], the reaction of the sclerenchyma fibre bundles was not investigated. These fibres are especially important due to their role of mechanical support [25,32] but are also of high interest to industrial use. Among the wound healing responses in our findings, a striking element is the lignification of the sclerenchyma fibre bundles in the damaged area and its vicinity. When not damaged, the Linum sclerenchyma fibre bundles have a low lignin content ranging from 2% to 5% [15,23,25,33]. This lignin deposition might have two important consequences. First, the fibre quality could be affected because higher lignin content directly impacts the fibre quality by decreasing its strain to failure [32,36,37]. This parameter is so important that transgenic flax with lignin deficiency is investigated to improve elastic properties of bast fibres [37]. Damaged or wounded flax stems might therefore decrease textile quality. Secondly, our findings showed a singular case of lignin deposition triggered by mechanical wound during sclerenchyma differentiation. Lignin deposition during sclerenchyma differentiation occurs naturally in Alfalfa (Medicago sativa L.) undamaged stems [38] where the onset of lignification occurs in the secondary cell walls contrary to secondary xylem. A similar process might occur for the sclerenchyma fibre bundles in the injured region of the cultivars stems, explaining how the fibres already present during incision could be completely lignified although they are naturally very low in lignin content.
Mechanical healing efficiency rate consistent between the two cultivars
The mechanical response of Linum usitatissimum cv. Drakkar and cv. Eden stems to increasing strain was comparable for all wound types and for controls. Both cultivars showed decreasing Storage Modulus and Stiffness at higher strains, while Tan Delta increased at higher strains. The slight decrease in Modulus and Stiffness could be attributed to some early damage in the cells of the fibres, due to an orientation of the microstructure in the strain axis. As a result, the dissipation that is observed is most likely resulting from viscous dissipation, but also to a small extent of damage in the stems. Since both cultivars, wounded or not showed the same trend, the dissipation is assumed to follow similar mechanisms in all cases.
The increase in dissipation of the stem material (represented by Tan Delta) combined with the decrease in elastic behaviour (represented by Storage Modulus) demonstrated that damping properties increase despite wounded stems. These mechanical traits at very small deformations (1 to 17 μm) reflect the role of water and also the crucial role of the sclerenchyma fibres in the stems by an increasing energy dissipation to respond to the increasing deformation. Even though the connection between flax stem properties and processed flax fibres is not straightforward in terms of intrinsic damping properties, it is worth noting that processed flax fibres composites are especially of interest for their intrinsic damping properties in particular in sport equipment [39]. Despite the loss of elasticity via the Storage Modulus decrease and the wounds, Drakkar and Eden stems were still able to enhance damping properties by increasing their dissipation at higher strains.
Furthermore, although the two cultivars presented distinct mechanical properties, with Eden having greater Storage modulus, Stiffness and Tan Delta, their recovery efficiency in percentage was similar for all mechanical properties. For example, they both lost around 20% from their respective Storage Modulus when wounded with a longitudinal incision on the stems. This can be explained by the fact that the parameters wound types and Linum usitatissimum cultivars are non-interdependent for all mechanical properties, thus both cultivars reacted the same way to wound types.
This stable efficiency rate between the two cultivars occurred despite the variability among plants in terms of intrinsic growth and reactions of stem tissues during wound healing represented by the significant random effect "samples". In contrast, plant structural responses (i.e. plant height and stem diameter) to mechanical damage were associated with more variability. The random factor "pot" representing the variability of environmental growth conditions was significant only for plant height and stem diameter. In addition, the parameters wound types and Linum usitatissimum cultivars were also interdependent. It indicates that the structural traits of Drakkar and Eden might be more sensitive to growth conditions than mechanical properties and wound healing. The strong background selection of the cultivars focused on fibre quality could explain our findings. However, plant height and stem diameter could be important traits concerning the quantity of fibres produced and could decrease fibre yield in case of wounded stems.
Implication for technical self-healing material and concepts for composite materials
In addition to the use of plant fibres in composites for natural fibre-reinforced composites, plants are also a source for biomimetic self-healing materials [8,9,40]. Successful concepts developed in recent years were inspired from biological system such as blood flow vascular networks [41] and more recently from the vascular plant system of xylem and phloem [40]. These bioinspired materials include a segregated vasculature into fibre-reinforced polymer composite. The refilling property of the integrated vascular networks, thus allow repeated selfhealing function [42].
In contrast to the concept of micro-tubes delivering a healing agent in the matrix, our findings with the two Linum usitatissimum cultivars showed that the matrix played an active role in the healing process: the healing occurred via the bast fibres (sclerenchyma) but also in other tissues such as vascular system (e.g. xylem) and the matrix (e.g. pith and cortical parenchyma). The stem structure was subjected to a profound reorganisation locally around the wound.
Most of the self-repair models for fibre-reinforced materials were inspired by hardwood species [40]. The Linum usitatissimum cultivars used during our investigation are annual plants presenting a short life history i.e. growing actively over a short period of time compared to a species such as oak (Quercus robur L.). Therefore, they do not experience the same constraints for their structure and have to constantly adjust to any event occurring during the rapid organ development such as wound healing. Furthermore, seed production is crucial for annual plants, consequently the structure bearing the seeds is of upmost importance, making healing of the stem decisive for the reproduction strategy. Therefore, wound healing in annual plant stems might confer another source for bioinspired materials by providing a dynamic approach of self-healing.
Conclusion
This study demonstrated the possibility to use Dynamic Mechanical Analysis (DMA), coupled to more traditional plant observation techniques, to assess and compare the initial and healed properties of plant stems. Alongside with the better understanding of wound healing in plants and their implications in terms of mechanical and structural recovery, our findings show also the possible impact on textile quality and fibre yield. This study was performed during a young phase of the Linum usitatissimum cv. Drakkar and cv. Eden growth, when plants were building their stem and fibres. It would be interesting to evaluate the influence of damage that occurred during a later phase of development towards the end of the fibres building, i.e. when the structure is almost in its final shape. Another interesting aspect would be to investigate the responses with lignin deficiency Linum usitatissimum stems. Lignin being essential to wound healing, how would the plant cope in terms of physiology against pathogens and in terms of mechanical reinforcement? Given the increasing interest in self-healing materials and sustainable materials, further investigations of Linum usitatissimum plant fibres might be of particular interest. This famous plant represents also an invaluable source for self-healing material concepts as an invaluable point of departure for future applications for constructing technical selfhealing fibre-reinforced composites. | 8,163.4 | 2017-10-05T00:00:00.000 | [
"Materials Science"
] |
Dynamic tuning of dielectric permittivity in BaTiO3 via electrical biasing
ABSTRACT Dynamic tuning of optical properties is technologically important for the development of future integrated electronic and photonic devices. Here, we demonstrate the electrically tunable permittivity in optical frequency in epitaxial BaTiO3 (BTO) thin films using spectroscopic ellipsometer measurements under DC bias condition. The lattice tetragonal distortion and domain switching of biased BTO is evident based on the increased anisotropy in permittivity in-plane and out-of-plane. Specifically, out-of-plane permittivity, arising from the c-domains, shows a continuous decrease with the applied voltage, relative to the in-plane permittivity, arising from the a-domains. The dielectric constant can also be reversibly adjusted through electric-reset cycles. GRAPHICAL ABSTRACT IMPACT STATEMENT The current study demonstrates the real-time electrically tunable dielectric permittivity in BaTiO3, probed using spectroscopic ellipsometer at optical frequencies, towards future tunable waveguides, optical components and devices.
Introduction
Non-linear dielectric materials are important for photonic device applications such as low-loss waveguides, wavelength conversion and electro-optic modulators [1,2]. Among them, optical waveguides are a key component in integrated optical circuits with potential use in all-optical signal processing and optical computing [3,4]. Specifically, waveguide materials, such as LiNbO 3 and BaTiO 3 [5,6] possessing a non-zero second-order nonlinear optical susceptibility owing to their noncentrosymmetric crystal structure, are important for nonlinear photon generation and manipulation (e.g. second harmonic generation (SHG) and spontaneous parametric down-conversion (SPDC)). Therefore, tunability in their dielectric properties is technologically important owing to their application in nanoscale integrated optical circuits. Such tunable optical properties have been CONTACT achieved via several methods such as doping and stoichiometry tuning [7][8][9][10]. However, integrated on-chip optical devices require real-time tunability using external stimuli such as electric field, magnetic field, laser excitation and temperature gradient. For example, optical transmission of thin Ag films has been tuned by applying a magnetic field; pump-probe ellipsometry has been used to study ultrafast phenomena; and, tunable reflectance has been achieved using thermal gradient [11][12][13]. Therefore, real-time tunability and detection of optical properties are very much needed for the development of integrated photonic devices. On the other hand, such non-centrosymmetric crystal structures also enable ferroelectric properties. Ferroelectric materials have potential applications in electrically tunable dielectric applications due to the variation of their dielectric constant via applying DC bias [14].
Recently, optical detection of ferroic switching properties has received wide interest particularly due to their switchable characteristics in response to an external stimulus such as electrical field, stress or light [15][16][17][18]. For example, the movement of ferroelectric domain walls has been realized using a low-power coherent light source [18,19]. Current measurement methods of electrically tunable dielectric properties are mainly done using impedance analyzer (preferred at low frequencies, < 100 MHz) or waveguide method (preferred at higher frequencies, GHz) [20]. However, both these techniques suffer from high losses at higher frequencies and are timeconsuming to measure over the entire frequency range, thereby, enabling the measurements at the steady state of the sample. Therefore, new techniques are needed for the real-time dynamic measurement of electrically tunable dielectric constant in the THz frequency range. Spectroscopic ellipsometry could be an ideal technique for real-time dynamic measurement of optical tuning under DC bias in ferroelectric materials at optical frequencies (>100 THz).
In this work, epitaxial ferroelectric BaTiO 3 (BTO) thin film, deposited using pulsed laser deposition (PLD), is chosen to demonstrate the real-time tuning of dielectric characteristic under DC bias. BTO is a typical perovskite ferroelectric material which possesses non-linear optical properties due to its non-centrosymmetric crystal structure. It presents a cubic perovskite structure above the Curie temperature T c and transforms to a tetragonal ferroelectric phase below T c . It is widely used as thin-film capacitors, dynamic random-access memories, chemical sensors, optical modulators and microwave devices owing to its high permittivity, dielectric constant and spontaneous polarization at room temperature [21]. To demonstrate its tunable dielectric properties, out-of-plane DC bias is applied to BTO, and its dielectric response is measured using spectroscopic ellipsometer.
Thin-film growth
The SrRuO 3 (SRO)/BTO/Al-doped ZnO (AZO) multilayer stack was deposited using a PLD system, with a KrF excimer laser (Lambda Physik Compex Pro 205, λ = 248 nm), on single-crystal SrTiO 3 (STO) (001) substrate. The conventional solid-state sintering process was employed to make the SRO, BTO and AZO targets. The oxygen partial pressure was maintained at 200, 40 and 7 mTorr for depositing the SRO, BTO and AZO films, respectively. The growth temperature for the SRO and BTO layers was 700°C while the AZO layer was deposited at 420°C.
Microstructure characterization
The microstructural characterization of the films was done using X-ray diffraction (XRD) and scanning transmission electron microscopy (STEM). XRD θ -2θ scans were captured using a Panalytical X'Pert X-ray diffractometer while STEM micrographs were acquired using FEI Talos-200X microscopeoperated at 200kV. STEM was further used to conduct energy dispersive X-ray spectroscopy (EDS) for chemical mapping.
Optical characterization
The permittivity of BTO was calculated using spectroscopic ellipsometery (JA Woollam RC2). A fixed incident angle of 75°was used and the bias was applied using the top (AZO) and bottom (SRO) electrodes. The measured ellipsometer parameters ψ and are related by the equation: r p /r s = tan(ψ)e i (where r s and r p are the reflection coefficient for the s-polarization and ppolarization light, respectively) were fitted using physically consistent models (mentioned in the text) using the CompleteEASE software. The thickness of the three layers was calculated from the STEM images and input in the model. Figure 1 shows the multilayer stack of SRO, BTO and AZO film on STO substrate that was used for the measurement. Both SRO and AZO act as the bottom and top electrodes respectively. The electric bias is applied to this stack and the elliptically polarized light is measured using the ellipsometer in-situ. The transparent nature of AZO allows us to probe the bias effect on BTO. Being a non-linear optical material, BTO shows ferroelectricity on applying bias. BTO undergoes tetragonal distortion on applying bias, thus causing an increase in polarization as shown in Figure 1. Such tetragonal distortion increases the anisotropy and allows the measurement of in-plane and out-of-plane dielectric permittivity using ellipsometry measurement. Figure 2 presents the cross-sectional STEM images taken under a high-angle annular dark field (HAADF) mode. The EDS mapping, shown in Figure 2(b), confirms the clear and sharp interfaces between the layers. The use of SRO as a bottom electrode facilitates the epitaxial growth of BTO as confirmed by the XRD spectra in Figure S1 (Supporting Information). Epitaxial BTO film of ∼ 72 nm is deposited to overcome its critical thickness and to obtain both a-domains and c-domains with their polar axis lying in-plane and out-of-plane respectively [22]. Figure 2(c and d) shows the high-resolution STEM image of the AZO layer and AZO-BTO interface, respectively. Clearly, the interfaces are nearly perfect with no obvious inter-diffusion. BTO grows along [001] direction with an out-of-plane lattice parameter of ∼ 4.1 A, which is well consistent with the XRD results.
Results and discussion
The non-linear electronic response of BTO is confirmed using piezoresponse force microscopy (PFM). The phase and amplitude maps confirm the switchable and ferroelectric nature of the BTO. The ellipsometry measurements were performed on this multilayer stack at an angle of 75°along with the voltage bias as shown in Figure S2 (Supporting Information). Separate ellipsometer measurements were performed on SRO and AZO films deposited on STO to estimate their permittivity for modeling the multilayer stack. A Drude-Lorentz model was used for modelling SRO permittivity while a combination of Drude, Tauc-Lorentz and Gaussian model was used to model the permittivity of AZO to enforce the Kramers-Kronig consistency. These fitted models were used to model the SRO and AZO layers in the multilayer stack and their permittivity was assumed to be constant with the applied voltage. The BTO permittivity was modeled using the anisotropic Tauc-Lorentz model. The thickness of all the three layers was measured using STEM images and input in the model. Figure 3(b and c) shows the ellipsometer parameter ψ and with and without applied bias, which was fitted using this model to calculate the dielectric permittivity plotted in Figure 3(d and e). Interestingly, BTO shows an increased anisotropy in the biased state with both the in-plane ( ' || ) and outof-plane ( ' ⊥ ) permittivity decreasing with an increased bias.
The unbiased BTO presents a mix of a and c-domains, thus yielding a nearly isotropic response with ' || and ' ⊥ being almost equal. Applying a bias increases the tetragonal distortion by favoring the c-domain formation as seen by the decrease in ' ⊥ . Under the 15 V-bias condition, ' || and ' ⊥ can be effectively treated as a and c , respectively. As observed, a ( ' || ) is greater than c ( ' ⊥ ) which is typically observed in BTO. This arises mainly due to the greater probability of the Ti ion in the oxygen octahedra to displace itself from its [001] position towards the [011] position [23]. Therefore, a small electric field can cause a large polarization in the a-direction as compared to the c-direction, thus leading to a > c , which is consistent with the above results. Additionally, separate temperature-dependent ellipsometer measurements were conducted to confirm that the anisotropic dielectric permittivity is caused by the DC voltage bias. The 15 V bias increased temperature to ∼ 50°C as measured using a type-K thermocouple. Figure S3 (Supporting Information) shows the fitted permittivity of BTO at 50°C using an external heating stage. Clearly, the temperature increase plays a minor role in tuning the permittivity of BTO while the applied bias has a major contribution in tuning the permittivity of BTO. Figure 4(a) shows the DC bias dependent dielectric permittivity of BTO at optical frequencies. Overall, ' ⊥ decreases with an increase in voltage. The maxima observed in the -V plot is not symmetrical around 0 V which might be due to the domain pinning at the substrate interface. Such asymmetrical C-V plots have been reported in prior works and have been attributed to the formation of charged domain walls at the film-substrate interface [24,25]. The decrease in ' ⊥ is consistent with the fact that the BTO becomes less polarizable because of the saturation of the polarizable charge, resulting in the decrease of the dielectric constant for the increasing DC bias voltage. In addition, its permittivity decreases with increasing wavelength (shown for two different wavelengths: 1000 nm and 2000 nm) which is consistent with its normal dielectric dispersion characteristic. On the contrary, the presence of the other four O 2− ions in-plane with the Ti 4+ ion tends to cancel the in-plane polarization change of each other due to the movement of the Ti 4+ ion, and thus such canceling effects result in a less obvious change in the in-plane permittivity compared to the out-of-plane permittivity. Therefore, the increase in the anisotropy of dielectric permittivity at higher voltages is attributed to higher tetragonal distortion. To evaluate the reliability of the measurements, continuous electric-reset cycles were performed. Figure 4(b and c) compares the fitted permittivity after five cycles for the unbiased and biased state, respectively. The comparison between the permittivity of the films before poling and after the applied electric field removed is made in Figure 4(b), showing the permittivity of the thin films at 0 V, cycle 1 (before poling) and 0 V, cycle 5 (after repeating the cycle 5 times). The in-plane ( ' || ) and out-of-plane permittivities ( ' ⊥ ) are very similar at cycle 1 while '⊥ shows a small decrease after the electric field removed after 5 cycles. This might be attributed to a relative increase in the number of c domains after poling since c domains have a lower permittivity than a domains. The dielectric permittivity for both 0 V and 15 V after five cycles shows a similar trend, which demonstrate a reliable repeatability of the measurements. Figure 4(d) illustrates the domain switching mechanism on applying bias. Initially, both a and c domains are present to reduce the overall lattice strain as previously reported [22]. Since all the measurements were done within optical frequency range, only electronic polarization contributes to the overall dielectric permittivity variation. electronic being relatively small, the degree of anisotropy is also small in the unbiased state. Under the biased state, the majority of the domains become c oriented. A deep double-well potential ( E = 11.7 meV) is present along the c-axis as compared to the very shallow double-well potential ( E = 0.8 meV) present along the a-axis, therefore, making the shorter axis to be more polarizable [26]. This is due to the increased interaction of two out of the six O 2− ions (along the c-axis) with Ti 4+ ions, thereby, reducing their electronic polarizabilities. Furthermore, the polarization saturation also decreases ' ⊥ with increasing DC bias. Therefore, ' ⊥ decreases with increasing DC bias voltage.
The tuning of optical permittivity via DC bias in ferroelectric BTO thin films demonstrated in this work presents great opportunities in several aspects. First, the study provides an early demonstration of dielectric tunability in ferroelectric materials at optical frequencies. Demonstrating the tunability at lower voltages by using thinner films and high-k materials could be useful in future device-related applications. Second, this realtime ellipsometry under DC bias technique can be used to probe the domain relaxation dynamics and optical monitoring of opto-electronic devices. Third, applying bias also dynamically tunes the permittivity and produces discrete values, that can be used in integrated opto-electronic logic devices. Besides, these dynamic discrete states can also be used in neuromorphic computing applications. Last, using picosecond time resolution ellipsometer [27] could lead to the detection of ultrafast phenomena, such as the negative capacitance in ferroelectric materials [28] and many others. One of the potential limitations of this study is the limited range of tunability achieved by electric field applied and measured by in-situ ellipsometer due to the relatively small electronic polarization in the optical frequency range. Therefore, BTO films with better ferroelectric properties or other stronger ferroelectric materials such as PZT or LiNbO 3 could show greater tunability under lower applied electrical field.
Conclusion
Optical-based detection of ferroelectric domain switching under DC bias has been demonstrated in ferroelectric BTO films. Real-time ellipsometer measurements of BTO under biased condition reveal the tetragonal distortion confirmed by the increased anisotropy in the in-plane and out-of-plane permittivity. The stability and repeatability of this dynamic tuning is shown by the electricreset cycles. These measurements allow the exploration of phase change properties in ferroelectric materials using light. Such real-time ellipsometer measurements allow the examination of novel physical phenomena in other materials including phase change materials and metal-oxide nanocomposites under optical frequency range. | 3,421.6 | 2020-05-19T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Cosmology with the Redshift-Space Galaxy Bispectrum Monopole at One-Loop Order
We study the cosmological information content of the redshift-space galaxy bispectrum monopole at one-loop order in perturbation theory. We incorporate all effects necessary for comparison to data: fourth-order galaxy bias, infrared resummation (accounting for the non-linear evolution of baryon acoustic oscillations), ultraviolet counterterms, non-linear redshift-space distortions, stochastic contributions, projection, and binning effects. The model is implemented using FFTLog, and validated with the PT Challenge suite of $N$-body simulations, whose large volume allows for high-precision tests. Focusing on the mass fluctuation amplitude, $\sigma_8$, and galaxy bias parameters, we find that including one-loop corrections allow us to significantly extend the range of scales over which the bispectrum can be modeled, and greatly tightens constraints on bias parameters. However, this does not lead to noticeable improvements in the $\sigma_8$ errorbar due to the necessary marginalization over a large number of nuisance parameters with conservative priors. Analyzing a BOSS-volume likelihood, we find that the addition of the one-loop bispectrum may lead to improvements on primordial non-Gaussianity constraints by $\lesssim 30\%$ and on $\sigma_8$ by $\approx 10\%$, though we caution that this requires pushing the analysis to short scales where the galaxy bias parameters may not be correctly recovered; this may lead to biases in the recovered parameter values. We conclude that restrictive priors from simulations or higher-order statistics such as the bispectrum multipoles will be needed in order to realize the full information content of the galaxy bispectrum.
Whilst the above references have been pivotal to the development of a bispectrum model, few contain all the necessary ingredients to allow for robust comparison of theory and observation. In particular, one must account for the backreaction of short-scale physics on the large-scale bispectrum [66,67,85], long-wavelength displacements [86][87][88][89][90][91][92][93], and survey geometry [94,95], all of which can lead to biases in derived parameters if not properly accounted for. In [80] a complete model for the tree-level (leading-order) bispectrum of galaxies in redshift-space was presented and validated, including all the above effects (see also [19]). This allows for precise modelling of the angle-averaged bispectrum monopole, and has facilitated a number of analyses constraining ΛCDM parameters [15] and primordial non-Gaussianity [96,97]. However, this model was restricted to relatively large scales (k < 0.08 h −1 Mpc at z = 0.61). If we wish to further exploit the constraining power of the bispectrum, we must push to smaller scales, by extending the perturbation theory to next order. Whilst [98] has recently demonstrated some work in this direction, a full model for the one-loop bispectrum (including all relevant phenomena such as projection effects) has not yet been presented and validated with simulations.
In this work, we present a complete and systematic computation of the redshift-space galaxy bispectrum monopole at one-loop order. This includes all effects necessary to compare with observational data: deterministic contributions, counterterms, bias renormalization, stochasticity, bin-averaging, and coordinate distortions. This involves the galaxy density at fourth-order: we systematically account for all bias operators (following [79]), and include full treatment of all necessary redshift-space counterterms, ensuring a convergent Taylor series. Our model necessarily depends on a number of free parameters: these account for the unknown complexities of ultraviolet physics (such as galaxy formation physics and feedback), and ensure physical robustness. Efficient computation of the one-loop bispectrum is non-trivial; as such, we devote a significant portion of this work to discussing its practical computation with the FFTLog algorithm [99]. We compare the theoretical predictions to real-and redshift-space bispectra obtained from the PT Challenge simulations [100], which serve both to validate the approach and to assess the information content of the one-loop bispectrum model. Though we restrict to the measurement of σ 8 and primordial non-Gaussianity parameters, one can constrain a variety of other phenomena with the bispectrum, and, further still, our methodology can be extended to other correlators including the bispectrum multipoles [65,101] and the recently-detected trispectrum [102,103].
The remainder of this paper is structured as follows. The theoretical model is presented in §2, before its implementation is outlined in §3. In §4 we give details of the data and analysis choices used to validate the model, before presenting the results of likelihood analyses using the real-and redshift-space galaxy bispectrum in §5 & 6 respectively. §7 comments on the method's applicability to current datasets, with a summary and discussion given in §8. Finally, various technical details are presented in the Appendices, including: A the perturbation theory kernels, B details of the bispectrum integration routines, C discussion of the redshift-space counterterms, and D derivation of the stochastic bispectrum components. Appendix E is devoted to prior volume effects. The key plots of this work are Fig. 1, showing the one-loop bispectrum components, and Fig. 4, displaying the utility of the bispectrum for a BOSS-like survey.
Theoretical Model for the One-Loop Bispectrum
In this work, we analyze the power spectrum and bispectrum of biased tracers (i.e. galaxies) in redshift space at one-loop order. Whilst the one-loop power spectrum and tree-level bispectrum have been described in detail before [e.g. 19,80], a complete model for the oneloop bispectrum has not been presented before (though some aspects can be found in [98]) and will be discussed below, with additional technical details found in the appendices. Here, we will restrict to Gaussian initial conditions; extension to primordial non-Gaussianity is discussed in §5. 2. In the EFTofLSS, the bispectrum is comprised of the following terms at one-loop order [e.g., 66,85,104]: where the first and second terms give the tree-level and one-loop bispectrum in Eulerian perturbation theory, B ct is the derivative and counterterm contribution, and B stoch encodes stochasticity. This is strictly a function of five variables: three lengths, {k 1 , k 2 , k 3 } and two angles, {µ 1 , µ 2 }, for µ i ≡k i ·n with line-of-sightn (hereafter LoS), noting that k 1 µ 1 + k 2 µ 2 + k 3 µ 3 = 0. In real-space, this reduces to just three variables: {k 1 , k 2 , k 3 }.
Bias Expansion
To compute the bispectrum within Eulerian perturbation theory, our first step is to express real-space galaxy density field, δ g , in terms of a basis of bias operators, i.e. all combinations of the density and velocity fields (δ and θ) consistent with the relevant symmetries up to a given order in perturbation theory [79,[105][106][107][108][109][110]. For the one-loop bispectrum, we require terms up to fourth-order (δ 4 L ), and here use the basis of Galileon operators proposed in [79]: where curly brackets separate operators of different order and the bias parameters are marked in color. In (2.2), we drop any terms that do not appear in the one-loop bispectrum; these are all composite local evolution operators such as δ 4 and δ 2 G 2 (Φ v ). Here we have ignored both higher-derivative operators (which we return to below) and bias renormalization, which is discussed in Appendix B. The Galileon operators are defined by where Φ v ≡ ∇ −2 θ is the velocity potential, equal to the Newtonian potential Φ ≡ ∇ −2 δ at leading order. These can be simply generalized to functions of multiple potentials, with (2.2) involving the LPT potentials ϕ 1,2 , satisfying Up to third order, this is equivalent to the bias expansion used in [106] and previous works [e.g., 10,80], with the relations 1 Utilizing (2.2), and expanding each operator in terms of the linear density field δ (1) ≡ δ L , we can define the n-th order contributions to the galaxy density field: where the real-space kernels K n are given in Appendix A.1 and depend on the bias parameters given above. Furthermore, this generalizes to the redshift-space density field, δ s (k), using the well-known mapping [e.g., 111] where f is the logarithmic growth rate, u z (q) = (iµ q /q)θ(q) is the Fourier-space LoS velocity field, and µ q ≡q ·n, for LoS vectorn. The associated kernels, analogous to (2.6), are labelled as Z n and defined for n ≤ 4 in Appendix A.2.
Deterministic Contributions
Utilizing the redshift-space kernels of Appendix A.2, the tree-level bispectrum, B 211 ≡ δ can be written where P L (k) is the linear power spectrum (though see the below discussion on infrared resummation). This depends on the bias parameters {b 1 , b 2 , γ 2 }, as well as the growth rate, f (z). The one-loop terms can be written as loop integrals over the linear power spectrum, and come in four flavors [e.g., 66,104]: × P L (q)P L (|k 1 + q|)P L (|k 2 − q|), × P L (q)P L (|k 2 − q|) + 5 perm., where the B II 321 spectrum is similar to the P 13 (k 1 ) contribution to the one-loop power spectrum. Computation of the loop integrals can be performed via explicit numerical integration or with the FFTLog method [99]; we discuss the latter in §3, with details presented in Appendix B. As well as the tree-level biases, these spectra involve the higherorder parameters {b 3 , γ × 2 , γ 3 , γ 21 , γ × 21 , γ 211 , γ 22 , γ 31 }, of which only γ 21 appears in the oneloop power spectrum.
Counterterms
To ensure a self-consistent theoretical model, we require a set of counterterms, which account for non-idealities in fluid equations (via the viscous stress tensor), and absorb the unknown ultraviolet (UV, q k) behavior of the loop integrals in (2.9) [e.g., 66,67,85]. For the one-loop bispectrum in real-space, these operators are degenerate with derivative operators in the bias expansion, such as ∇ 2 δ. Furthermore, the redshift-space bispectrum contains additional counterterms that appear after the renormalization of contract operators in the perturbative mapping of (2.7); these are discussed in detail in Appendix C.
The overall bispectrum counterterm contribution can be written as where F ctr 2 (k 1 , k 2 ) is the real-space counterterm kernel [79]: 2), and we choose the non-linear scale k NL = 0.45h Mpc −1 [44,112,113]. (2.10) additionally involves the µ-dependent redshift-space kernel Z ctr 2 defined by (2.12) as derived in Appendix C, with k 3z ≡ k 3 µ 3 . In principle, two combinations of C 1 , C 2 and C 5 are constrained by the power spectrum, so only one parameter out of three is independent here. In practice, however, we did not find any difference between imposing the power spectrum constraints on C 1 , C 2 , C 5 or treating them as free parameters. This is why we proceed with keeping them free in what follows. In total, the one-loop bispectrum counterterm depends on 14 free parameters, {β B,i }, and {C i } in addition to the one-loop power spectrum counterterms. 2 Notably, many of the counterterms appearing in (2.12) are degenerate at the bispectrum monopole level; nevertheless, we prefer to keep all of them in the model, and marginalize over them within physically motivated priors. This is done for two main reasons. First, terms with different powers of µ can, in principle, be distinguished even at the bispectrum monopole level thanks to the Alcock-Paczynski projection effect [114], which is described below. Second, the degeneracy between these terms can be broken with higher order angular multipole moments of the bispectrum [37,41], which we will analyze in the future.
Stochasticity
Contributions to the bispectrum are also sourced by the non-deterministic part of the density field [106][107][108][109][110], i.e. that uncorrelated with δ L . At tree-level, this gives two terms, ∝ 1/n, P (k)/n (arising from Poissonian shot-noise with sample densityn), whilst at oneloop order, we must keep contributions suppressed by (k/k halo ) 2 , where k −1 halo is some characteristic halo size. From [79], we have the following form at next-to-leading order in real-space: depending on another five free parameters {A shot,0 , B shot } and {A shot,1 , S 0 , S 1 }, which cannot be constrained with the one-loop power spectrum. In the Poisson limit, A shot,0 = B shot = 1, with all higher-order terms (arising, for example, from halo exclusion) vanish.
In redshift-space, significantly more dependencies arise. A systematic derivation of these is presented in Appendix D and yields the following expression: (2.14) where This expression shares the parameter P shot with the power spectrum, but includes an additional 12 nuisance coefficients: {{S n }, {A shot,n }, B shot }. P shot is defined as a constant rescaling of the stochastic power spectrum [80], Note that in the absence of projection effects the counterterms A shot,1 and A shot,2 are fully degenerate. Therefore, for the purposes of this study we will set A shot,2 = 0.
Infrared Resummation
An additional complication arises from the effects of long-wavelength displacements, which can be consistently treated using "infrared resummation". A rigorous derivation of this was presented in [86,87] in the context of time-sliced perturbation theory [115], and, at tree-level order, can be implemented by replacing the linear power spectrum P L with its IR-resummed equivalent, i.e. 3 where P w and P nw are the wiggly and smooth parts of the power spectrum respectively. This has the effect of damping the oscillatory component by a k-and µ-dependent factor. The damping scales are given in terms of the broadband power spectrum as where r BAO is the sound-horizon scale and k S ∼ 0.1h Mpc −1 . At one-loop order, the IR-resummed bispectrum can be written schematically as where B[P ] indicates that the bispectrum should be evaluated using the power spectrum P and we have dropped the counterterms and stochasticity [86]. In this case, the loop corrections become more complex, since the damping factor, Σ 2 tot is a function of the redshift-space angles µ. To allow for efficient computation via the FFTLog procedure (Appendix B), we here adopt the isotropic approximation for the one-loop terms, dropping any µ-dependence in Σ 2 tot inside the integral. This is expected to be a good approximation in practice, and is exact for the real-space case. Note that we keep the full redshiftdependent damping function in the tree-level expressions, i.e. the isotropic templates are used only for the computations of the one-loop corrections.
Loop Integrals
We now discuss how to compute the one-loop bispectrum. The most difficult part of this is evaluating the loop integrals appearing in (2.9): in this work, these are computed via the FFTLog procedure [99], the subtleties of which are described in Appendix B. In essence, the real-space computation proceeds by first writing the integration kernels (products of Z n ) as polynomials in k 2 i , q 2 , and |k i ± q| 2 (or their reciprocals). By expanding the linear (or IRresummed) power spectrum as a sum over complex polynomials, i.e. P L (k) ∼ c m k ν+iηm for frequency η m and FFTLog 'bias' ν, the various terms in (2.9) take the form (using B 222 as an example) for some complex ν i . The integral can be evaluated using techniques borrowed from quantum field theory, and reduces the calculation to a tensor multiplication, noting that all cosmological information is encoded within c m . In redshift space, the appearance of angles,q ·n inside the integral make this more challenging; however, it can be evaluated using similar tricks to the one-loop power spectrum [cf. 112]), as discussed in Appendix B. Following the above tricks, the bispectrum takes the following schematic form, again taking B 222 as an example: where the i index runs over all combinations of bias and f (z), denoted θ i . 4 Additionally, we have expanded in terms of the redshift-space angles {µ, χ ≡ 1 − µ 2 cos φ} (of which there are 47 non-trivial combinations); these are related to the µ i angles via: We adopt this basis rather than the more familiar choice of {µ 1 , µ 2 }, since it avoids pathologies for flattened triangles (whence k 1 ≈ k 2 + k 3 , and µ 1 ≈ −µ 2 ). The underlying shapes, B (i,j,k) , appearing in (3.2) are independent of both redshift-space angles and bias parameters, and depend only on the form of the linear power spectrum, k 1 , x = k 2 3 /k 2 1 and y = k 2 2 /k 2 1 , assuming k 1 ≥ k 2 ≥ k 3 . Two options arise for using the bispectrum templates B (i,j,k) in Monte Carlo Markov Chain (MCMC) analyses: (a) they may be computed once for a fixed linear power spectrum, (b) they may be computed as a tensor multiplication (cf. 3.1) at each step in the MCMC chain, feeding in the relevant linear power spectrum (and thus c m coefficients) each iteration. Whilst (b) is the approach usually adopted for the one-loop power spectrum, we will here adopt (a) for the one-loop bispectrum. This has the effect of fixing cosmology in the bispectrum loops (except for σ 8 , which acts as a global rescaling, modulo a small effect concerning the IR resummation amplitude, which we ignore in this work), and is chosen on computational grounds, since the size of the necessary FFTLog matrices becomes very large. 5 Explicitly, we compute the bispectrum templates, B (i,j,k) , for a grid of values of {x, y, k 1 } (treating flattened triangles with √ x + √ y = 1 separately to avoid divergences), then use these to construct a three-dimensional linear interpolator for each shape. The resulting bispectra have been compared to results from explicit (and computationally intensive) numerical integration for a range of values of bias and triangle shapes and found to be in excellent agreement. Full details of the above steps are given in Appendix B.3. We additionally publicly release all our analysis code: this can be found at GitHub.com/OliverPhilcox/OneLoopBispectrum.
Bin Integration
To robustly compare theory and data, we must integrate the model across some set of bins. Following [80], this is achieved via the integral: where B 0 is the bispectrum monopole of (2.21) and N 123 = 8πk 1 k 2 k 3 (∆k) 3 V 2 /(2π) 6 for bin center (k 1 , k 2 , k 3 ) and width ∆k. As in [80], this is strictly exact only in the narrow-bin limit, and can be corrected by "discreteness weights" as in the former work. In practice, we compute the set of bispectrum templates B (i,j,k) (k 1 , x, y) for a range of values of k 1 , x, y (see Appendix B.3) then perform the bin-averaging by linearly interpolating these values, dropping any triangles that do not satisfy the triangle conditions The integration is performed using Gauss-Legendre quadrature, as for the angular integrals. Finally, we note that we can perform bin integration either within the MCMC chains or as a pre-processing step (allowing us to use bin-averaged templates in the later analysis). We use the latter option for the purposes of this paper. 5 To see this, note that the matrix in (3.1) has size N 3 freq , for N freq FFTLog frequencies. Taking N freq = 64, with 47 angular combinations, O(50) bias parameter combinations, and computing the matrix for 10 choices of each of x and y (noting that k scales out), we find ∼ 5 × 10 10 elements, or ∼ 50 GB in (complex) single precision.
Free Parameters
Our full model for the one-loop galaxy power spectrum and bispectrum depends on the following 44 free parameters (i.e. Wilson coefficients): where parameters appearing only in the power spectrum (following the definitions of [80]), only in the bispectrum, and in both spectra, are shown in blue, black and purple respectively. The three lines give bias parameters, UV counterterms, and stochasticity parameters respectively. Note that here we switch to the power spectrum biases b G 2 and b Γ 3 instead of γ 2 and γ 21 to ease the comparison with previous works [80,116]; these are related via (2.5). Whilst performing an MCMC analysis in this high-dimensional space may seem a formidable task, we note that all parameters except {b 1 , b 2 , b G 2 } enter the theory model linearly, and can thus be analytically marginalized, following [117]. This is exact, and will be applied to all analyses presented in this work, significantly reducing computational cost. Since the parameter b Γ 3 is of physical interest in power spectrum analyses, we opt to marginalize over this explicitly, alongside the quadratic biases. For the purposes of the analytic marginalization, we assume the following priors on the bispectrum nuisance parameters: all means are zeros, and the expectation values given by 10 for all bias parameters, 10 for all real-space counterterms and one-loop stochastic contributions, 20 for redshift-space counterterms (in order to account for enhancements caused by short-scale non-linear redshift-space distortions, known as fingers-of-God [118]), and 20 for redshift-space one-loop stochastic contributions. For the tree-level stochastic counterterms, following [80] we assume standard deviations of 5 for the dimensionless B stoch , P shot , and A shot parameters. The power spectrum nuisance priors match [80,116]. Note that our nuisance parameters are normalized in such a way that their physical values are expected to be O(1) numbers from the naturalness arguments. In this sense our physically-motivated choice of nuisance parameter is conservative, as we allow them to be as large as O(10).
Numerical Results
Before proceeding to use the one-loop bispectra to perform parameter inference, we first consider the form of the spectra themselves. Plotting the bispectrum is a challenge itself: the monopole exists in the three-dimensional simplex of {k 1 , k 2 , k 3 }, and we have contributions from a wide variety of nuisance parameter combinations. For the purpose of visualization, we will fix the bias parameters to simple local-in-Lagrangian space predictions, based on [119]: assuming the bias to be described only by linear and quadratic terms In Fig. 1 we plot the deterministic (Eulerian PT) bispectrum contributions assuming the above bias relations with b L 1 = 1, b L 2 = 0.3 and f (z) = 0.7, as well as distortion parameters α = α ⊥ = 1 and the best-fit PT Challenge input power spectrum (cf. §4). For both equilateral and squeezed triangles we observe a similar form: the one-loop corrections are suppressed on large scales (by k/k NL ) but become large as k increases, with the B I 321 piece exceeding tree-level theory by k ∼ 0.1h Mpc −1 . We find significant cancellation between the various one-loop components (which all depend on the same biases), which is expected from the IR cancellation of loop integrals. Note that the high-k behavior is further modified by the counterterms (scaling as k 2 P 2 L (k)) and stochasticity (scaling as k 0 and P L (k) at leading-order). The individual shapes of the bispectrum components are generally non-trivial, with oscillatory signatures seen in B 411 and, to a lesser extent, B I,II 321 . The smooth nature of B 222 (expected since the three power spectra are all inside the q integral) implies that a smaller number of FFTLog frequencies can likely be used in its computation, which may expedite the template computation, and suggests that this has only weak cosmology dependence. From the deterministic contributions alone, it is clear that the one-loop bispectrum is a significant fraction of B tree for all k 0.1h Mpc −1 , and thus its inclusion is necessary if we wish to model the bispectrum beyond the softest modes.
Data and Analysis Details
The dataset used in this paper is the PT Challenge suite [100], comprising high-resolution N -body simulations at z = 0.61 with a total volume of 566 h −3 Gpc 3 . Galaxies are allocated via a BOSS-like halo occupation prescription, and various summary statistics computed using a fiducial cosmology with Ω m = 0.3. In all our analyses, we use the redshift-space power spectrum multipoles, P (k), and the real-space power spectrum proxy Q 0 , both of which were studied in detail in [116]. In this work, we additionally add the bispectra in both real-and redshift-space, with the comparison allowing us to assess the relative importance of redshift-space distortions in the one-loop bispectrum.
The relevant bispectra are computed as described in [80], which studied the tree-level bispectrum likelihood. We bin the bispectrum data in wavenumber bins of width ∆k = 0.01h Mpc −1 , and use only triangles whose bin centers satisfy momentum conservation. 6 For k max = 0.15, 0.2, 0.3, we find a total of 372, 825, and 2600 independent triangle configurations, respectively, and note that, unlike [80], we do not include the very first bin in the analysis, i.e. we fix k min = 0.01h Mpc −1 for the bispectrum. This matches Dashed lines indicate negative contributions, and we show results for two types of triangle: equilateral, with k 1 = k 2 = k 3 , and squeezed, with k 2 = 0.9k 1 and k 3 = 0.2k 1 . For illustration, we assume coevolution biases following [119], with Lagrangian biases b L 1 = 1, b L 2 = 0.3 and a growth factor f (z) = 0.7. We do not include the contributions from stochasticity or counterterms in this plot, but note that all bias operators have been renormalized. the analyses of the actual surveys like BOSS, where the very first bin is often affected by systematics including stellar contamination [2,15].
Our theory model for the power spectrum matches that of [80,116], and we make use of the publicly available code class-pt [112] to compute the power spectrum models. 7 Similarly, our theoretical model for the bispectrum is discussed in detail in Section 2, and implemented using the FFTLog prescription using Mathematica -we refer the reader to Appendix B for technical details.
An important part of the likelihoods are the covariance matrices, encoding both errors and correlations. As in previous works, we here adopt the Gaussian tree-level approximation for the analytic covariance matrices of power spectra and bispectra, neglecting any cross-correlation between the two statistics. For sufficiently large scales these assumptions are well justified [24,63,65,80]; at smaller-scales, and in the presence of non-uniform survey geometry, a mock-based approach will probably be needed, such as in [15], most likely in combination with some compression scheme [e.g., 117].
The mock galaxy clustering data from the PT Challenge simulations are analyzed within the Bayesian framework. Here, we perform a global MCMC analysis using the publicly available sampler Montepython [120,121] varying the clustering amplitude σ 8 , f equil NL (the amplitude of equilateral primordial non-Gaussianity [96]) and the EFT nuisance parameters. Since the true value of σ 8 in the simulations remains blinded, we will show results only for the fractional error on σ 8 . As noted above, we will marginalize over all physical nuisance parameters given in §3. 3. This is in contrast with some bispectrum studies that aim to fix certain nuisance parameters, such as asserting coevolution relations for Lagrangian biases [81]. Indeed, for some particular purposes, i.e. fits of σ 8 , it may be sufficient to keep fewer parameters in the fit. However, such approximations are unwarrantedtheir validity can break down for other types of analyses. Therefore, we prefer to explicitly vary all physical nuisance parameters in the fit. By virtue of analytic marginalization [117], this is done at no computational cost.
Results: Real-space
We now present results from the above analyses, focusing first on the combination of the redshift-space power spectrum and real-space bispectrum. Though not quite matching observational setups (where the power spectrum and bispectrum are both observed in redshift-space), this analysis will allow us to understand the impact of redshift-space distortions.
To obtain the real space model, we set f = 0 in all calculations and retain only EFT operators that do not depend on the LoS angles, giving a one-loop model fully equivalent to that used in [79,81]. Our data vector contains the power spectrum multipoles, the real-space analog and the bispectrum monopole, i.e. [P (k), Q 0 (k), B(k 1 , k 2 , k 3 )] and we restrict to the z = 0.61 snapshot of the PT Challenge simulations. In most analyses we use P (k) up to k max = 0.16h Mpc −1 , and Q 0 in the range 0.16h Mpc −1 ≤ k < 0.40h Mpc −1 , as validated in [80,100,116]. We explore the impact of varying the bispectrum k max below.
Clustering amplitude and bias parameters
We first focus on measuring the mass clustering amplitude σ 8 and leading galaxy bias These appear both in the one-loop power spectrum and bispectrum models, and hence can be tightly constrained by the data. Unlike σ 8 , the true values of bias parameters in the simulations are unknown. As such, we take their best-fit values at a certain k max (where the one-loop model can be trusted) as a proxy for their true values. This k max is measured as in [100] (see also [113]) by determining at what scale cut posteriors for at least one parameter become biased w.r.t. analyses with lower k max . We take best-fit values of bias parameters at the last stable k max as ground truth, p true . Following this, our parameter measurements are quoted as ∆p = p − p true , to avoid unblinding the results.
We fit the real-space bispectrum data for the following choices of scale-cut: Posterior distributions for the clustering amplitude, σ 8 , and certain nuisance parameters extracted from MCMC analyses of the power spectrum multipoles and the oneloop real-space bispectrum. The power spectrum likelihood is the same for all cases, whilst we vary k B max for the bispectrum, as indicated in the caption. Corresponding marginalized parameter contours for k max = 0.21h Mpc −1 are given in Tab. 1. we find that the posterior on σ 8 remains unbiased up to k B max = 0.21h Mpc −1 , with a shift of 1 − 2% observed for k B max = 0.23h Mpc −1 and k B max = 0.25h Mpc −1 , which becomes significant relative to the PT Challenge error-bars.
However, at k B max ≥ 0.23h Mpc −1 , we observe that the nuisance parameters become biased w.r.t. measurements at low scale cuts, for example, we find a visible tension between the b 2 posterior at k B max = 0.23h Mpc −1 and k B max = 0.19h Mpc −1 . In addition, we see that parameters the b G 2 and b Γ 3 become biased. The optimal values of these parameters scale along the degeneracy direction b G 2 + 0.34b Γ 3 ≈ const, which closely matches the degeneracy combination imposed by the power spectrum, b G 2 + 0.4b Γ 3 [11].
In contrast with the k B max ≥ 0.21 h Mpc −1 picture, the results at all choices of k B max ≤ 0.21h Mpc −1 are fully consistent, implying that k B max = 0.21h Mpc −1 should be chosen as a baseline scale cut. This is somewhat larger than the one-loop power spectrum scale-cut of k max = 0.16h Mpc −1 ; whilst this might appear unusual, we note that the power spectrum contains significantly higher signal-to-noise, and is subject to redshift-space complexities, both of which decrease k P max . We use best-fit values of nuisance parameters from the baseline P + Q 0 + B(k B max = 0.21h Mpc −1 ) analysis as ground truth values in the below. P Parameter 68% limits 0.005 ± 0.050 Table 1. One-dimensional marginalized contraints on low-order bias parameters and the clustering amplitude σ 8 extracted from the PT Challenge data-set. We display results obtained using only the power spectrum multipoles P (left panel, cf. [100]), and those including the power spectrum, Q 0 and the one-loop real-space bispectrum likelihood with k B max = 0.21h Mpc −1 (right panel). The one-loop bispectrum is the main new feature of this work. Most parameters are normalized to their true values, to avoid unblinding the simulation. In real-space, the addition of the bispectrum significantly tightens posteriors on bias parameters (by at least an order of magnitude), and gives ≈ 20% improvement on σ 8 . Further details are given in the main text, with corresponding results for the redshift-space bispectrum shown in Tab. 2.
It is instructive to compare the parameter constraints extracted using the one-loop bispectrum to those from the power spectrum multipoles alone, i.e. P (k) at the baseline k max = 0.16h Mpc −1 . These are shown in the left panel of Tab. 1. We find an improvement of 31% in σ 8 , whilst the error-bars on bias parameters tighten by an order of magnitude in some cases. Despite the noticeable increase in signal-to-noise of the data-set, we find a modest improvement in σ 8 : this is linked to the proliferation of bias, counterterm, and stochasticity parameters needed to describe the one-loop bispectrum in an unbiased manner.
It is also useful to compare our one-loop bispectrum results with those from the treelevel bispectrum. We cannot directly use results from [80] since the former work also varied other cosmological parameters such as H 0 and Ω m . To obtain a cleaner comparison, we repeat the tree-level analysis of [80] with the same analysis settings as here, using k B max = 0.08h Mpc −1 for the tree-level bispectrum. We find ∆σ 8 /σ 8 = 0.002 ± 0.0053, i.e. an 21% improvement over the power spectrum only result. Comparing this the present analysis, we see that the addition of the one-loop bispectrum likelihood yields an extra 10% improvement over the tree-level bispectrum likelihood. Finally, we compare our results with those of [81]. Unlike our work, [81] used the power spectrum of halos and galaxies in real space, leading to the notorious b 1 − σ 8 degeneracy being largely unbroken. This explains why our results on σ 8 are much better -most of the constraining power comes from redshift-space distortions omitted in [81]. Despite this difference, our analysis does confirm a general trend pointed out in [81] -the returns from the one-loop bispectrum are limited by the large number of nuisance parameters. As such, it will be important to obtain better priors on them in the future, for example using hydrodynamical simulations.
Primordial non-Gaussianity
It is interesting to study to what extent the one-loop bispectrum model can help improve constraints on primordial non-Gaussianity (PNG), following constraints from the tree-level bispectrum in [96,97]. We consider here the case of equilateral PNG, which induces the following three-point correlation of the linear density field (see [96] for further details), where ζ is the primordial curvature fluctuation with dimensionless amplitude ∆ ζ , and we have introduced the transfer function T (k) = (P 11 (k)/P ζ (k)) 1/2 . Non-Gaussianity in the initial conditions generates three main effects [96]: (1) an additional contribution B 111 to the tree-level bispectrum, (2) an extra one-loop power spectrum correction P 12 , and (3) further contributions in the galaxy bias expansion, which modifies the tree-level expressions by introducing the so-called "scale-dependent" bias. The latter stems from the following expression, where "..." denote non-linear PNG corrections which can be ignored for the purposes of this paper. The term ∇ 2 ζ generates tree-level "scale-dependent" bias corrections to the power spectrum. Note that these corrections are suppressed in the equilateral case w.r.t. the case of local PNG, where the scale-dependent bias is a leading effect on the galaxy power spectrum, and thus the power spectrum dominates the constraining power on f loc NL . As shown in [96], for the one-loop power spectrum and tree-level bispectrum, we must include all three of the P 12 , B 111 and b ζ -related terms in our model. In this paper, we consider the one-loop bispectrum, which technically requires additional non-linear f equil NL corrections to the galaxy bispectrum, such as B 113 [110,122]. However, given that f eq NL ∆ ζ is small, these will be suppressed, thus we leave their systematic calculation for future work, focussing only on the leading terms, similar to [96,98]. For f equil NL 500, the next-to-leading order contributions are subdominant to two-loop matter corrections for k 0.2h Mpc −1 .
Concerning scale-cuts, we find that use of the P and Q 0 statistics at high k max can lead to biases in the recovered values of f equil NL . This is consistent with the estimates of [96], which showed that the two-loop corrections can actually be larger than the non-Gaussian P 12 contribution at small scales. Thus, we choose k max = 0.2h Mpc −1 for the Q 0 statistics and k max = 0.14h Mpc −1 for P in the PNG analysis of this section. For B real we use the baseline data cut k B max = 0.21h Mpc −1 , motivated by the discussion above. To perform the analyses including PNG, we fit the parameter f equil NL in addition to σ 8 and nuisance parameters. Since the PT challenge simulations were run using purely Gaussian initial conditions, we expect to find f equil NL consistent with zero. Indeed, our nominal constraint on the amplitude of the equilateral shape is given by We stress that these results are obtained without any external priors on σ 8 or the non-linear bias coefficients. This constraint can be compared with that obtained from the tree-level real-space bispectrum likelihood, At face value, our results imply that the addition of the one-loop bispectrum can improve constraints on f equil NL by ∼ 30%.
Results: Redshift-Space
In this section we present the analysis of the data-vector [P (k), Q 0 (k), B 0 (k 1 , k 2 , k 3 )], where all statistics are in redshift space and include projection and coordinate-distortion effects. This set-up thus fully matches an analysis of a realistic galaxy survey such as BOSS [2]. As for the power spectrum, we expect that the addition of redshift-space distortions (particularly the fingers-of-God effect [118], hereafter FoG), will reduce the non-linear scale, thus it is likely that k B max , and the constraining power of the bispectrum monopoole, will decrease.
Clustering amplitude and bias parameters
Let us discuss the recovery of the mass clustering amplitude σ 8 and bias parameters. As a point of comparison, we fix the fiducial bias parameters to those extracted from the real space analysis before with k B max = 0.21h Mpc −1 . We fit the redshift-space bispectrum data for the following choices of scale-cut: P +Q 0 +B 0 (k B max = 0.15 hMpc 1 ) P +Q 0 +B 0 (k B max = 0.17 hMpc 1 ) P +Q 0 +B 0 (k B max = 0.20 hMpc 1 ) P +Q 0 +B 0 (k B max = 0.22 hMpc 1 ) P +Q 0 +B real (k B max = 0.21 hMpc 1 ) Figure 3. Posterior distributions of the clustering amplitude and certain nuisance parameters obtained from MCMC analyses of the power spectra and one-loop redshift-space bispectrum monopole B 0 . The power spectrum likelihood is the same for all cases. We show results for different values of the bispectrum data cut k B max , as indicated by the caption. This is analogous to Fig. 2 (whose optimal constraint is shown by the purple curve), but utilizes the redshift-space bispectrum. Corresponding marginalized parameter constraints with k B max = 0.15h Mpc −1 are given in Tab. 2.
the bispectrum data is not sufficient to constrain all the nuisance parameters entering the theory model. This gives rise to significant marginalization projection effects, which can be naïvely interpreted as a bias in our model. We study these effects in Section E and show that the measurements at k B max < 0.15h Mpc −1 are consistent with our baseline choice k B max = 0.15h Mpc −1 once projection effects are taken into account.
0.09 ± 0.10 Table 2. One-dimensional marginalized constraints on low-order bias parameters and the clustering amplitude σ 8 extracted from the PT Challenge data-set. We show the fit from the combined likelihood including power spectrum multipoles, Q 0 , the tree-level redshift-space bispectrum at k B max = 0.06h Mpc −1 (left table) and the one-loop redshift-space bispectrum at k B max = 0.15h Mpc −1 (right table). The parameters are normalized relative to their true values. Whilst we find significant enhancements in the bias parameter constraints compared to the power spectrum alone (cf. Tab. 1), the constraint on σ 8 does not improve appreciably.
At k B max = 0.17h Mpc −1 the clustering amplitude σ 8 , the rescaled linear bias b 1 σ 8 , b 2 and b Γ 3 become biased w.r.t. their optimal values coming from the real-space bispectrum analysis. These biases are accompanied with a significant increase in the χ 2 statistics. Thus, we conclude that the two-loop bispectrum corrections are not negligible at this scale. This is further supported by the bias growing with k B max : in particular, at k B max = 0.22h Mpc −1 the bias on σ 8 reaches 2%, which is significant in the context of the PT Challenge simulation volume. In conclusion, we find that the one-loop galaxy bispectrum model in redshift space works well up to k B max = 0.15h Mpc −1 for the precision that corresponds to the total volume of the PT Challenge simulation (which we recall is significantly larger than current and forthcoming datasets). The optimal values of cosmological and bias parameters for this choice are presented in Tab. 2. For comparison, we also show the results from the tree-level bispectrum analysis akin to [80]. 8 That k B max is lower in redshift-space than real-space is no surprise: this indicates that the characteristic scale of FoG effects (σ FoG ) is smaller than that of non-linearities (k −1 NL ). For the power spectrum in redshift-space, higher-order counterterms were important to model FoG, scaling as k 4 z . An analogous set of nuisance parameters may be included here, but we caution that their number is large due to the higher dimensionality of the bispectrum.
Considering the marginalized posteriors directly, we find that the one-loop bispectrum likelihood (at k B max = 0.15h Mpc −1 ) yields only a 12% improvement on σ 8 compared to the power spectrum alone, though the constraints on bias parameters (and thus astrophysics) improve markedly. Comparing this with the tree-level case, we see that the inclusion of oneloop corrections actually lead to a somewhat worse result than for tree-level bispectrum, Posterior distributions of the clustering amplitude and low-order nuisance parameters from MCMC analyses of the power spectrum data (P + Q 0 , in green), and the combination of the power spectrum and redshift-space bispectrum monopole (P +Q 0 +B 0 ), using tree-level (blue, from [80]) and one-loop (red) theory. The covariance is rescaled to match the volume of the BOSS survey, and we assume k max = 0.2 h −1 Mpc for both the one-loop power spectrum and bispectrum.
which tightens the σ 8 constraint by 18% for the analysis settings adopted in this work. This is consistent with previous studies considering the real-space bispectrum [81], and arises primarily due to the large number of nuisance parameters appearing in the one-loop calculation, especially in redshift-space. A similar situation takes place in the context of the one-loop redshift-space power spectrum, whose information content is limited by marginalization over nuisance parameters [24]. We will discuss this issue in detail later.
Implications for the BOSS survey
In this section we estimate the potential performance of the one-loop bispectrum model applied to the BOSS survey data [2], which is the largest publicly-available spectroscopic galaxy clustering dataset. This survey has significantly smaller volume than our mock simulation data, so one can expect that the analysis can be pushed to smaller scales [16,80]. Indeed, the relevant parameter in this problem is the ratio of the theory systematic bias in a certain parameter to the statistical error on that parameter. For the BOSS volume the statistical errors are significantly larger than the PT Challenge simulation volume, due to a ratio of volumes of ≈ 100. As the theoretical errors do not depend on the volume, the ratio between the theoretical error and statistical errors thus becomes smaller, and hence any residual theoretical systematics becomes less sizable in relative terms.
To demonstrate this, we repeat the likelihood analysis above for the redshift-space data vector [P , Q 0 , B 0 ], but rescale the covariance to match the BOSS volume V BOSS = V PT Challenge /100 6 (h −1 Gpc) 3 . We select k max = 0.20h Mpc −1 for power spectrum multipoles P and k B max = 0.20h Mpc −1 for the bispectrum monopole B 0 ; significantly larger than that found in §6. The results of this analysis are presented in Fig. 4 Table 3. One-dimensional marginalized constraints on low-order bias parameters and the clustering amplitude σ 8 extracted from the PT Challenge dataset, with covariance adjusted to match the volume of the BOSS survey. We show results for the P + Q 0 only analysis (top left), the tree-level P + Q 0 + B 0 likelihood (top right), and the one-loop P + Q 0 + B 0 likelihood (bottom). The inclusion of the bispectrum sharpens constraints on σ 8 by ≈ 24%, with some ≈ 10% improvement arising from the addition of the one-loop contributions.
We see that in the context of BOSS, the addition of the one-loop bispectrum yields an ≈ 24% improvement over the power spectrum-only result and an ≈ 10% improvement over the tree-level bispectrum likelihood result. However, this leads to a noticeable shift in nuisance parameters, with b G 2 approximately 1.7σ from its fiducial value. This could simply be a prior-volume effect however (since the effect of the priors becomes more important at lower simulation volume), especially given that b G 2 departs from its fiducial value at 1.1σ already for the power-spectrum alone. The tree-level bispectrum analysis, however, results in an unbiased recovery of all nuisance and cosmological parameters.
It is also instructive to study whether the one-loop bispectrum can improve constraints on equilateral PNG. Incorporating this parameter in the analysis as before (varying both f equil NL and σ 8 ) we find f equil NL = 197 ± 350. For the clustering amplitude we find ∆σ 8 /σ 8 = −0.026±0.035, with a slight 0.6σ shift w.r.t. the ground truth. For comparison, we have also run an analysis using the tree-level bispectrum at k B max = 0.08h Mpc −1 instead of the oneloop bispectrum, and found f equil NL = 420±440, ∆σ 8 /σ 8 = −0.025±0.040. First, we see some bias in f equil NL , which can be attributed to prior volume effects and somewhat more optimistic data cuts for the power spectrum that we use in our analysis here. Indeed, in [96] it was shown that the tree-level model yields unbiased results on f equil NL for k B max = 0.08h Mpc −1 and k P max = 0.17h Mpc −1 . Second, we notice that the bound on f equil NL in the one-loop case is 30% better than that of the tree-level analysis. The improvement is quite modest as a consequence of the fact that the one-loop model introduces many nuisance parameters, which cannot be constrained by the data. In our analysis we use highly conservative but still physically-motivated priors; if more aggressive priors on nuisance parameters are used, the constraints are likely to improve further.
In conclusion, we note that the addition of the one-loop bispectrum may yield some ≈ 30% improvement on the amplitude of equilateral PNG. We stress, however, that this comes with two important caveats. First, the k B max used for this study (0.2h Mpc −1 ) results in noticeable biases on the nuisance parameters, suggesting that the errorbar on f equil NL may be underestimated due to over-fitting. Whether this induces a bias on f equil NL is unclear; such an error would likely show up only in the analysis of simulations containing f equil NL = 0. Secondly, we have neglected the PNG-induced one-loop corrections to the bispectrum (as in [98]), which can be marginally important for the scales of interest (particularly in the tails of the f equil NL posterior), as can be easily estimated with the scaling universe approximation outlined in [122].
Finally, we note that our analysis indicates a more modest improvement on f equil NL than that reported in [98]. [98] suggest that the one-loop bispectrum improves f equil NL constraints over the tree-level result by a factor of few. This follows from a comparison with the treelevel bispectrum analysis of [96]. This comparison is misleading, however, since the baseline analysis of [96] varies cosmology whilst [98] always keeps cosmological parameters fixed. We have checked that this accounts for most of the difference between [96] and [98]. A more detailed comparison with [98] is not currently possible because the former work has not yet presented sufficient details about their analysis and theory model. It will be interesting to compare our results with their analysis in the future.
Conclusions and Discussion
In this work, we have presented and validated a complete calculation of the galaxy bispectrum monopole in redshift space at one-loop order in effective field theory. Our model includes one-loop corrections due to mode coupling, as well as the full set of EFT counterterms that are needed to regulate the UV behavior of loop integrals and capture the physical effects of backreaction of short scales onto the large-scale modes. Furthermore, we incorporate a bias expansion up to fourth order (noting that many operators van-ish after renormalization of the power spectrum and bispectrum) as well as fourth order redshift-space distortions. In addition, our calculation includes IR resummation to capture the non-linear evolution of baryon acoustic oscillations (both for the power spectrum and bispectrum), as well as projection and binning effects. In short, we include all relevant ingredients needed to compare theory with observational galaxy clustering data.
We have studied the performance of the one-loop bispectrum model in terms of cosmological parameter constraints, focusing primarily on the mass fluctuation amplitude σ 8 . To validate our model we use the PT Challenge simulation suite [100], which are equivalent to a BOSS-like survey with a hundred times larger volume, thus allowing for high-precision tests. We analyze a data vector that consists of the standard redshift-space power spectrum multipoles, the real-space power spectrum proxy Q 0 [116], and the redshift-space bispectrum monopole. In this setup, we have found that the inclusion of the one-loop corrections allows us to extend the agreement between bispectrum theory and data up to k max = 0.15h Mpc −1 , or k max = 0.21h Mpc −1 in real-space. This can be contrasted with the tree-level model bispectrum model, which works only up to k max = 0.08h Mpc −1 [80]. We caution that these scale-cuts depend on both the survey volume and galaxy type: for BOSS, we can use k max = 0.20h Mpc −1 , and it is likely that the wavenumber reach is larger for emission line galaxies, which boast smaller FoG effects [12,64]. Further, one might hope to extend the k-reach by specializing to some real-space bispectrum analog (similar to Q 0 ) at high-k: this will be considered in future work.
Despite a significant extension of the k-space reach, we have not found the bispectrum to lead to noticeable improvements in the σ 8 constraints compared to those with obtained from tree-level theory when applied to the PT Challenge simulations. This is a consequence of the large number of the EFT nuisance parameters that appear in the one-loop calculation (particularly in redshift-space), and must be marginalized over in our analysis. For a BOSSvolume survey (and accompanying systematic error thresholds), we find greater utility, with the one-loop bispectrum improving constraints by ∼ 10% over the tree-level case, though it remains to be seen whether any accompanying shifts in nuisance parameters are real (and malignant) or just prior volume effects.
We have additionally studied whether the one-loop bispectrum can help constrain equilateral primordial non-Gaussianity (and thus single-field inflation), finding that, for the BOSS survey, the one-loop bispectrum may improve constraints on the non-Gaussianity parameter f equil NL by ≈ 30% compared to the tree-level theory. Achieving this, however, requires pushing the bispectrum analysis to k B max = 0.2h Mpc −1 , where the shifts in the bias parameters become evident. It remains to be seen if this problem can be alleviated with better priors on nuisance parameters or with one-loop PNG-induced corrections to the bispectrum, which were omitted in this study. If one is interested in astrophysics, the one-loop bispectrum is much more useful: we find a significant tightening in the posteriors of parameters such as linear and tidal bias compared to those with only tree-level theory.
An important conclusion from our study is that we need better knowledge of the EFT nuisance parameters if we wish to extract more cosmological information from the bispectrum. This can be done in several ways. First, one can include data from the higher-order angular moments of the redshift-space bispectrum [37]. Since these moments depend on the same set of parameters, their inclusion should tighten the EFT nuisance parameters posteriors, aiding determination of the cosmological parameters of interest. Second, one can constrain the EFT nuisance parameters with higher order statistics, such as the trispectrum, see e.g. [102,123] for work in this direction. Finally, one can obtain better priors on the extra nuisance parameters using high fidelity N-body or hydrodynamical simulations. A powerful route by which to acheive this involves EFT field level techniques, see e.g. [85,[124][125][126][127][128][129][130][131][132]. We plan to investigate these options in future work.
Though the one-loop bispectrum analysis of this work was limited only to two cosmological parameters, the mass fluctuation amplitude and the equilateral non-Gaussianity parameter, it may be similarly extended to other parameters such as local primordial non-Gaussianity, or the neutrino mass. The improvement on other parameters, especially those beyond the minimal ΛCDM model, could be significantly larger, particularly when some new feature is introduced that is not degenerate with the smooth loop corrections. If the parameter of interest enters the theoretical model linearly, the analysis can proceed as above; if this is not the case, one would require an optimization of our one-loop bispectrum pipeline, since the FFTLog-based approach does not currently allow for a fast re-calculation of the theoretical template as the power spectrum is varied. If only the α , α ⊥ parameters are varied however, the templates do not need to be recomputed, only rebinned (via 2.21). Analyses including such effects will be natural next steps in our research program.
Finally, we note that the bispectrum data offers novel probes of new physics. In particular, constructing the bispectrum from different tracers will allow one to probe the equivalence principle [53][54][55][56][57][58][59]. Such an analysis is complicated if one considers only the power spectrum since the effects sensitive to the equivalence principle appear there only at the one-loop order. In contrast, the cross-bispectrum of different kinds of tracers can be a sensible probe of the equivalence principle, whose violation would generate new bispectrum shapes that are not present in the ΛCDM model. This, in particular, will help one derive new constraints on the violation of Lorentz symmetry in the dark matter sector [133,134]. We leave this and other tests of new physics with the bispectrum for future work.
primarily performed on the Helios cluster at the Institute for Advanced Study, Princeton, and additional computations were carried out using the Princeton Research Computing resources at Princeton University, which is a consortium of groups led by the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology's Research Computing Division.
A.2 Redshift-Space
The redshift-space kernels are obtained by expanding the RSD mapping of (2.7) and expanding all fields in terms of the linear density field. Following a lengthy computation, we find the following forms in terms of the real-space kernels: writing µ i···j ≡ µ q i +···+q j and dropping the argument of K 1 for clarity.
B Computation of the One-Loop Bispectrum with FFTLog
In this appendix, we discuss practical computation of the loop integrals given in (2.9). Before considering the redshift-space case, we will first examine how to compute the realspace integrals, which follow a similar logic, but are significantly simpler. Our approach follows [99], but is extended to the case of biased tracers and redshift-space.
B.1.1 Formalism
As noted in §2, the first step in the bispectrum computation is the expansion of the perturbation theory kernels (Appendix A) as polynomials in q 2 , |k 1 − q| 2 and |k 2 + q| 2 or their reciprocals (utilizing permutation symmetries). In practice, this results in a sum over many thousands of terms, once the relevant symmetries have been imposed, and is automated using mathematica. For B 222 , each term is proportional to q P L (q)P L (|k 1 − q|)P L (|k 2 + q|) q α 1 |k 1 − q| α 2 |k 2 + q| α 3 , (B.1) for integer α i , with a similar form found for the other loop integrals except with fewer factors of P L . Expanding the linear power spectrum as a sum of complex polynomials in k, i.e. P L (k) = m c m k ν+iηm for frequencies η m , coefficients c m , and (real) FFTLog bias ν (which sets the eventual integral convergence properties), we can rewrite (B.1) in the form for 2ν j = α j −ν −jη m i , where all the cosmology dependence (encoded in c m ) is now outside the integral. The remaining integral can be computed using path integral methods as where x 2 = k 3 /k 1 , y 2 = k 2 /k 1 and J (with complex arguments ν i ) can be expressed as a sum of hypergeometric functions and Gamma functions [99]. This reduces the computation of bispectrum templates to a set of matrix multiplications and function evaluations, as noted in §3. For B I 321 we find a similar form to (3.1), except with rank-two matrices, whilst B II 321 and B 411 involve only a one-dimensional sum (and one set of c m i coefficients).
B.1.2 Limiting Behavior
When computing spectra via FFTLog, it is important to verify whether the relevant loop integrals actually converge. This is achieved by taking the UV and IR limits of the integration kernels and assessing the dependence on the ν i parameters appearing in (B.2).
As an example, we consider the contribution of three δ operators to B 222 (involving three copies of b 1 F 2 (k − q, q)). This has the following limits in the equilateral configuration k 1 ∼ k 2 ∼ k 3 ∼ k: For P L (q) ∼ q ν , the integral is UV convergent for ν < 1 and IR convergent for ν > −1.
By choosing the bias in this range, FFTLog will give accurate values for the integrals. In contrast, if ν is chosen to be outside this range, we must add the relevant UV or IR limits by hand (taking care to include subleading divergences if necessary).
Considering all bias terms, the limits of B 222 take the following schematic form for equilateral triangles: where {f i } are some polynomials, and we consider only the leading-order contribution for each bias parameter. Inserting P L (q) ∼ q ν as before shows that a term containing K powers of b 2 is UV convergent for ν < 1 − 2K/3, implying ν < −1 for b 3 2 , significantly tighter than the ν < 1 limit for matter (i.e. b 3 1 ). However, the UV limit of b 3 2 is fully degenerate with the bispectrum shot-noise ( 0 in 2.13), and should be subtracted off in practice, as for the b 2 2 contribution to P 22 . If we adopt ν > −1, this term will not be captured by the FFTLog formalism, thus the subtraction becomes implicit. In this case, we require ν < −1/3 to avoid the b 2 2 divergence (and the second-order b 3 2 divergence, both of which are degenerate with the 2 stochasticity in 2.13). In the IR, (B.5) shows that the integral is convergent for ν > −1 for terms involving two or more powers of b 1 , and ν > −3 else. To satisfy all the conditions simultaneously, we may take −1 < ν < −1/3, dropping the shot-noise piece.
For B I 321 , the limiting UV and IR form is given by where X ∈ {b 2 , b 3 , γ × 2 } and ellipses are taken to mean bias operators excluding X (in the UV) or b 1 (in the IR). (B.6) implies that UV divergences can be avoided if we take ν < (1 − 2K)/2 when the term involves K powers of b 2 , b 3 , or γ × 2 ; these are all the composite operators appearing at third order. Furthermore, as in B 222 , the UV limits of the terms involving two powers of b 2 , b 3 and γ × 2 are proportional to shot-noise (this time of the η 0 variety in 2.13), 9 and should be subtracted off in practice (or dropped implicitly by fixing ν > −3/2). In the IR, divergences vanish for ν > −1 for terms involving b 3 1 or b 2 1 and ν > −3 else. Overall, we require a bias of −1 < ν < −1/2 to satisfy all conditions, assuming subtraction of the η 0 shot-noise contributions. For B II 321 , we require the UV and IR convergence properties of q Z 3 (k, q, −q)P L (q), which we labelP 13 (k) by analogy with the galaxy power spectrum (2.9). This natively involves all bias operators in (2.2) up to third order; however, this set is reduced to just {δ, G 2 (Φ v ), G 2 (ϕ 2 , ϕ 1 )} when the renormalization conditions are applied. These conditions demand lim for renormalized operator X and linear density field δ L (k), i.e. there can be no loop contributions which do not decay in the UV limit [106]. The contribution of all composite operators (e.g., δ 2 ) toP 13 is exactly that of a non-decaying loop diagram (since there is no suppression by the F 3 kernel), thus must vanish when the operators are properly renormalized. This leaves only G 3 , which evaluates to zero after averaging over the angular part of q. Following these redefinitions (which do not affect B 222 and B I 321 ), we find the UV and IR limits:P UV divergences occur unless ν < −1, and IR divergences occur unless ν > −1 (for b 1 ) or ν > −3 (else). As for P 13 [99], there is no range of biases which satisfy all the conditions; in this case, we can choose −1 < ν < 1 (satisfying the IR limits, and avoiding subleading UV divergences at ν > 1), and correct the UV part by adding the relevant limit by hand, which takes the following explicit form in real-space: proportional to the velocity divergence σ 2 v . Finally, we consider B 411 . This contains the fourth-order bias operators, and involves Wick contractions of linear density fields within the same operator, permitting simplification via the renormalization condition: which is proportional to the UV limit of B 411 . The first effect of this is to remove contributions from any fourth-order composite local evolution operator (such as δ 4 or δ G 3 ); these operators were already dropped from the bias expansion in (2.2). Secondly, this will remove a number of UV divergences in the below. Before bias renormalization, the UV and IR limits of the remaining terms take the form: where ellipses represent additional bias terms which impatience lead us to ignore. The first line is UV convergent for ν < −3 (first term, involving composite operators) or ν < −1 (second term, no composite operators). However, the first term possesses a UV limit that does not decay as an (negative) integer power of q 2 , violating the renormalization condition (B.10). The precise action of bias operator renormalization is to remove such terms (and only these, as far as this diagram is concerned). By evaluating the diagram with ν > −1, such contributions will be avoided, i.e. the operators will be correctly renormalized. In the IR, we find that divergences can be avoided by setting ν > −1 (for terms involving the first and second order operators proportional to b 1 , γ 2 or b 2 ), or ν > −3 (for the remaining terms). As forP 13 , there is no single bias that will simultaneously remove all the UV and IR divergences in B 411 , even after bias renormalization. Fixing −1 < ν < 1, we may compute the full expression by manually adding the appropriate UV limit to the FFTLog result. These limits can be computed straightforwardly from the kernels in mathematica and are omitted from this publication to avoid unnecessary tedium.
B.2 Redshift-Space
In redshift-space the perturbation theory kernels depend not only on the lengths q, |k 1 − q| and |k 2 + q| but also the LoS angles µ i ≡k i ·n andq ·n. 10 Although we are primarily interested only in the bispectrum monopole (i.e. that integrated over µ 1,2 , with a suitable Lebesque measure, as in 2.20), the full dependence on µ i is necessary for accurate calculation of coordinate distortions (2.21), thus we cannot simply average over µ i before computing the loop integrals; furthermore, this is difficult to perform analytically due to the presence of high powers ofq ·n. After expanding the kernels as polynomials, we will find loop integrals of the form: for n ∈ {0, 1, · · · , 6} (cf. 3.1), with prefactors depending on µ i , k i , biases and f (z). Below, we consider how to compute this utilizing the FFTLog procedure, generalizing the approach of [112] for the power spectrum. First, we expand theq ·n angles as Cartesian sums, i.e. 3 i=1q in i , and pull out the LoS vectors from the integral. The remaining function is a fully symmetric rank-n tensor, given by This has dependence only on k 1 and k 2 ; as such, its tensorial dependence can be written in terms of the components of k 1 , k 2 , and any isotropic tensors of relevance, i.e. the Kronecker delta. 11 Explicitly, this takes the form: where {O k } is the set of all independent symmetric rank-n combinations ofk i 1 ,k i 2 and δ ij K . As an example, the n = 2 operators are {δ ij We then define an "overlap matrix", giving the correlation between basis elements: (assuming Einstein summation conventions); this allows extraction of the A k coefficients via A k = [I] −1 kk O k i 1 ···in F i 1 ···in , where the second term is just the contraction of (B.14) with various powers ofk 1 andk 2 . Finally, we contract (B.14) with n copies ofn i to yield where the first set of parentheses contains a set of (k 1,2 ·q coefficients inside the q integral, and the second contains powers of µ 1,2 . To make this explicit, we give the n = 1 case: writing ν 12 ≡k 1 ·k 2 . In this manner, the FFTLog integral can be performed for arbitrarily large n. We adopt this method to compute the bispectrum templates in redshift space, applying it as a simplification step before the loop integrals are computed as in Appendix B.1. Notably, the above decomposition breaks down in the limit ofk 1 ·k 2 → −1, i.e. for k 1 k 2 , whence there is only one angle in the problem. The corresponds to flattened triangles (with k 1 = k 2 + k 3 or √ x + √ y = 1), which contain the divergence 1 − ν 2 12 → 0. Strictly speaking, this divergence is cancelled by the numerators, oncek 1 = −k 2 is identified; however, if one separately computes the loop integral coefficients proportional to powers of µ 1 and µ 2 , numerical issues will arise. In this limit, we adopt a different angular decomposition, noting that (B.13) can depend only onk 1 and the Kronecker delta. The basis tensors are much simpler in this case, for example, with {δ ij K ,k i 1k j 1 } for n = 2, and facilitate computation in an analogous manner to the above. For n = 1, (B.17) becomes which does not diverge. This divergence also illustrates the importance of expanding the bispectrum templates in the {µ, χ} basis (with µ ≡ µ 1 , χ ≡ 1 − µ 2 cos φ) rather than {µ 1 , µ 2 } (cf. §3): the former is undefined for flattened triangles, whence µ 1 = −µ 2 , whilst the latter simply has dependence only on µ in this limit (noting that µ 2 = µ ν 12 − 1 − ν 2 12 χ → −µ as ν 12 → −1).
B.3 Implementation
The above tricks allow us to efficiently compute the one-loop bispectrum in redshift space. A rough overview of the computation is the following: 1. Expand the relevant (symmetrized) perturbation theory kernels as polynomials in q, |k i ± q|, µ i andq ·n.
5. Compute the bispectrum templates for each of the 47 combinations of µ i χ j and the relevant combinations of bias parameters using the FFTLog algorithm. This is performed for a grid of values of k 1 , x, y, with flattened templates (obeying √ x+ √ y = 1) computed separately, using the alternate angular decomposition given in Appendix B.2, and involving only 7 non-trivial powers of µ.
6. Create a three-dimensional linear interpolator for each template using the precomputed bispectrum shapes (combining full and flattened configurations).
8. Compute the full bispectrum as a sum over templates, weighted by the bias configurations and any necessary discreteness weights.
Notably, only steps (5) and beyond depend on the power spectrum template, and thus the cosmological survey in question. We note one further subtlety: computing the bispectrum templates near (but not at) the flattened limit of √ x + √ y = 1 can lead to numerical issues due to large values of 1/(1 − ν 2 12 ), which appear in the angular decompositions of (q ·n) raised to the n-th power (cf. B.17). To counter this, when the templates are being computed and the condition √ x + √ y < 1.1 is met, we replace the FFTLog prefactor by its Taylor series in (1 + ν 12 ), artificially removing the divergent terms (which are present only due to numerical inaccuracies). Mathematica and Python code implementing all of the above steps is publicly available at GitHub.com/OliverPhilcox/OneLoopBispectrum. Following initial testing of the FFTLog routines against explicit numerical integration for a small number of bins, we use the following choices of FFTLog bias: ν = −0. 6 √ y, subject to the triangle conditions. This is sufficient to ensure that the spectra are subpercent accurate in the regime of interest; the results are largely unchanged if the number of FFTLog frequencies is reduced by a factor of two. The computation requires ∼ 10 4 CPU-hours to compute all templates (entirely performed within Mathematica), with the majority of time devoted to B 222 , and could certainly be optimized further. Calculations have been compared against explicit numerical integration of the (unsimplified) bispectrum kernels, and we find excellent sub-percent agreement in all cases.
C Counterterms from Redshift-Space Distortions
The RSD mapping to O(δ 4 1 ) can be obtained by expanding (2.7) to fourth order: To facilitate renormalization, we must smooth this expansion with a low-pass filter of some size R = Λ −1 . Products of fields at the same point (contact terms) are sensitive to short-scaled modes and hence must to be smoothed and renormalized. We denote these operations by square brackets, [...] R . Galilean symmetry implies the following schematic structure of the renormalized correlators (see [135][136][137] for the first order results), where u i , δ are the smoothed long-wavelength velocity and density fields (for clarity, we will drop the subscript in the below). To preserve Galilean symmetry, the operators O should not depend on the smoothed velocity field.
Note that the velocity field scales like k −1 δ k at the linear order, i.e.
D Stochastic terms
In this section we discuss stochastic contributions to the one-loop bispectrum.
D.1 Real space
The stochastic contributions to the galaxy density field in real space are given in terms of by (some operators are present in [137]): (D.1) Note that δ = 0 by definition. There are two non-trivial possibilities to contract operators in δ to obtain the tree-level bispectrum contributions (with free coefficients shown in color): where primes denote that we drop the Dirac delta function. These match the operators present in the tree-level bispectrum model [80]. At the one-loop order we find three distinct contractions: (k 2 2 + k 2 3 )P (k 1 ) + cyc. n , O k 2n−2 : d 1 k 2 1 k 1 k 2 k 3 =d 1 A shot k 2 1 + cyc. n 2 .
In addition, there are purely stochastic terms that generate the bispectrum of the order k 2n−2 . These arise from the following combinations: After performing angular integration (in the absence of coordinate-distortion effects), this term takes the same form as the real-space term ∼ k 2n−2 . Thus, it does not produce a new contribution to the bispectrum monopole, though is important if higher-order multipoles are also considered.
E Prior volume effects
In this section we study the prior volume effects present in our posteriors when the one-loop bispectrum likelihood is analyzed with small data cuts, such as k B max = 0.12h Mpc −1 . At face value, the posterior distributions from this analysis are several σ away from the true values. However, here we show that as much as half of this shift can be explained by prior volume (marginalization projection) effects. Indeed, such effects are expected to be present when the data volume is not sufficient to tightly constrain model parameters, which is the case for analyses with low k B max . We performed the following test: rerunning our full analysis on the mock data generated by our fitting pipeline for the best-fit cosmology at k B max = 0.15 h Mpc −1 . This mock data is simply a theory curve without any statistical scatter. In the absence of prior volume effects our pipeline must exactly recover the input parameters. However, when we fit this mock bispectrum data at k B max = 0.12 h Mpc −1 , we find that the mean values recovered from our pipeline are shifted relative to the input values at the (1 − 1.5)σ level, as shown in Fig. 5. This is evidence of prior volume effects. Furthermore, the shifts are in the directions of the apparent biases observed in the actual data ( §6). Thus, if we subtract these shifts from the actual posteriors at k B max = 0.12, the mean posterior values would match the true input parameter values at least within the 99% CL. Finally, we note that the parameters b 2 , b G 2 and b Γ 3 are highly correlated; this means that a shift in one would induce a shift in both.
As an additional check, we repeat our mock analysis for k B max = 0.15h Mpc −1 . Overall, we find much improved agreement between the mock and actual analyses. The posteriors for σ 8 and b Γ 3 are still shifted with respect to the ground truth by 1σ (which is smaller than 1.5σ shifts in the k B max = 0.12h Mpc −1 case), but all other parameters are recovered without noticeable bias. P +Q 0 +B 0 (k B max = 0.12 hMpc 1 ) Mock, P +Q 0 +B 0 (k B max = 0.12 hMpc 1 ) Figure 5.
Posterior distributions of the clustering amplitude and low-order nuisance parameters from MCMC analyses of the power spectrum and bispectrum likelihoods from the redshift-space analysis at k B max = 0.12h Mpc −1 for the PT Challenge simulation data (in green) and for the mock bispectrum data vector (in gray) computed with our pipeline for the bestfit cosmology at k B max = 0.15h Mpc −1 . Dashed lines show the input values for the mock data-vector, whose discrepancies with the grey posteriors indicate clear evidence for prior volume effects. | 18,134.2 | 2022-06-06T00:00:00.000 | [
"Physics"
] |
Non-Repeatable Experiments and Non-Reproducible Results: The Reproducibility Crisis in Human Evaluation in NLP
Human evaluation is widely regarded as the lit-mus test of quality in NLP. A basic requirement of all evaluations, but in particular where used for meta-evaluation, is that they should support the same conclusions if repeated. However, the reproducibility of human evaluations is virtually never queried in NLP, let alone formally tested, and their repeatability and reproducibility of results is currently an open question. This paper reports our review of human evaluation experiments published in NLP papers over the past five years which we assessed in terms of (i) their ability to be rerun, and (ii) their re-sults being reproduced where they can be rerun. Overall, we estimate that just 5% of human evaluations are repeatable in the sense that (i) there are no prohibitive barriers to repetition, and (ii) sufficient information about experimental design is publicly available for rerunning them. Our estimate goes up to about 20% when author help is sought. We complement this investigation with a survey of results concerning the reproducibility of human evaluations where those are repeatable in the first place. Here we find worryingly low degrees of reproducibility, both in terms of similarity of scores and of the findings supported by them. We summarise what insights can be gleaned so far regarding how to make human evaluations in NLP more repeatable and more reproducible.
Introduction
Human evaluation is widely seen as the most reliable form of evaluation in NLP.The traditional view in the field, here expressed for MT, is that "automatic measures are an imperfect substitute for human assessment of translation quality" (Callison-Burch et al., 2008).Numerous papers have reported meta-evaluations of metrics in terms of correlation with human judgments (Belz and Reiter, 2006;Espinosa et al., 2010;Hashimoto et al., 2019;Clark et al., 2019;Sellam et al., 2020).However, recently several papers have highlighted issues aris-ing from lack of standardisation (Belz et al., 2020), incomplete details reported for evaluation design (Howcroft et al., 2020), and poor experimental standards (van der Lee et al., 2019).In this paper, we address issues that intersect with all of these.
Our starting premise is that in order to act as litmus test of quality, human evaluations need to be able to be relied upon to produce the same results, at least in the sense of supporting the same conclusions, when run multiple times.This ought to be a low-threshold requirement, but is in fact very rarely assessed at all, let alone routinely established for new evaluation methods.Inter-evaluator agreement is more commonly assessed, but falls far short of establishing whether an experiment when repeated produces similar results and/or supports similar findings.Our aim in the work reported here1 is establishing the reproducibility, or otherwise, of current human evaluation practices, in order to provide evidence-based indications regarding how they can be improved, thereby going beyond recent opinionbased recommendations regarding better practice.
This paper makes five main contributions: (i) an annotation scheme capturing experimental properties playing a role in repeatability and reproducibility (Section 2 and Table 1); (ii) an assessment of the repeatability of human evaluation experiments in NLP (Section 2); (iii) a state-of-the-field assessment of the reproducibility of human evaluations in NLP (Section 3); (iv) the dataset of paper details and annotations our analyses are based on;2 and (v) evidence-based recommendations regarding how to improve the repeatability and reproducibility of human evaluations in NLP (Section 4).
We use the terms repeatability and reproducibility as follows.Repeatable is a property of experiments meaning able to be repeated with identical experimental design.evaluations, meaning producing the same results and/or findings when run multiple times.
Repeatability of Human Evaluation Experiments
In this section, we describe (Section 2.1) our 4stage process for assessing human evaluations in terms of repeatability as a precondition for inclusion in a coordinated set of reproductions.As part of this process, we annotated papers and then experiments with evaluation properties, and we examine what these reveal.Because the final stage of this selection process introduced non-systematic selection (to meet the needs of the coordinated studies design), we also verify our findings on a separate, randomly selected subset of papers (Section 2.2).
For an overview of the selection/filtering process, see the flow diagram in Appendix A.
Identifying repeatable experiments
Selection procedure.To start, we extracted all papers containing the key phrases "human evaluation" and "participants" from TACL and the ACL main conference in the ACL Anthology (177).We included papers from 2018 to 2022 inclusive. 3We manually checked and excluded papers that did not report a new human evaluation of system outputs.
Paper-level properties.In the second stage we annotated seven paper-level properties including language(s), number of systems, dataset and participants (for details, see Belz et al., 2023).During the annotation process, we excluded papers that had prohibitive barriers to reproduction, which meant those that we estimated to have cost >USD 2,000)4 , and/or that had a longitudinal design, and/or that used highly specialised experts as evaluators such as doctors5 .This left 116 papers, of which 29 are from TACL and 87 from ACL.
Experiment-level properties.We then split each paper into the experiments it reports and started annotating each experiment with our fine-grained annotation scheme (Table 1; for additional details see Appendix B).At this point we estimated we had enough information to complete the annotations in the case of just 5% of our papers.We therefore started contacting authors to obtain the missing information.Following the prolonged contacting process (for details see Belz et al., 2023, and Appendix C), we obtained the requested information for just 20 papers (containing 28 experiments).Using both the publicly available and author-provided information, we were able to collate property values to the extent shown in Table 3: 20 experiments had no unclear properties, and 8 had one or more.
That we were able to find clear properties for 20 of the 28 experiments in Table 3 does not indicate that these experiments could definitely be recreated, just that we have the minimal level of information required to attempt recreation.That we can only clear this first hurdle for 17% of the 115 6 papers we started with is alarming.
Bugs, errors and flaws.Moreover, in the process of collating and checking experiment details, we found several types of issues that in some cases called into question whether they should be repeated at all, for ethical and/or scientific reasons (for details see Belz et al., 2023).
Verification on random subset of papers
In order to verify the above finding that only 5% of papers are repeatable from publicly available information, we sampled a new batch of papers from an expanded set of 631 ACL, TACL and EMNLP papers that matched the keyword search, and did not fail any of our inclusion tests as above.
We annotated the 26 experiments reported in these 20 randomly sampled papers using the same procedure as in Section 2.1, except that we only used information that was publicly available either from the paper, supplementary material, or hyperlinks in the paper, e.g., a GitHub repository.In particular we tried to find the system outputs that were shown to participants, and the interface, form, or document that participants completed.
We found the above information for just 5% of papers, confirming our estimate from Section 2.1.Three papers made either just the interface or just the system outputs available.Table 2 shows the number of experiments out of all 26 where a given property was clear, for all properties in our annotation scheme.It is clear from the numbers in the table that very basic information such as number and type of participants is very often not findable.
Reproducibility of Results from Human Evaluations
To complement the assessment of the repeatability of human evaluations in NLP above, here we look at the reproducibility of results, as collated from recent reproduction studies.We examine similarity 6 116 minus one paper we excluded after receiving a response from the author.
in system-level scores between original and reproduction studies (Section 3.1), and assess whether scores support the same conclusions which can be the case even for dissimilar scores (Section 3.2).
Similarity of scores
Table 5 provides an overview of reproducibility results from reproduction studies of human system quality evaluations performed as part of the RE-PROLANG (Branco et al., 2020), ReproGen 2021 (Belz et al., 2021a), and ReproGen 2022 (Belz et al., 2021b) shared tasks.We exclude evaluations based on text annotation where a single overall aggregated score per system was not computed.
Column 1 identifies the original and reproduction study and the evaluation criteria assessed.The last two columns show the corresponding mean study-level and mean criterion-level coefficients of variation (CV * ) (Belz et al., 2022), and rank preservation, respectively.The columns in between show seven properties of each study/criterion, as per the HEDS datasheet (Shimorina and Belz, 2022); column headings identify HEDS question number (see table caption for explanation).
Confirmation of conclusions
Another perspective on reproducibility is whether the same conclusions can be drawn from two evaluations.
Discussion
When corresponding with authors to find missing information (Section 2.1), and when trying to find information from publicly available sources (Section 2.2), properties were often not obtainable for similar reasons.High-level properties such as the dataset, task, and language, could usually be found in the paper.The total number of items was usually available, but the relationship between participants and items was not.Information regarding recruitment of participants, as well as what they saw and did during the experiment almost always required additional information from the author.If authors were to make the files they used for the experiment, and a record of how these were processed (including the way they were presented to participants), then it would go a long way towards making the recreation of more experiments possible.
Repeatability, in the sense of being able to be repeated, is a basic requirement of all scientific experiments, perhaps most importantly as a prerequisite to independent verification through reproduction: "An experimental result is not fully established unless it can be independently reproduced" (ACM, 2020).It is therefore of concern in and of itself that the large majority (95%) of human evaluations in NLP is not repeatable from publicly available information (Section 2.2).This is further compounded by our finding (Section 2.1) that even with considerable effort (up to three emails to first and if necessary other authors) to obtain missing information to enable repetition, 80% of experiments remain non-repeatable.
Finally, where we were able to obtain and review all information needed for a repetition, we found multiple reporting mistakes, errors in scripts, and ad-hoc manual interference in live experiments that call into question for scientific and/or ethical reasons whether experiments should be repeated.
Our analysis of reproduction results (Table 5) showed that for the simplest binary output categorisation task, a good degree of reproducibility could be achieved (CV* = 6.11), but for most of the other, more cognitively complex, evaluations, degree of reproducibility was poor.Most significantly, the same set of conclusions could not be drawn regarding ranks of systems evaluated in any of the reproductions at the experiment level.
We would argue that we urgently need to (i) improve the repeatability of human evaluation experiments by making available publicly, as standard, full information about how the experiment was conducted, in sufficient detail to enable others to re-run it; (ii) test the results reproducibility of new evaluation methods prior to running full evaluation experiments with them; (iii) standardise evaluation methods, especially measurand (evaluation criterion) and measurement procedure, so that the reproducibility of each, once established, does not have to be tested every time.The worrying levels of errors and flaws in reporting and design we found can be in part addressed through standardisation and establishing reproducibility for standardised methods, but will also require a shift in expectations and awareness of how to conduct good quality human evaluations for NLP.
Conclusion
NLP needs human evaluation as a litmus test of quality, including as a reliable reference for metaevaluating other types of evaluation.In order to play this role, human evaluation needs to be verifi-ably reliable, and that includes being reproducible; in order to assess the reproducibility of results, we need to be able to repeat an experiment.However, our results showed that current human evaluations have very poor repeatability (we estimated that just 5% do not have prohibitive barriers to being repeated, and can be re-run without recourse to non-public information), and where we are able to repeat human evaluations, the growing number of results from human evaluation reproduction studies show that they have low degrees of reproducibility of both scores and conclusions.We derived recommendations for making human evaluations in NLP more repeatable and more reproducible, something that we surely need to do if we are to continue treating them as our most trusted assessment of system quality.
A Flow Diagram of Paper Selection/Filtering Process
The following diagram shows the steps in paper selection/filtering process (reproduced from Belz et al. (2023), for ease of reference):
B Details of evaluation experiment properties
All of the property names and values from our detailed annotations are listed below, along with descriptions of what was recorded for each property: 1. Specific data sets used; In the case of a relative evaluation, it refers to the set of outputs, e.g., a pair, that is being compared.
9. How many participants evaluated each item; for some experiments, this varied.
10. How many items were evaluated by each participant; for some experiments, this varied.In particular, for the 13 of 28 experiments that were crowd-sourced, 5 were known integers, 4 varied, and 4 could not be determined (we suspect these also varied).
11. Were training and/or practice sessions provided for participants; see the discussion below.
12. Were participants given instructions?Were they given definitions of evaluation criteria; see the discussion below.
13. Were participants required to have a specific expertise?If so, what type, and was this selfreported or externally assessed?; see the discussion below.
14. Were participants required to be native speakers?If so, was this self-reported or externally assessed?;For the first part we used the options yes, no, crowd-source region filters, and in one case that the experiment was performed with students at a university where the language was native.The latter two are inherently self-reported, although with some limited control by the researchers.Only for one of the experiments with native speakers did the researchers indicate that they had confirmed this, all others were self-reports.
15. How complex was the evaluation task (low, medium, high); assessment by authors of this paper.
16. How complex was the interface (low, medium, high); assessment by authors of this paper.
Classifying the type of participant, training, instruction, and expertise was very difficult.Firstly, not all experiments necessarily require detailed instructions but setting a threshold beyond which instructions become non-perfunctory is difficult.The same is true for training.In the end, we decided to record whether there non-perfunctory training, instruction, practice, or criterion definition.Expertise was also difficult to classify.Some papers would have originally reported 'expert annotators', but following our queries stated participants were graduate students or colleagues.Such participants were often called 'NLP experts'.In the end, we considered participants to be expert if the authors of the original study indicated that they were.
C Process for contacting authors
When we contacted authors of papers we followed a standard procedure.We considered the corre-sponding author to be the first author of the paper, unless a different corresponding author was explicitly stated.First they were sent the following email: The ReproHum project at the University of Aberdeen is running a multi-lab study where over 20 partner labs from across the world will be reproducing human evaluation experiments from NLP papers.The project is being led by Prof. Anya Belz, with Prof. Ehud Reiter as co-investigator, and myself as a research assistant.
To create a shortlist of papers to reproduce, we looked for papers containing human evaluations, at high-profile conferences such as «VENUE».We identified your paper "«TITLE»" from «VENUE» «YEAR» as a candidate for inclusion in our study.If included, the human evaluation that was performed for the paper would initially be reproduced by 2 different labs.One of our main objectives is to identify types of human evaluation that are associated with higher degrees of reproducibility so that the NLP community can then use this information to select the most appropriate methods for their studies.
We are writing to you today to ask if you can provide us with more information about your experiment to enable us to reproduce it under conditions that are as close to the original as possible.We are particularly hoping that you can provide the system outputs and questions that were shown to participants.
We would be most grateful if you could initially confirm that you are able to send us (links to) the below information (for each human evaluation that is reported in the paper): 1.The system outputs that were shown to participants.2. The interface, form, or document that participants completed; the exact document or form that was used would be ideal.3. Details on the number and type of participants (students, researchers, Mechanical Turk, etc.) that took part in the study.4. The total cost of the original study.
If you are able to provide the above information, we would be grateful if you could also confirm how soon this would be possible.If there was no response the above email, they were sent a second email with only minor adjustments to reflect that we had tried to contact them previously.A third email was sent in cases where we still had no response.At least one week passed between each email sent to an author.The first two emails were sent from the academic email account of a research assistant, although addressed from the whole project team.The third email was sent by a professor, and whilst this did elicit a small number of responses, most came from the first two emails.In the event that email addresses were no longer valid, we searched for a more recent email for the author, primarily by checking their most recent papers.In the event that we could not find any email address for an author, we attempted to contact the next author in the same way.We were able to find a working email address for one author from all bar one paper.Most were sent using a mail merge, although some were aggregated and sent manually, in cases where one author had many papers.
If you have any questions, please contact us.With best regards, Anya, Ehud, and Craig Project web page: https://reprohum.github.io Reproducible is a property of Is any reported native speaking self-reported (with no interaction with researchers, i.e. on MTurk or a double-blind study)?
Table 1 :
Properties in experiment annotation scheme.
Table 2 :
Table 4 assesses the (dis)similarity of ranks between the pairs of original and reproduction ex- Number of experiments out of 26 for which a given property was clear (random sample of 20 papers using publicly available information only).
Table 3 :
Number of experiments out of 28 for which a given property was clear (non-random set of 20 papers where authors provided missing information).
Table 4 :
Spearman's ρ as an indication of how closely matched system ranks are between original and reproduction studies (Pearson's r for reference).
Table 5 :
Overview of reproducibility results from existing reproduction studies in terms of (mean) CV* and rank preservation (last two columns).Evaluations are characterised in terms of some properties from HEDS datasheets: 3.1.1= number of items assessed per system; 3.2.1 = number of evaluators in original/reproduction experiment; 4.3.4= List/range of possible responses; 4.3.8= Form of response elicitation (DQE: direct quality estimation, RQE: relative quality estimation, Anno: evaluation through annotation); 4.1.1= Correctness/Goodness/Features; 4.1.2= Form/Content/Both; 4.1.3= each output assessed in its own right (iiOR) / relative to inputs (RtI) / relative to external reference (EFoR); scores/item = number of evaluators who evaluate each evaluation item.
Thibault Sellam, Dipanjan Das, and Ankur Parikh.2020.BLEURT: Learning robust metrics for text generation.In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881-7892, Online.Association for Computational Linguistics. | 4,625.6 | 2023-01-01T00:00:00.000 | [
"Biology"
] |
Highlighting the Salient Safety Issues in the National Road Traffic Regulation 2012
Land transportation is an integral part of modern day life. It has bridged spatial activities, enhanced commerce and general developments across large areas of society. However, it has also resulted in series of untoward consequences that have negatively affected families as a result of different fatalities. It is obvious that the regulating authorities do not implement the provisions of the law especially as it relates to the periodic certification and recertification of commercial transport drivers and their conductors. It is also obvious that the will to confront security personnel/corporate security vehicles using by the enforcement agencies that grossly violate the provisions of the law is lacking. The law is silent on safety issues such as tire life and conditions, road markings or dressing and geometric design features of the roadways that influence driving. Therefore, the need to institute measures to regulate/control operations, use and behavior on public roads by pedestrians, drivers and other vehicle users became apparent in Nigeria. The 2012 National Road Traffic Regulation was therefore prepared to guide all public road users in relation to their conduct and use of road infrastructure. The complete disregard by the public and enforcement agencies of the regulation has, therefore, necessitated highlighting of some salient safety issues in the regulation which will lead to a better use of our roads and highways when adhered to. These ranges from the registration of vehicles to the processes of the issuance of drivers licenses, parking speed and other personal conducts that could jeopardize the safety of persons and property of potential road users.
Introduction
Land transportation is as old as man [1] -from man being his own means of propulsion (foot) to series of inventions in transport development that has amongst others culminated in the automobile.Beginning from the mid twentieth century, the increased use of the automobile has reshaped, not only the structure of our settlements and the location of activities but also the relationships between settlements, production, commerce, leisure and others.Attendant to the increase in the use of automobiles are the spatial spread of activities, construction of roads, motorways and highways for expeditious movement between locations.The resultant effect is the desire to increase speed to close the gap.In all transportation in general and motor vehicle, transportation has contributed immensely to the wellbeing and growth of the economy [2].
The need to harness such positive input to national development elicited the development of transport policies in most countries, including Nigeria.However, the positive input to the national and local economy, the facilitation of neighbouring and leisure and other positive outlooks are not without problems.The demand for fast cars have pushed manufacturers over the years to improve technology to design and improve the speed of cars that have been applauded in some quarters while the development has caused misery to families, communities and society due to fatalities resulting from several factors including speeding on the roads and highway.The widespread use of cars and increased use of speed have concomitant increase in fatalities, loss of property and other negative and debilitating impacts arising from the usage of motor vehicles of different types, size and, speed; by people of different age, education, agility and socio-economic status.These have, over the years, necessitated the regulation of the system.Road traffic legislations and codes are therefore the direct result of the need to maintain the safety of all road users and property in the society.
Amongst such measures in the past two decades in Nigeria does the passage of Road Traffic Act, Road Traffic Regulation and setting up of additional statutory agencies to enforce the provisions of the laws in collaboration with others already exist.The effects of unsafe acts and importance of safety on public roads and highways cannot be over stated over the years; this paper therefore ties to identify the salient safety features of the Road Traffic Regulation in Nigeria and its operationalization in Rivers State.
Objectives of the Road Traffic Regulation Act 2012
The concern to achieve safe movement within and between locations, economic development and wealth creation and national integration is not limited to a single country.Efforts internationally are therefore made to standardize and adopt international best practices.To this effect, the National Road Traffic Act was enacted with the objectives of: i. Give effects to two international conventions on Road traffic and Road Signs and Signals.ii.Provide operational requirements, rules and regulations for road traffic operations and operators.iii.Make provisions relating to exempted bodies, operators of transport services, operation of ambulance services and towing vehicles; and iv.Provide miscellaneous motor traffic regulations to ensure road safety through personal conduct, interpersonal relationship and handling of vehicles/traffic on public roads.
Registration of Vehicles
Salient safety issues in the National Road Traffic Regulation on registration of vehicles and other related matters are discussed from Part II-VI, of the Regulation (pp, B1530-46).It focused centrally on the methods of application for the registration of vehicle and issuance of licenses.It noted that the registration of all manner of motorized traffic/vehicles (vehicles and motorcycles) is to ensure that the vehicle can easily be identified and owner traced or reached in emergencies or when necessary through the issuance of vehicle license and motor Plate number.
The provisions in this section not only provide the categories of vehicles and their features; the basis for the use of the vehicle in the country, but also re-registration in cases of change in status of the vehicle.Furthermore, this section specifies change in ownership and colour of vehicle for the purposes of security.To ensure that a driver with a vehicle on the road is fit to be in charge or control of a vehicle, the law made regulations for the training of potential drivers.It sets guidelines for the establishment of driving and licensing of driving schools including the certification of driving instructors in the Federation [3].Fines are stipulated for offences under this section.
Learners Permit, Issuance and Revocation of License
The focus from part VII-IX (pp.1546-1562), the regulations to ensure that a potential driver is properly groomed to drive on the roadways in the country after a period of learning in a driving school, the regulation specified the issuance of learners' permit that will be usable for three months.It spelt out the conditions wherein the learner, issued with such permit will drive on the road and accompanied only by licensed instructor and not a passenger.
Often times, this section is violated in the state as vehicles with such learner's signs are seen with only the driver on the driver's seat, it is equally often unknown to the public the competence of the person driving at the stated time [3].In the same vain the regulation categorized driver's license and learners permit into classes A-J for purposes of ensuring that those within a licensed permit for a particular class do not go over to drive other categorizes so as not to jeopardize public safety.Also specified is the age limits of those entitled to apply for a driver's license and the conditions that such would have undergone for a new applicant, for example, tutelage in a driving school and a successful administration of driving test.
Furthermore, a medical fitness attestation and visual acuity must be submitted with an application and biometric capture.It also specified the requirements for the issuance of commercial driver's certification.Conditions for the swap of license categories involve the test to ascertain fitness to handle such category of vehicle before the new license would be issued.Provisions are also made to license diplomats in the usage of cars in the country.For purposes of security which translate to safety, driver's license is personalized with signature and fingerprint capture embedded in the license.
The terms and conditions for the revocation of driver's license are copiously specified which include falsification to obtain license, negligence, use of drugs and alcohol, defective vision, neurological, muscular and other forms of ailments.The competence test which is undertaken as driving test is a practical exercise focused on the following: Control of vehicle in the traffic, stopping of vehicle from normal speed; turn corners, crossing main roads and turn from side to main roads; passing or overtaking other vehicles on the roads; backward movement along a straight road and around corners; turn round in a road; understanding all the displays on the dashboard including all indicators and figures shown on the speedometer of the motor vehicle; demonstration of knowledge of the rules of the road, the signals set out, to these Regulations, signs and traffic light signals illustrated.First aid skill and the principal offences set out under the Act and Regulations.
Others are read at a distance of twenty-three meters in daylight (with the aid of glasses, if worn), a motor Vehicle Identification Number Plates; and generally drive competently a motor vehicle or in the case of a person suffering from disability, a motor vehicle of the particular class of which the application relates, without danger to and with due consideration for other road users.The law also noted the difference between driving an automatic transmission fitted vehicle during the test to so grant license with the non-automatic vehicle.
Due to the need for caution and safety of passengers, the law mandates all commercial driver's license holders to undergo a minimum of cumulative nine hours of competence training within a period of three years which qualifies them to be issued a certificate of competence which shall form part of the requirements for renewal of the license.So also is the special training and licensing of convoy drivers which are usually associated with high speed.For the sake of safety of the public, the law spelt out situations for convictions that would lead to the withdrawal of license from a driver which include: "driving under the influence of alcohol or any drug; participating in, or organizing an unauthorized speed contest on a public highway; receives 3 convictions for failure to maintain insurance; receives 3 convictions for inconsiderate driving within 3 months; receives 3 convictions for failure to properly secure or use child restraint system in a vehicle; exceeding the prescribed speed limit; driving with defective and uncorrected eyesight; or where a driver has accumulated more than 14 penalty points within 12 calendar months." Furthermore, the law made an offence "for any person to authorize, order, consent or knowingly permit the operation of any motor vehicle owned by him or under his control by any person, when he has knowledge that such person is disqualified or has no legal right to do so."Toeffectively distinguish a driver permitted to drive a stage carriage or omnibus, the law prescribed the use of a badge which is obtainable only by valid driver's license owners while a conductor is equally required to apply for and obtain a conductor's badge.This is for the purposes of control, to prevent a conductor from usurping the function of a driver.It explicitly stated the conditions of revocation of both licenses.
Taxis, Stage Carriages, Omnibuses and Motorcycles for Hire
As a caveat in part X-XII (pp.1562-1588), the law prescribed to be posted the capacity of each of the above system on the vehicle to prevent over loading as it did specify that the vehicles are appropriately marked for ease in identification with the colour code of the locality.Under this section, seat conditions and sizes are prescribed including driver's seat area.
Also, are clear conditions for the carriage of passengers, goods, passengers and goods, and the protection of fright or goods in transit.Besides, the conditions for the use and safety of motor cycle riders and its passengers are spelt out.Furthermore, the condition for hiring, refusal to be hired and conduct between the driver, conductor and the hirer are made known.
The requirement and methods of operation of school bus is also captured.Documentations of intercity passenger vehicles are prescribed which includes the preparation of passenger manifest.Other commercial vehicle size, capacity, and weight; its load carrying system, tools, safety materials and warning systems are properly articulated in the law.The number and conditions of the breaking system appropriate to a vehicle is specified; even as measures to avoid noise pollution and excessive gaseous emission are instituted.
Other safety systems whose working conditions are mandatorily specified are: functional headlights, parking lights and traffic ting lights, illuminators and two side lights, less illuminating lights for bigger vehicles to show the width of the vehicles, while also stating the limitations in the number of lights that should be used without causing nuisance [4].Other requirements in a vehicle as stated in the law are: Have reflective red and silver tapes fitted to the rear and sides of the vehicle.a) Electric horn sounding not more than a single note.b) Equipped with at least two mirrors which shall be fitted externally, one on the offside and the other on the nearside of the vehicle which shall assist the driver to be aware of traffic to the rear and on both sides rearward.c) Fitted in the front and rear seats, seat belts and child safety seats which shall be securely worn by the driver and the other occupants of the vehicle while the vehicle is on motion.d) The driver of a motor vehicle shall be responsible for the children who are passengers in such a vehicle and shall ensure: i.The proper use of child locks in every vehicle where one is installed; ii.The proper use of child safety seats for every child that is 7 years and below but not in the front seat.
Other Vehicle Features
All vehicles on the road must have effective and appropriately functional wheel, steel hub or axle tree.Vehicle or trailer with a defective wheel, steel hub, or axle tree, shall not be used on any public road.Also included in the law are: the provision of vehicles with a strong and unreliable steering gear, steering apparatus fitted on the right hand side of the vehicle.
The prescribed standard of windscreen is laminated or safety glass that would not shatter when it breaks on impact and also clear enough as not to obscure the vision of the drivers while the vehicle is being driven on any public road.For visual clarity during rain, it is mandated that all motor vehicles must be fitted with electronically or mechanically operated windscreen wipers which shall be maintained and kept in proper working condition.
Other requirements instituted for safety on the road are the provision of trailers with mudguards to catch as far as practicable mud or water thrown up by the rotation of the wheels, this is not only to protect the trailer but especially vehicles at the rear from being blinded with mud raised to its windscreen including possible damage to the windscreen.Every motor vehicle shall be fitted with an efficient speedometer which shall be plus or minus ten per cent accurate at 50 kilometers per hour and shall be maintained in proper working condition at all times so as to gauge speed.Safety requirements for Tankers carrying spirit, explosives or other inflammable substances are the installation of the tanker with double pole armored wiring with insulated return electrical units and a battery insulation master switch; carrying of warning danger labels to be displayed conspicuously at the front and rear of the vehicle as carrying of additional freight or load on top of the vehicle is prohibited.Vehicles which convey hazardous materials cannot be used to transport other materials.
Reflex reflectors visible to an approaching driver 150m away in clear weather when illuminated by light of the rear vehicle are mandatory, while all vehicles are to be certified road worthy by a Vehicle Inspection Officer who is mandated to issue a certificate valid for six and twelve months for commercial and private cars respectively.Parking of heavy goods vehicle on the road is also prohibited.
Rules on Speed and Road Crossing
This section considers guidelines on speed limits, parking of vehicles on the road and, turning/crossing on the road atgrade rail line crossing.
Speed Limits
Part XIII from page 1588, prescribes speed limits for different categories of vehicles and roads in the country.The law specified that no driver should drive recklessly any vehicle or drive at a speed more than the prescribed limits for the category of vehicle and road; therefore, it is also an offence to drive a vehicle on the road not fitted with a speed limiter, stating the measure with which regulators can use in determining speed of vehicles.
The law, however, recognized and exempted certain categories of official vehicle drivers on duty from prescribed speed limits noting that such drivers must proceed to do so taking cognizance of the safety of other road users; fitted with a device capable of emitting a sound (siren) and with an identification lamp (flashers and beacon lights).
Sirens and Traffic Signs
The regulation also identified those that are permitted to the use of siren on the road (part XIV, pp.1589).It states that all drivers driving a vehicle must obey established authority in control of vehicles either verbally or by sign to stop, slow down or any directed line of action.Instructions by the Act is to obey all traffic signs or traffic signals on any public road and all notices on any public road where such notices are erected or exhibited by an authority responsible for the construction or maintenance of the public road, for the purpose of prohibiting, restricting, or regulating traffic over bridges or sections of the public road; stop when approaching a pedestrian crossing to allow pedestrians standing at the pedestrian crossing to cross the road.
It further inputted that no person shall damage, remove or alter or reposition any road traffic sign or traffic signal, or any other sign, signal, marking including colour and lettering or other device, displayed without lawful authority.The importance of functional trafficator lights and methods of usage in addition to the use of hand to indicate direction are highlighted.It stated that turning vehicles must give adequate and sufficient notice on the road before such turning is negotiated.
Rules on Road Crossing
Part XVII (pp.1591)Rules on vehicular road crossing.The rules are centered on crossing of roads when there are no obstruction of vehicles due to clear traffic and sufficiency of distance from another vehicle to allow access to cross and non-endangering of other road users; the rules also established the process of entering an intersection or other marked area, noting that, unless there is sufficient space on the other side of the intersection or other marked area to accommodate a vehicle, the operating vehicle must ensure the vehicle does not impede or obstruct the passage of other vehicles or pedestrians, notwithstanding any traffic-control signal indication to proceed; except when overtaking vehicles proceeding in the same direction, pass such vehicle only on the left side.
A driver or operator of a vehicle must not enter a public road unless he can do so with safety to himself and other road users; a driver is required to reduce his speed considerably when approaching a school or playground; and at all times give preference to children, the elderly, the physically challenged and visually impaired persons with any sight aid who wish to cross the road at pedestrian crossing points.
Overtaking
Furthermore, rule XVII (b) which specified measures for overtaking, succinctly stated in the regulations that drivers must recognize and respect road lanes as marked noting the width of their vehicle; crossing into another lane must be done without obstructing or endangering other road users and the slowdown of vehicles where necessary to be overtaken and passed by an approaching vehicle.Vehicular speed characteristics, directions or sides and conditions for overtaking are considered and stated.
Conditions that are not permitted also include: stopping of vehicles within8 meters from any corner, distraction and not being in full control of a vehicle and not having proper visibility, not take due cognizance of all traffic signs and notices lawfully placed on or near a road for the guidance of drivers; the lane to be used by slow moving vehicles and the placement of disabled vehicle on the road.
Parking of Vehicles
Part XVIII spelt out the regulations on parking.It stated that all vehicles parked on the road must not cause orlikely to cause danger, obstruction or undue inconvenience to other road users but park in a manner where specified, according to the specification.It prohibited a driver of a vehicle from parking the vehicle at or near a road crossing, a bend, top of a hill or a humpbacked bridge; on a foot-path; near a traffic light or pedestrian crossing; on a main road or one carrying fast traffic; opposite another parked vehicle or as obstruction to other vehicle; alongside another parked vehicle; on roads or at places or roads where there is a continuous white line with or without a broken line; near a bus stop, school or hospital entrance or blocking a traffic sign or entrance to a premises or a fire hydrant; on the wrong side of the road; where parking is prohibited; or away from the edge of the footpath.
At-grade Rail Crossing
An attempt at crossing of a rail line during the approach of a train is prohibited especially when there is any form of sign (manually, mechanically or electronically) indicating such or even where the train is visible to the driver.It further stated that no attempt be made at driving any vehicle through, around or under any railway crossing gate or lowering of barrier at a railway crossing or while such gate or barrier is closed or is being opened or closed; prescribing a minimum of 5meters stop distance from the closest level rail crossing.
Rules and General Duties of Drivers
This section (XIX) specified the general conduct of drivers while driving on public roads and the appurtenances that should not be added to a vehicle.
Phone Calls
The law succinctly disapproved the making and of calls while driving by a driver of a vehicle and even an instructor giving instruction to a learner on the road which it notes to include, sending or receiving oral or written messages.The exception to this rule is if the call is for genuine emergency to either the police, fire service, ambulance or any emergency services.
Dangerous Driving
The rules described dangerous and reckless driving which it prescribed heavy fines on, such as causing vehicle to drive backwards long distance instead of turning; following another vehicle more closely than is reasonable and prudent having regard to the speed of such other vehicle and the condition of the road; driving a vehicle between sunset and sunrise without the use of lighted lamps; permitting any person, animal or object to occupy any position in such vehicle which may prevent the driver from exercising complete control over the movements of the vehicle or signaling his intention of stopping, slowing down or changing direction; permit any person to take hold of or interfere with the steering or operating mechanism of the vehicle and the driver not occupying the proper position such that the driver losses control and does not have a full view of the road ahead.
Other actions with caution within this section are: running the engine of a vehicle unattended to or not taking precautions before starting the engine of a vehicle such as non-stationary vehicles; abandonment of broken down vehicle at the center of the road without moving it to the appropriate place with necessary precaution signs/signals; permitting any person to ride on the wings, running boards, fender, or sides of the vehicle except for the purpose of testing the vehicles during repairs; permitting, in the case of a commercial vehicle, any person to ride on the steps, tailboard, or roof of the vehicle, nor on any load or freight on the vehicle or on any trailer and permitting any person to be carried in the vehicle being pulled except the person in charge of controlling the vehicle.
Other Prohibitions
The regulation prohibits excessive emission of gas and fumes from the engine, and negligently or willfully depositing or causing to be deposited any petrol or other liquid fuel or any oil or grease or other flammable or offensive matter, ashes or other refuse, of whatever nature, from vehicles on or alongside a public road; allowing a vehicle engine run while petrol or other flammable fuel is being delivered into the fuel tank or starting of an engine before the conclusion of fueling and closure of tank cover; falling asleep while driving or in control of a vehicle; allowing any person to enter or alight from a vehicle on a public road, unless such vehicle is stationary and unless the person can do so with safety to himself and other road users; driving without a spare tire and other necessary tools and overloading of a vehicle.
Condition of Drivers
Part XX hinges on regulating drivers' fitness to drive on the road, the rule specified that no driver should be on the steering for more than ten and half hours within twenty-four hours.The driver is expected to have eight hours of continuous rest within a twenty-four hours' period and thirty minutes of rest every three hours in a trip that exceeds five hours.It noted that any time spent by a driver on other work in connection with the vehicle or load carried thereby shall be reckoned as time spent in driving.
Pedestrians and Pedestrian Crossing
Pedestrians are to observe and obey traffic signals at even pedestrian crossings though the driver of a vehicle is mandated to yield the right of way by slowing down or stopping (part XXIII).Pedestrians are not allowed to suddenly enter a pedestrian crossing and walk or run into the path of a vehicle which is so close that it will be impossible for the driver to yield as mandated.Vehicles are not permitted to overtake a stopped vehicle at a pedestrian crossing.Crossing at pedestrian crossing must be done with dispatch or swiftly; where pedestrian overhead bridge is provided, pedestrians must use the overhead bridge but if dependent on pedestrian crossing, all such crossing may be done within 91.44 meters.
Pedestrians must use sidewalks on the road or as near as the side of the road as possible where sidewalk is not provided.The law further stated that no pedestrian should cross a public road without satisfying himself that the road is sufficiently free from on-coming traffic to permit him to cross the road in safety and should not conduct himself in such a manner as to or as is likely to constitute a source of danger to himself or to other road users on such road.
Driving and Alcohol/Drugs
Part XXIV stated that no person should drive a vehicle or occupy the driver's seat of any vehicle on a public road while under the influence of intoxicating liquor or a drug having a narcotic effect while the engine is running.It stated the prescribed alcohol limits of 0.5 grams in 100 milliliters of alcohol or 80 milligrams in 100 milliliters of urine or blood test which shall be determined by use of a breath analyzer or in case of urine or blood test by a medical officer who shall make a report on the results of the test, stating measures for refusal to be subjected to such test.
Removal of Vehicles
The rule stated that there will be issuance of parking ticket to an inappropriately parked vehicle or vehicle parked in contravention of established parking system or standard which will precede the removal of the vehicle by an authorized authority, giving a twenty-hour notice (part XXV).The vehicle to be so removed would have been parked in a position or condition that would cause obstruction or constitute danger to other road users.
Prohibitions on an Expressway
Part XXVI of the regulations prescribed the different types of vehicles permitted to be used in an expressway.One maximum permissible size is larger trailers with extra-ordinary axle load and load that can be placed on it.Also prohibited on the expressway, are the movement, trading and grazing of cattle or livestock, tri-cycles, smaller motorcycles, hand pushed trucks, bicycles of all types, agricultural machines and pedestrian movement.While pedestrian movement will be permissible only in designated areas, the loading and offloading of passengers or goods will be restricted to only designated locations or bus stops.
It cautioned against the opening of the door of a vehicle by any person including the driver towards the moving traffic side unless reasonably safe and the alighting from a vehicle that is not stationary on the road.Furthermore, it cautioned against overloading of vehicles particularly of persons, and goods beyond the manufacturers specification.It also specified measures to be taken to bring the excess to the limits of the vehicle.It noted that the stopping or repair of vehicles on the carriageway of an expressway is prohibited.Improper entry and/or exit, crossing of the central reserve or making of U-turns except on designated areas are not permitted [5,6].
Conditions for the Use of Bicycles
Bicycles for use on the road must be fitted with appropriate braking systems and an efficient bell or warning system, mudguard [6][7][8].Riders of bicycle must not be attached to a moving vehicle, not wear an approved safety helmet or carry a passenger who is not wearing an approved helmet and ride in a negligent or dangerous manner especially under the influence of alcohol, drugs and other psychotropic substances to such extent as to be incapable of having proper control of the bicycle [9][10][11].
Regulations for Operations of Transport Operators
Part XXXIII of the regulations mandates all fleet transport operators who engage in inter-state city road transport services to establish a Safety Unit and appoint a Safety Officer as the head of the Unit who shall ensure that safety is maintained in the system and also be registered for licensing [12][13][14].Emergency phone numbers must be inscribed on fleets as they are to maintain records of drivers, vehicles, routes plied and road crashes and their causes, and submit same on regular basis [14][15][16].
Conclusion
The different sections of the regulation succinctly made provisions for the conducts that will lead to the safety of the country's roads and highways.Substantial provisions in the regulations are however not obeyed and utterly ignored without proper enforcement by the public and the regulators.To a large extent the failure to obey the content of the regulation even by the relatively educated in the society is attributable to ignorance on the part of the public.
It would be important to start early to introduce some of the content of the regulations to students below the tertiary school age.The strict adherence to the training system espoused in the regulation and even furtherance to include written text would improve driver's knowledge of the provisions in the law.Also, it is obvious that the regulating/enforcement authorities do not implement the provisions of the law especially as it relates to the periodic certification and recertification of commercial transport drivers and their conductors.Besides, the will to confront security personnel and other corporate security vehicles (bullion vans) by the enforcement agencies that grossly violate the provisions of the law is lacking.The law is silent on some other safety issues such as tire life and conditions, road markings or dressing and geometric design features of the roadways that influence driving. | 7,260.4 | 2019-10-21T00:00:00.000 | [
"Computer Science"
] |
An Enhanced Opposition-based Fire fl y Algorithm for Solving Complex Optimization Problems
Firefl y algorithm is one of the heuristic optimization algorithms which mainly based on the light intensity and the attractiveness of fi refl y. However, fi refl y algorithm has the problem of being trapped in local optimum and slow convergence rates due to its random searching process. This study introduces some methods to enhance the performance of original fi refl y algorithm. The proposed enhanced opposition fi refl y algorithm (EOFA) utilizes opposition-based learning in population initialization and generation jumping while the idea of inertia weight is incorporated in the updating of fi refl y’s position. Fifteen benchmark test functions have been employed to evaluate the performance of EOFA. Besides, comparison has been made with another existing optimization algorithm namely gravitational search algorithm (GSA). Results show that EOFA has the best performance comparatively in terms of convergence rate and the ability of escaping from local optimum point.
INTRODUCTION
In recent years, heuristic optimization methods have obtained a lot of attention from researchers.This is due to their better performance compared to mathematical optimization methods in coping with large and complex optimization problems.There are different types of heuristic optimization algorithms.One of the early works is Genetic Algorithm (GA) (Goldberg & Holland 1988), followed by other methods such as Ant Colony Optimization (ACO) (Dorigo et al. 1996), Particle Swarm Optimization (PSO) (Kennedy & Eberhart 1995), Gravitational Search Algorithm (GSA) (Rashedi et al. 2009) and etc.However, meta-heuristic optimization algorithms have the problem of being trapped in local optimum and slow convergence rates due to their random searching process.This leads to the development of hybrid algorithms that can overcome these issues effectively.
Firefl y Algorithm (FA) is an algorithm developed by X. Yang based on the fl ashing characteristics of fi refl ies (Yang 2008).The FA is illustrated based on three assumptions where fi rstly, all fi refl ies are of the same sex and therefore the attraction between the fi refl ies is independent.Secondly, the attraction between the fi refl ies is proportional to their brightness.This means, the brighter ones will attract the less bright ones.The fi refl ies will move randomly if all fi refl ies having the same brightness.Thirdly, the brightness of a fi refl y is decided by the objective function.
Compared to some other heuristic algorithms, FA is relatively more simple and easy to implement.However, like most of the heuristic algorithms, FA is also facing the problem in escaping from local optimum and pre-mature converges.Therefore, some improvements have been done on it by previous researchers.One of the improvements was introduced by X. Yang where a technique known as Levy fl ight is used to improve the randomization of FA (Yang 2010).Besides, modifi ed FA with cellular learning automata (CLA) was proposed to improve the ability and convergence rate of original FA (Hassanzadeh & Meybodi 2012).On the other hand, inertia weight based FA has been implemented to avoid the pre-mature convergence as well as to recover from being trapped in local optimum (Yafei et al. 2012).The improvement made to FA in this research differs from previous works.In order to further improve the performance of original FA in terms of convergence rate, the opposition-based learning (Tizhoosh 2005) is integrated into FA while the idea of inertia weight FA (Yafei et al. 2012) is also incorporated at the same time to improve the ability of FA to escape from local optimum.
CONVENTIONAL FIREFLY ALGORITHM
Two important components of FA are the light intensity as well as the attractiveness where the attractiveness is decided by the light intensity (brightness) of the fi refl ies.Since the attraction of a fi refl y is proportional to the light intensity as discovered by the nearby fi refl ies, the attractiveness function, β(r) can be defi ned as shown in Equation (1).
where β o is the attractiveness for r = 0, γ is the light absorption coeffi cient while r is the Cartesian distance between two fi refl ies as defi ned in Equation (2).( ) where i and j represents two fi refl ies at x i and x j .x i,k is the kth component of the spatial coordinate x i of i-th fi refl y.At the same time, the movement of the fi refl y i which is attracted by the brighter one (fi refl y j) is defi ned by Equation (3).
where the second term is due to the attraction and the third term is due to the randomization.In the third term, the randomization parameter alpha is used while rand is a random number generator uniformly distributed between zero and one.Meanwhile, alpha is a decreasing function with a decreasing factor, delta as illustrated in Equation (4).The fl owchart for FA is shown in Figure 1.
The FA have better performance if compared to other algorithms such as PSO and GA in terms of both effi ciency and success rate (Yang 2009).However, many researchers noted that the performance of FA becomes less satisfi ed when the dimension of the search space increased (Rahnamayan et al. 2006;Yang 2009).Opposition based learning could be one of the technique that can improve the performance of FA.
OPPOSITION-BASED LEARNING
Opposition-based learning was proposed by Tizhoosh (Tizhoosh 2005) and it has been applied and tested in some heuristic optimization algorithms such as genetic algorithm (Tizhoosh 2005), differential evolution algorithm (Rahnamayan et al. 2006), ant colony optimization (Malisia & Tizhoosh 2007) and gravitational search algorithm (Shaw et al. 2012) in order to enhance the performance of these algorithms.
Basically, optimization process such as FA always starts with an initial population (solutions) which is created randomly due to the absence of a priori information about the solutions.Then the algorithm will try to search for the best solutions.However, there can be a possibility that the initial guess for the solutions are far away from the actual solutions.The chance to start with the solutions closer to the optimal value can be increased by obtaining the opposite set of solutions simultaneously.Set of population that are closer to the optimal value will be chosen as initial population.The similar method can be adopted as well for each solution in the current population.The concept of opposite number is demonstrated below.Let x ∈ R be a real number within a defi ned interval where x ∈ [a,b].The opposite number x o can be defi ned as shown in Equation ( 5).
Similarly, this concept can be extended to the case with higher dimensions.Let D(x 1 , x 2 …., x m ) be a set of points in m dimensional search space where x m ∈ R. Then the points in the opposition set D o (x o1 , x o2 … x om ) can be defi ned as shown in Equation ( 6).
By using the definition for opposite number, the opposition based optimization can be developed as follows.Let D(x 1 , x 2 …, x m ) be the set of points in m dimensions search space which is the candidate solution for an optimization problem.According to opposition theorem, D o (x o1 , x o2 …, x om ) will be the opposition set for D(x 1 , x 2 …, x m ).Suppose that f( x In inertia weight based FA (Yafei et al. 2012), an inertia weight function, ω(t) as shown in Equation ( 7) was applied to the Equation (3) in the original version of FA described in Section 3.
where ω(t) is the inertia weight at t, ω max and ω min are the initial and fi nal values of the inertia weight respectively through the iteration process, t is the current iteration while Maxgeneration is the maximum number of iterations as defi ned in the initialization process of FA.
The movement of the fi refl y in updating its position in inertia weight based FA is shown in Equation ( 8).
( ) The main purpose of the inertia weight based function here is to improve the global exploration at the beginning of optimization process while improve the local exploration at the end of optimization process (Yafei et al. 2012).
EOFA
Opposition-based population initialization and oppositionbased steps for EOFA with the population size of n and dimension of m are shown in Figure 2.For the initialization, the initial population of fi refl ies, D is generated randomly, and then the opposite population, D o is calculated using Equation ( 6).The n fi ttest fi refl ies are chosen from D and D o to become the fi rst population in opposition-based optimization process.
In EOFA, each fi refl y updates the light intensity (fi tness value) using Equation ( 8) after the evaluation of the fi tness from the objective function.Then the fi refl ies will rank and update their positions.In EOFA, a jumping rate, Jr of 1 is used to decide if the opposite population is generated or not according to Equation ( 9).If Jr is greater than the generated random number, the opposite population will be generated and the next population will be the n fi ttest individuals chosen from current D and D o or else, the next population will remain as the current population, D generated from the update of fi refl y's position.The optimization process repeats until the criteria given is met, where in this case it is the maximum number of iteration.
, ( ) , generation of yes if Jr rand opposite population
no otherwise The opposition-based optimization enables the algorithm to search for the global optimum points in a faster way.The superior performance of EOFA in escaping from the local optimum points as well as the higher convergence rate is shown in the results.
RESULT AND DISCUSSION
Fifteen benchmark test functions for unconstrained global optimization (Hedar 2013) have been chosen in order to evaluate the performance of EOFA.The name, dimension size and the global minima of each test function is presented in Table 1.All simulations in this study are done using MATLAB software.Besides, comparison has been done with Gravitational Search Algorithm (GSA) (Rashedi et al. 2009) in order to show the superior performance of EOFA in solving most of the problems.In addition, FA (Yang 2008) is included in the comparison as well to show the improvement of conventional method by using EOFA.In this work, the population size, n is set to be 50 and the number of maximum iteration is taken as 1000 for all algorithms used in the comparison.For FA and EOFA, the values for β o , initial alpha, delta and gamma are defi ned as 1, 0.2, 0.97 and 1 respectively.For EOFA, the jumping rate, Jr = 1 while the inertia weight, ω max and ω min are 1.4 and 0.5 respectively.For GSA, the initial gravity constant, G o is set to be 100 while the best applying force, Kbest decreases monotonically from 100% to 2.5%.The parameter τ is set to be 8% of the total number of dimensions.
After 50 runs on each test functions, the performances (fi tness value) of each algorithm are reported in Table 2 and the summary of the performances is shown in Table 3.It can be seen that the performances for FA are always the worst compared to GSA and EOFA.This can be caused by premature convergence after trapping in a local optimum.On the other hand, it can be observed that EOFA has the best performance for most of the test functions except for F2 and F11 where GSA outperforms EOFA.It is known from the reviews that, different algorithms may perform better than others for different problems (Elbeltagi et al. 2005;Rashedi et al. 2010).Performances in terms of convergence between FA, GSA and EOFA for randomly chosen functions are illustrated from Figure 3 to Figure 6.It can be seen from the fi gures that FA always converge pre-maturely and give an unsatisfi ed result.Meanwhile, both GSA and EOFA are able to escape from local minima and give better results.However, EOFA has the higher convergence rate and give better results if compared to GSA.
FIGURE 1 .
FIGURE 1. Flowchart for Firefl y Algorithm (FA) ) is the function used to measure the performance of candidate solution, thus if f(D) is greater than or equal to f(D o ), then set of points in D can be replaced by D o or else D is maintained.ENHANCED OPPOSITION-BASED FIREFLY ALGORITHM (EOFA) INERTIA WEIGHT BASED FA
TABLE 1 .
Comparison of performance of FA, GSA and EOFA for F1 with dimension size 30 FIGURE 4. Comparison of performance of FA, GSA and EOFA for F10 with dimension size 30 FIGURE 5. Comparison of performance of FA, GSA and EOFA for F12 with dimension size 30 Comparison of performance of FA, GSA and EOFA for F13 with dimension size 30 Test functions for unconstrained global optimization
TABLE 2 .
Comparison of performances for GSA, FA and EOFA has presented a new method known as EOFA to enhance the performance of standard FA.It was mainly based on the combination of opposition learning theory and the inertia weight function.The performance and effectiveness of EOFA was extensively tested on 15 unconstrained global optimization functions and the results were compared with other existing method, namely FA and GSA.According to their performances, it can be concluded that EOFA has higher effectiveness among the aforementioned optimization technique in obtaining the global optimum value for the test functions. | 3,117.8 | 2014-01-01T00:00:00.000 | [
"Computer Science"
] |
Novel Approach to 2D DOA Estimation for Uniform Circular Arrays Using Convolutional Neural Networks
This paper presents a novel efficient high-resolution two-dimensional direction-of-arrival (2D DOA) estimation method for uniform circular arrays (UCA) using convolutional neural networks. The proposed 2D DOA neural network in the single source scenario consists of two levels. At the first level, a classification network is used to classify the observation region into two subregions (0 ° , 180 ° ) and (180 ° , 360 ° ) according to the azimuth angle degree. The second level consists of two parallel DOA networks, which correspond to the two subregions, respectively. The input of the 2D DOA neural network is the preprocessed UCA covariance matrix, and its outputs are the estimated elevation angle to be modified by postprocessing and the estimated azimuth angle. The purpose of the postprocessing is to enhance the proposed method’s robustness to the incident signal frequency. Moreover, in the inevitable array imperfections scenario, we also achieve 2D DOA estimation via transfer learning. Besides, although the proposed 2D DOA neural network can only process one source at a time, we adopt a simple strategy that enables the proposed method to estimate the 2D DOA of multiple sources in turn. Finally, comprehensive simulations demonstrate that the proposed method is effective in computation speed, accuracy, and robustness to the incident signal frequency and that transfer learning could significantly reduce the amount of required training data in the case of array imperfections.
Introduction
Direction-of-arrival (DOA) estimation, as one of the crucial technologies in antenna array systems, has been widely applied in many fields, such as sonar, seismology, radar, and mobile communication [1][2][3]. e multiple signal classification (MUSIC) [4,5] and estimation of signal parameter via rotational invariance technique (ESPRIT) [6,7] are two classic conventional one-dimensional (1D) DOA estimation algorithms. Despite achieving high accuracy, the MUSIC algorithm suffers from high computational complexity. e estimation accuracy of the ESPRIT algorithm is not high enough even though it avoids the spectral peak search and requires less computation. In practical DOA estimation, signals are usually located in the stereo space. To determine the specific position of signals, it is necessary to estimate angles in at least two directions. Compared with 1D DOA estimation, 2D DOA estimation can provide more accurate signal location information, which is more practical and applicable. Employing a uniform circular array (UCA) can extend MUSIC and ESPRIT to 2D, namely, 2D-MUSIC [8,9] and UCA-ESPRIT [10,11]. However, in the 2D DOA estimation scenario, the computational complexity of conventional 2D estimation algorithms is much higher. Also, conventional DOA algorithms are inconvenient in practical application by virtue of the inevitable array imperfections.
In recent years, neural network-based methods have been extensively developed for improving the operation speed and adaptability of DOA estimation. In order to achieve 1D DOA estimation, references [12][13][14][15] employ a uniform linear array (ULA). e former use the multilayer perceptron (MLP), and the latter support vector machine (SVM). Likewise, reference [16] achieves 1D-DOA estimation based on a Y-shaped array through using the radial basis function (RBF). To achieve 2D DOA estimation, both references [17,18] employ a rectangular array. e former uses RBF, and the latter MLP. Also, reference [19] achieves 2D DOA estimation based on four ULAs by using RBF. e MLP, SVM, and RBF methods require the input feature of neural networks to be a vector, which might cause the input feature dimension to be excessively high. To avoid the problem, references [12,[17][18][19] adopt the strategy of taking only the first row of the preprocessed array covariance matrix as the input feature vector while ignoring noise interference on each element of the covariance matrix. erefore, reference [14] adopts the strategy of averaging diagonal elements of the preprocessed array covariance matrix to slightly reduce noise interference. Because the array covariance matrix is a Hermitian matrix, references [13,15,16] use half of the preprocessed array covariance matrix elements as the input feature vector to fully consider noise. After the employment of the strategy, the increase of the input feature dimension, however, complicates neural networks [20]. e weight sharing concept of convolutional neural networks (CNN), which has achieved great success in computer vision, can overcome the high dimension of input features. References [21][22][23] use CNN to achieve 1D DOA estimation based on UCA and ULA, respectively, and obtain satisfactory results. e existing CNN-based DOA estimation methods are developed under the ideal condition without array imperfections. And the main thing is that they only focus on 1D and have limited ranges of practical applications. erefore, the extension of the dimension of DOA estimation based on CNN is in necessity, and the proposal of an effective solution also becomes necessary in the case of array imperfections.
Motivated by the aforementioned analysis, we present a novel 2D DOA estimation method for uniform circular arrays (UCA) using CNN. To do this, we propose a 2D DOA estimation model consisting of three modules: preprocessing, 2D DOA neural network, and postprocessing. e preprocessing provides appropriate input features for the 2D DOA neural network. e 2D DOA neural network outputs the estimated elevation and azimuth angle. e postprocessing modifies the elevation angle output by the 2D DOA neural network. Moreover, in the inevitable array imperfections scenario, we also achieve 2D DOA estimation through transfer learning [24]. Besides, although the proposed 2D DOA neural network can only process one source at a time, we adopt a simple strategy that enables the proposed method to estimate the 2D DOA of multiple sources in turn. Finally, some numerical examples demonstrate the superiority and effectiveness of the proposed method. e main contributions of this paper are as follows: (1) DOA estimation for UCA using CNN is extended from 1D to 2D; (2) the robustness of the proposed method to the incident signal frequency is effectively improved by simple postprocessing; (3) the feasibility of using transfer learning to reduce the amount of training data in DOA estimation is verified.
e remainder of the current study is organized as follows: Section 2 reviews the concept of CNN and defines the problem of interest; in Section 3, the 2D DOA estimation model using CNN for UCA is proposed; finally, Section 4 demonstrates simulation results, and Section 5 summarizes the conclusions and future work. e main notations used in the paper are listed in Table 1.
Other terms used in the study follow the general notations unless otherwise stated.
Preliminary and Problem Formulation
2.1. Convolutional Neural Network. Generally, A CNN structure consists of convolutional layers, pooling layers, and fully connected layers [25]. e convolutional layer, as an essential part of CNN, is composed of convolution kernels. A convolution kernel connects an input image with a feature map, which can be used as the input of the next layer. Figure 1 visually illustrates a convolution process. To ensure the input image and feature map are of equal size, the 8 × 8 input image is expanded to 10 × 10 by padding 0's around it. e 3 × 3 convolution kernel slides on the input image with the stride of 1. e value a j,k in the j-th row and kth column of the 8 × 8 feature map can be obtained, which can be expressed as where δ signifies a nonlinear activation function. b is the shared bias, and ω J− j+1, K− k+1 the shared weight in row J− j+1 and column K− k+1 of the convolution kernel. a J,K ′ denotes the value in the J-th row and K-th column of the input image after padding 0's. Generally, a convolution layer has more than one convolution kernel. e number of convolution kernels is equal to that of feature maps. e channel number of a convolution kernel is equal to that of an input image. Different regions of an input image can share the weights of convolution kernels, which is conducive to reducing network parameters and training networks. Compared with MLP, SVM, and RBF, CNN does not need to lengthen input features into vectors, which effectively overcomes the excessive dimension of input features.
Problem Formulation.
is study employs UCA to achieve 2D DOA estimation account for UCA's omnidirectional coverage and almost identical beam width [26]. A far-field stable electromagnetic signal s with known carrier frequency f, as Figure 2 depicts, impinges on a UCA of M identical omnidirectional elements. e elevation angle of the incident signal is θ (0°≤ θ ≤ 90°), and the azimuth angle is φ (0°≤ φ ≤ 360°). e radius of the UCA is R, and the angle between the m-th element and the first element is τ m (τ m � 2π(m − 1)/M). e phase reference point is located at the circle center O.
e M × 1 steering vector can be expressed as where c denotes the propagation speed. e single snapshot M × 1 observation vector can be expressed as where n is the n-th snapshot, and s(n) denotes the signal, and e(n) signifies the M × 1 uncorrelated additive Gaussian noise vector. e M × N observation matrix can be expressed as where s and e denote the 1 × N signal vector and M × N noise matrix, respectively, and N is the number of snapshots.
Apparently, x contains information about the power and angle of the incident signal, while it is interfered with by noise. e addressing issue here aims to establish the map from the observation matrix x to the elevation and azimuth angle by virtue of the proposed 2D DOA estimation model.
2D DOA Estimation Model
As shown in Figure 3, the proposed 2D DOA estimation model consists of three modules: the preprocessing, 2D DOA neural network, and postprocessing. e input of the model is the observation matrix x, and the outputs are the estimated elevation angle θ and azimuth angle φ. e function of the preprocessing is to feed appropriate input features into the 2D DOA neural network. e 2D DOA neural network, as the core of the model, is composed of the classification network, DOA 0-180 network, and DOA 180-360 network. According to the classification network response 0 or 1, the input feature is fed into the corresponding DOA 0-180 network or DOA 180-360 network. e function of the postprocessing is to modify the estimated elevation angle θ ′ , which is output by the 2D DOA neural network, so as to improve the robustness of the model to the incident signal frequency.
Preprocessing.
e input of the preprocessing module is the observation matrix x of UCA, and the output is the feature appropriate for the 2D DOA neural network.
From the observation matrix x, the estimated array output covariance matrix R xx can be expressed as where R ij and ϕ ij (ϕ ij � 2πfRsinθ[cos(φ − τ j ) − cos(φ − τ i )]/c) signify the element and phase, respectively. i signifies the i-th row, and j j-th column of R xx . σ 2 s and σ 2 e denote the signal and noise power, respectively. e signal-to-noise ratio (SNR) is defined as SNR � 10lg (σ 2 s /σ 2 e ). R xx is a Hermitian matrix. Equation (5) indicates that the main diagonal elements of R xx are real numbers, which are only determined by International Journal of Antennas and Propagation the signal power and noise power, while these elements do not contain any angle information. e modulus of the nonmain diagonal elements of R xx is determined by the signal power, and the phase is determined by the signal frequency, array radius, and incident angles.
If R xx is directly used as the input feature of the 2D DOA neural network, the input features may be inconsistent at the same incident angles due to the effect of signal and noise power. e inconsistency is inconvenient for training the neural network. First of all, the main diagonal elements of R xx are replaced with 0's to eliminate the signal and noise power. en the modulus of nonmain diagonal elements is normalized to eliminate further the signal power and the interference of noise. Next, the real part of the lower triangular elements and the imaginary part of the upper triangular elements are used to extract the phase. Finally, the preprocessing module outputs an M × M real matrix as the input feature of the 2D DOA neural network. We use N − R xx to signify the input feature, which is expressed as where
2D DOA Neural Network
Architecture. e input of the 2D DOA neural network is N − R xx from the preprocessing, and the outputs are the estimated elevation angle to be modified by postprocessing and the estimated azimuth angle. e 2D DOA neural network consists of two levels.
e first level is a classification network, and the second level is the parallel DOA 0-180 network and DOA 180-360 network. According to the azimuth angle degree, the first level aims to classify the observation region into two subregions, e first level response determines the corresponding DOA network of the second level, into which the input feature N − R xx will be fed.
e development of such a two-level neural network is based on the following considerations. When the azimuth angle approaches 0°or 360°, the input features will be very close, and the output may jump in the case of developing a neural network without classification. at will make the neural network challenging to fit and increase the difficulty of training. e DOA estimation accuracy near 0°or 360°, as reference [27] illustrates, is not high enough. Reference [21] adopts cunning algebraic postprocessing to overcome the problem, but it might inevitably lead to calculation errors. e proposed 2D-DOA neural network overcomes this problem well, avoiding the output jump and calculation error.
Furthermore, the optimal architecture of the 2D DOA neural network cannot be developed in one step. We tried numerous different network architectures and then selected the network with the least parameters on the premise of ensuring the estimation accuracy.
Classification Network Architecture.
In essence, the classification network can compress the dimension of input features from M × M to 1. To validate the feasibility, the range of the elevation angle θ is set from 0°to 90°(10°r esolution) and that of the azimuth angle φ from 0°to 180°a nd 180°to 360°(1°resolution), respectively. A total of 3,620 pieces of data are sampled. Principal component analysis (PCA) is performed after preprocessing these data. Figure 4 displays the projections of the second, third, and seventh principal components. Noticeably, a classification network can be developed to classify φ into (0°, 180°) and (180°, 360°). e classification network architecture, as Figure 5 depicts, consists of 5 layers. e first layer is the input one, where the input is N − R xx . e second and third layers are convolutional ones, where the number of convolution kernels is set to 16 and 8, respectively. e convolution kernel size is set to 3 × 3, and the stride is set to 1. e padding mode is set to "same." e fourth layer is a fully connected one with 16 neurons. e fifth layer is the output one with 1 neuron, and the output is 0 or 1. To be precise, 0 indicates the azimuth angle φ ∈ (0°, 180°), and 1 indicates φ ∈ (180°, 360°).
e first layer has no activation function. e second, third, and fourth layers adopt ReLU [28] as the activation function. e fifth layer adopts Sigmoid [29] as the activation function. e cost function adopts the cross-entropy, which can be expressed as where y i denotes the classification network response, and y i the ground truth label. m signifies the total number of samples, and L the number of layers of the network. ω In addition, when the azimuth angle is 0°, 180°, or 360°, the corresponding response may be 0 or 1 due to the interference of noise. In this case, N -R xx may be fed into the DOA 0-180 network or DOA 180-360 network, but the uncertainty of the response does not affect the estimation accuracy of the proposed model.
DOA Network Architecture.
In the case of φ ∈ (0°, 180°), the classification network outputs 0, and then N -R xx is fed into the DOA 0-180 network, while in the case of φ ∈ (180°, 360°), the classification network outputs 1, and then N -R xx is fed into the DOA 180-360 network. Figure 6 shows the DOA 0-180 network architecture, which consists of 12 layers. e first layer is the input one, where the input is N -R xx . e second to ninth layers are convolutional ones, where the number of convolution kernels is set to 64, 64, 32, 32, 16, 16, 8, and 8 in turn. e convolution kernel size is set to 3 × 3, and the stride is set to 1. e padding mode is set to "same." e tenth and eleventh layers are fully connected ones with 32 and 16 neurons, respectively. e twelfth layer is the output one with two neurons, which correspond to the estimated elevation angle θ ′ to be modified and the estimated azimuth angle φ.
e first layer has no activation function, while the other layers adopt ReLU as the activation function. e cost function adopts the mean squared error (MSE), which can be expressed as where ω denotes the network weights, and b the network biases. e DOA 180-360 network architecture is the same as the DOA 0-180 network architecture (no repetition to avoid redundancy).
2D DOA Neural Network Training.
e training sets, validation sets, and test sets used in this study are composed of simulated data. e simulation conditions are as follows: (1) the incident signal frequency is set to 500 MHz, and the number of snapshots 2,000; (2) the UCA element number is set to 8; (3) the UCA radius is set to 0.6 meter, which is equal to the electromagnetic wave wavelength of 500 MHz; (4) the signal amplitude is randomly generated to enhance the robustness of the neural network to the signal amplitude [30]. e network's initial settings are as follows: (1) the weights are initialized with samples drawn from a truncated normal distribution centered on 0 with standard deviation of sqrt(1/fan_in), called lecun_normal [31], and the biases are initialized to 0's; (2) Adam [32] is adopted in the backpropagation; (3) the minibatch size is set to 512. e classification network and DOA network are trained independently. e training samples are discretized. We conducted numerous experiments and found that the denser the sampling, the higher the estimation accuracy. However, when the sampling density reaches a certain level, the estimation accuracy cannot be further improved. erefore, the number of training samples should be minimized on the premise of ensuring the estimation accuracy. In addition, excessively small elevation angles would cause each element's phase in the steering vector to approach zero, and the interference of noise may make the azimuth angle error larger. However, in this case, the signal direction is almost perpendicular to the z-axis of the Cartesian coordinate system in Figure 2, regardless of the azimuth angle. Furthermore, if the training sets contain excessively small elevation angles, the interference of noise may lead to the data that the input feature does not match the label, which will affect the convergence of the network. Consequently, the inception of elevation angles in the training sets and validation sets is not 0°but 1°. e corresponding azimuth angle label is 0 and 1. e data is generated with the signal-to-noise ratio (SNR) of − 10 dB and 20 dB, respectively. A total of 2 × 22 × 90 � 3,960 pieces of data constitute the training set. e validation set data is also 3,960. Its sampling settings are the same as those of the training set, but it is generated independently to ensure no International Journal of Antennas and Propagation duplicate data. e learning rate is set to 0.001, and the epoch 800. e regularization parameter and early stopping are used to prevent overfitting. Figure 7 illustrates the variation of cost and accuracy in the training process. e accuracy of the training set and validation set cannot reach 100% because of the uncertain response when the azimuth angle is 0°, 180°, or 360°. However, the uncertain response does not affect 2D DOA estimation accuracy. Figure 8 shows the variation of cost in the training process of the DOA 0-180 network. e sampling settings of elevation angles and azimuth angles are {1°:1°: 90°} and {0°: 2°:180°}, respectively. e data is generated with the SNR of − 5 dB and 20 dB, respectively. A total of 2 × 90 × 91 � 16,380 pieces of data constitute the training set. e validation set data is also 16,380. Its composition is the same as that of the training set, but it is generated independently to ensure no duplicate data. e learning rate is set to 0.001, 0.0001, 0.00005, and 0.00001 in turn, and the epoch for each learning rate is set to 400. e strategy of early stopping is adopted to prevent overfitting. e training set, validation set, and training process of the DOA 180-360 network are similar to those of the DOA 0-180 network (no repetition to avoid redundancy).
Postprocessing.
e input of the postprocessing module is the estimated elevation angle θ ′ output by the 2D-DOA neural network, and the output is the modified estimated elevation angle θ. e purpose is to enhance the robustness of the neural network to the incident signal frequency. If the actual signal frequency is different from the training set signal frequency, the actual data and the training set data will not satisfy the same distribution, resulting in inaccurate DOA estimation. Only a few references take into account the robustness to the incident signal frequency. Reference [18] selects 17 frequency points in the range of 2.41 GHz to 2.47 GHz with a step of 3.6 MHz, and the frequency is taken as an input feature of the network. However, the training set would be expanded 17 times. Reference [21] enhances the frequency robustness of the network by randomly sampling from 100 MHz to 500 MHz in the training set. However, random sampling might cause the same input feature to correspond to multiple different incident angles. e current study utilizes simple algebraic postprocessing to effectively improve the frequency robustness of the 2D-DOA estimation model.
f and θ are set to represent the actual incident signal frequency and elevation angle, respectively. f′ and θ ′ represent the corresponding training set parameters. e four parameters satisfy In this case the 2D-DOA neural network will estimate according to the input feature corresponding to θ ′ rather than θ, based on the analysis of each element phase in equation (5). Consequently, the estimated value θ ′ of θ ′ should be modified. e modified elevation angle estimate is expressed as If f > f ′ , there may not be θ ′ that can satisfy equation (10) in the training set.
Multisource Scenario.
e 2D DOA estimation model proposed above is only suitable for single source estimation, which will be greatly limited in practice. eoretically, a neural network suitable for multisource scenarios can be trained as long as the training set is extended. However, because the required data will increase exponentially with the number of the sources, the approach may be difficult to implement. In this subsection, we adopt a simple strategy to achieve multisource 2D DOA estimation based on the proposed 2D DOA estimation model. e array receiving model is as described in Section 2.2. If L signals impinge on the UCA, equation (5) should be revised as International Journal of Antennas and Propagation where σ 2 sl and ϕ ijl signify the l-th signal power and phase, respectively. θ l and φ l denote the elevation and azimuth angle of the l-th signal. First of all, the main diagonal elements are averaged to eliminate the interference of noise, and then the noise power is subtracted to obtain the sum of signal powers. en the main diagonal elements are replaced with 0's, and the modulus of nonmain diagonal elements is normalized by the sum of signal powers. Next, the real part of the lower triangular elements and the imaginary part of the upper triangular elements are fed into the 2D DOA International Journal of Antennas and Propagation neural network. Finally, the 2D DOA estimation of the signal corresponding to the maximum eigenvalue of R xx is obtained.
In the next step, the component of the signal is removed from R xx to obtain a new input feature, which is fed into the 2D DOA neural network again to obtain the 2D DOA estimation of the signal corresponding to the second largest eigenvalue of R xx . By analogy, the 2D DOA estimation of the L signals can be obtained sequentially. In the multisource scenario, equation (6) should be revised, and the input feature N l − R xx of the l-th signal can be expressed as where
Simulation Results
In this section, first of all, the classification network response is presented. en, Section 4.2 illustrates the 2D-DOA estimation model response, analyzing the effect of the SNR and number of snapshots, the robustness to the signal frequency, and the processing time. Besides, transfer learning is argued that it could be employed to reduce the amount of training set data in the case of array imperfections. Finally, we analyze the 2D DOA estimation in the multisource scenario. Unless otherwise stated, the simulation conditions are as described in Section 3.3. Figure 9 shows the classification network response to the incident signals with different frequencies. 50 points are randomly sampled in the range of SNR ∈ {− 10 dB : 0.01 dB : 20 dB}, θ ∈ {1°: 0.01°: 90°}, and φ ∈ {0°: 0.01°: 360°} for each frequency.
Performance of the Classification Network.
Under different frequencies, the classification network response is 0 in the case of φ ∈ (0°, 180°), while the response is 1 in the case of φ ∈ (180°, 360°). Obviously, the classification network can well classify the observation region into two subregions. e Pearson product-moment correlation coefficients (r ppm ) are 0.9999. Furthermore, in order to verify the necessity of the classification network, we also train a slightly more parameterized DOA network without classification as a comparison. e root mean square periodic error (RMSPE) is defined as the evaluation criterion, which is expressed as
Performance of the 2D DOA Estimation
Compared with RMSE, RMSPE might better evaluate the 2D DOA estimation accuracy. It is exemplified in a sample: if θ � 50°, φ � 359°, θ � 50°, and φ � 0.1°, then the RMSPE is about 0.778°, while the RMSE is about 253.781°. e SNR is set to 5 dB. e elevation angle is set to 45°, and the azimuth angle is set to {0°: 20°: 360°} in turn. For each azimuth angle, 500 Monte Carlo runs are performed. Figure 11 shows that the proposed method is superior to the method without classification, especially when the horizontal angle is 0°or 360°.
Effect of the SNR and Snapshot Number.
To highlight the superiority of the proposed method, we now compare the proposed method with 2D-MUSIC, UCA-ESPRIT, and RBF. Both 2D-MUSIC and UCA-ESPRIT are classic conventional 2D-DOA estimation methods. RBF networks can approximate any nonlinear function and have an excellent convergence rate [17,19]. erefore, the three methods are selected as the baseline methods.
In this subsection and the remainder of this paper, the simulation settings of the three baseline methods are as follows. For the accuracy of 2D-MUSIC and search speed, the first search step is set to 1°, and then the second search is performed with a step of 0.01°within ±1°of the first step search result. For the accuracy of UCA-ESPRIT, the array element number is set to 19, and the maximum mode order 6. ree RBF networks replace the classification network, DOA 0-180 network, and DOA 180-360 network, respectively. e training sets and validation sets of the three RBF networks are the same as those of the proposed method. e spread of the training sets is searched in the range of [1,10], and the desired MSE is searched in the range of [0, 5] to obtain the optimal performance of the validation sets. Figure 12 reveals the relationship between the RMSPE and SNR of each method. SNR ∈ {− 10 dB : 1 dB : 20 dB}. 500 points are randomly sampled in the range of θ ∈ {1°: 0.01°: 90°} and φ ∈ {0°: 0.01°: 360°} for each SNR. Figure 13 reveals the relationship between the RMSPE and snapshot number of each method with the SNR of 5 dB. N ∈ {100 : 100 : 2000}. 500 points are randomly sampled in the range of θ ∈ {1°: 0.01°: 90°} and φ ∈ {0°: 0.01°: 360°} for each snapshot number. Figures 12 and 13 illustrate that the performance of the proposed method is even slightly better than that of the 2D-MUSIC algorithm. Although the 2D-MUSIC algorithm breaks through the Rayleigh limit and approaches the Cramer Rao bound, the estimates of 2D-MUSIC are still a discrete value related to the search step. However, the outputs of the proposed method are continuous. e proposed method based on CNN is also superior to RBF.
International Journal of Antennas and Propagation
Although RBF has a better ability of fitting function and fast convergence, its generalization ability in 2D DOA estimation is inferior to CNN. Figure 14 illustrates the relationship between the RMSPE and signal frequency of each method (including the proposed method without postprocessing) with the SNR of 10 dB. f ∈ {100 MHz : 50 MHz : 500 MHz}. 500 points are randomly sampled in the range of θ ∈ {1°: 0.01°: 90°} and φ ∈ {0°: 0.01°: 360°} for each frequency.
Robustness to the Signal Frequency.
2D-MUSIC, as Figure 14 shows, is superior to UCA-ESPRIT in the frequency range from 250 MHz to 500 MHz, while it is inferior to UCA-ESPRIT in the frequency range from 100 MHz to 200 MHz. It results from the fact that when the array radius is fixed, as the frequency decreases, it is equivalent to reducing the array aperture for 2D-MUSIC, which affects the sharpness of spectral peaks [34]. RBF is not robust enough to the frequency because of the poor performance of 2D DOA estimation. e error of the proposed method without postprocessing is excessively large. RMSPE has reached 7.40°when the incident signal frequency deviates from the training set frequency by only 10%, i.e., 450 MHz, resulting in the failure to normally estimate 2D DOA. e proposed method is superior to 2D-MUSIC algorithm in the range of 400 MHz to 500 MHz and inferior to 2D-MUSIC within the range of 100 MHz to 350 MHz, because the lower the frequency, the larger the (f ′ /f) in equation (10), which is equivalent to amplifying the estimation error of the proposed method. However, despite 80% frequency deviation of the training set, the RMSPE of the proposed method is less than 1°. erefore, the proposed method has a certain robustness.
Processing Time.
is subsection highlights the speed advantage of the proposed methods by comparing the processing time of each method. e computations are executed on a PC with Intel Core i7-9700K CPU and 16 GB DDR4 RAM. e processing time of the proposed method and RBF includes preprocessing, network running, and postprocessing time. Table 2 shows the results from 500 Monte Carlo runs.
Array Imperfections and Transfer
Learning. In this subsection, we consider array imperfections unavoidable in practical applications due to antenna manufacturing, equipment environment, etc. Typical array imperfections include gain and phase inconsistence, sensor position error, and intersensor mutual coupling. Referring to reference [13], in this paper, the gain biases are set as e phase biases are set as e mutual coupling coefficient vector is set as In equations (14)- (17), ρ ∈ {0 : 0.1 : 1}, and it is introduced to control the strength of the imperfections; for 2D-MUSIC, RBF, and the proposed method, a � 4 and b � 3; for UCA-ESPRIT, a � b � 9. In light of the array imperfections, equation (2) should be revised as c � I a+b + toeplitz e mc × I a+b + diag e gain × diag e e phase .
(18) Figure 15 illustrates the relationship between the RMSPE and ρ of each method with the SNR of 10 dB. 500 points are randomly sampled in the range of θ ∈ {1°: 0.01°: 90°} and φ ∈ {0°: 0.01°: 360°} for each ρ. When ρ < 0.5, the proposed method performs best. However, with the increase of ρ, each method's performance is degraded, and even these methods fail to estimate 2D DOA. Consequently, it is necessary to calibrate array imperfections.
Generally, conventional calibration methods lack adaptability and are challenging to model accurately [35][36][37]. CNN-based methods are data-driven, and therefore they do not require prior assumptions about array imperfections. Furthermore, to reduce the required training data in practical application, this study adopts transfer learning to address the array imperfection problem. Transfer learning aims at improving the performance of target learners on target domains by transferring the knowledge contained in different but related source domains. In this way, the dependence on a large amount of target domain data can be reduced for constructing target learners [24]. All the weights and biases of the 2D-DOA neural network trained in Section 3.3 are not fixed, and the strategy of early stopping is used to prevent the possible overfitting caused by transfer learning.
According to the training set sampling setting of the classification network and DOA network in Section 3.3, the corresponding training sets are regenerated, respectively, in the case of ρ � 1. Table 3 shows three trained two-level neural networks, with the same structure as the 2D DOA neural network proposed in Section 3.2. Figure 16(a) displays the relationship between the RMSPE and SNR of these three neural networks and 2D-MUSIC when ρ � 1. Figure 16 eir similar performance indicates that the amount of training data can be reduced through transfer learning in practical application.
In addition, we also studied the case that the data usage percentage is less than 20%. With the reduction of data volume, the accuracy of the network trained by transfer learning gradually decreases, but it is always higher than that of retraining mode with the same amount of training data. When the data usage percentage is more than 20%, the transfer learning mode with the increase of data volume can achieve the accuracy of R-Network-100% or T-Network-20% with fewer epochs.
Performance in the Multisource Scenario.
In this subsection, we consider the multisource scenario. M eigenvalues can be obtained by eigenvalue decomposition of the UCA International Journal of Antennas and Propagation covariance matrix of M elements. In order to attain the input characteristic N l − R xx of the 2D DOA neural network, i.e., equation (12), at least one eigenvalue corresponding to the noise power must be guaranteed. erefore, the maximum number of targets that can be estimated by the proposed method is M − 1. However, similar to the MUSIC algorithm, when the number of targets approaches the theoretical maximum M − 1, false negatives or false positives may occur. Assume that five stationary signals impinge the UCA in the range of 1°≤ θ ≤ 90°and 0°≤ φ ≤ 360°. 500 independent Monte Carlo experiments are performed in case of array imperfections (ρ �1). T-Network-20% runs five times in each experiment. After postprocessing, the 2D DOA estimation of the five signals can be output in turn. For brevity, Figure 17 shows only the first and third outputs of each experiment.
Conclusions
In this paper, we have presented the 2D-DOA estimation model composed of three modules: the preprocessing, 2D DOA neural network, and postprocessing. e preprocessing effectively eliminates the signal power as well as the interference of noise, providing appropriate input features for the 2D DOA neural network. e 2D DOA neural network consists of the classification network, DOA 0-180 network, and DOA 180-360 network. e classification network divides the observation region into two parts according to the azimuth angle, which avoids the possible jump of the output when the azimuth angle is near 0°or 360°. e parallel DOA 0-180 network and DOA 180-360 network output the estimated elevation angle to be modified and the estimated azimuth angle. e postprocessing modifies the estimated elevation angle to enhance the robustness to the signal frequency. Besides, the feasibility of applying transfer learning to overcome array imperfections is also validated. e experiments reached the following conclusions: (1) the proposed method is superior to 2D-MUSIC, UCA-ES-PRIT, and RBF in the accuracy and operation speed, and it International Journal of Antennas and Propagation has certain robustness to the incident signal frequency; (2) the CNN-based approach can address the array imperfections problem, and the amount of training data can be reduced by means of transfer learning.
We should solve the following problems that still remain: (1) the implementation of transfer learning to train neural networks in the real environment; (2) the study of the robustness of neural networks when the actual signal frequency is greater than the training set signal frequency; (3) further improving the 2D DOA estimation method for multisource scenarios.
Data Availability
e simulation data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
e authors declare that they have no conflicts of interest. | 8,755.6 | 2021-07-08T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Absorptive pinhole collimators for ballistic Dirac fermions in graphene
Ballistic electrons in solids can have mean free paths far larger than the smallest features patterned by lithography. This has allowed development and study of solid-state electron-optical devices such as beam splitters and quantum point contacts, which have informed our understanding of electron flow and interactions. Recently, high-mobility graphene has emerged as an ideal two-dimensional semimetal that hosts unique chiral electron-optical effects due to its honeycomb crystalline lattice. However, this chiral transport prevents the simple use of electrostatic gates to define electron-optical devices in graphene. Here we present a method of creating highly collimated electron beams in graphene based on collinear pairs of slits, with absorptive sidewalls between the slits. By this method, we achieve beams with angular width 18° or narrower, and transmission matching classical ballistic predictions.
I n the absence of scattering, electrons propagate freely as coherent waves, analogous to light in free space. Capitalizing on this behaviour, electron-optical elements including beam splitters 1,2 , quantum point contacts 3,4 , lenses 5 , wave guides 6,7 and mirrors 8 have been fashioned in solid-state two-dimensional electron systems 9 (2DESs). The 2DES in graphene hosts chiral electrons [10][11][12][13][14] , with unique refractive properties and associated novel opportunities for electron optics 12,13,15,16 . Until recently, disorder-induced scattering has limited implementation of these ideas. Encapsulation of graphene in hexagonal boron nitride (hBN) 17,18 now enables striking manifestations of refractive ballistic transport 15 including quasiparticle dynamics in superlattices 19 , snake states 20 and Veselago lenses 21 . A collimated electron source could be the final piece needed to unlock the potential of electron refraction in graphene, enabling diverse applications such as ballistic transistors 22,23 , flying qubits 24 and electron interferometers 25 . In conventional semiconductor 2DESs, electrons can be collimated by quantum point contacts 3 to form narrow beams. In graphene, however, electrons are not readily confined by gates and alternative proposals [26][27][28] for collimation in graphene have yet to be realized.
Here we demonstrate experimentally and validate computationally an electron collimator based on a collinear pair of pinhole slits in hBN-encapsulated graphene. We show that grounded edge contacts 17 -analogous to peripheral surfaces painted black in an optical system-can efficiently remove stray electron trajectories that do not directly traverse the two pinholes, leaving a geometrically defined collimated beams.
Results
Collimator design and function. An absorptive pinhole collimator is constructed from an etched graphene heterostructure with a two-chamber geometry wherein independent electrodes make ohmic contact to each chamber (Fig. 1a). The contact to the bottom chamber (red, Fig. 1a) serves as the source for charge carriers, while the contact to the top chamber (black, Fig. 1a) acts as an absorptive filter. To realize a collimating configuration, the filter contact (F) is grounded and the source contact (S) is current biased; charge carriers are isotropically injected from the source, but only those trajectories that pass through both pinhole apertures reach the graphene bulk. Applying a uniform magnetic field can steer the collimated beam. For an uncollimated configuration, the filter and source contacts are electrically shorted.
Our device consists of hBN-encapsulated graphene etched into a Hall-bar-like geometry with the voltage probes replaced by collimating contacts (Fig. 1b). The hBN layers are both d BN B80 nm thick and the device is assembled on d ox ¼ 300 nm SiO 2 atop a degenerately doped silicon substrate used as a back gate to tune charge carrier density n. To test the collimation behaviour of an individual injector in the ballistic regime, we perform a non-local magnetotransport measurement, injecting from one collimator and probing trajectories that reach across the width of the device (W dev ¼ 2 mm) in the collimated and uncollimated configurations (green and blue respectively, Fig. 1c). We inject from the lower right collimator (labelled S4,F4) throughout this Article and, in this case, measure the voltage of the upper right collimator (labelled S3,F3) relative to a reference (F1). In the presence of a B-field, electron trajectories that pass from the injector to collector flow from the injector at an angle y ¼ sin À 1qBWdev 2' ffiffiffiffi np p , where q is the quasiparticle charge. From this, we find that the angular full width at half maximum (FWHM) is 70°when injecting in the uncollimated configuration and 18°w hen injecting in the collimated configuration. For an uncollimated source 3 , the angular conductance is expected to go as G y is the flux density at the Fermi level and w 0 cos(y) is the projected width of the contact. The collector has an acceptance angle of w0 W dev cosðyÞ, leading to an expected cos 2 (y) distribution (y FWHM ¼ 90°). The 70°FWHM for our uncollimated data is in reasonable agreement with this expectation given that the reference contact collects more electrons at higher B-fields and thus suppresses the signal at high angles.
In our collimators, the flux density at the Fermi level is identical to that in a single slit, but the projected width is geometrically defined by the pinhole width w 0 and pinhole separation L 0 . For small angles |y|otan À 1 w 0 /L 0 , the projected width w(y) ¼ cos(y)[w 0 À L 0 |tan(y)|] (left, Fig. 1d). At larger angles, no carriers should transmit, yielding: Convolving over the acceptance angle of the collector (see Supplementary Note 1 for details), we calculate the angular conductance distribution (middle, Fig. 1d) for both the uncollimated case (blue) and the collimated case (green) with w 0 ¼ 300 nm and L 0 ¼ 850 nm, consistent with the fabricated collimator dimensions. The FWHM of the collimator emission is 22°for theory and 18°for experiment (right, Fig. 1d), showing that our injectors efficiently filter wide-angle trajectories and transmit narrowly collimated beams.
Conductance of collimators.
Having established that the angular distribution of injected charge carriers is well described by classical ballistic theory, we now measure our collimators' conductance to determine how efficiently electrons traverse the pinholes. For this, we bias the injector in the collimating configuration (F4 grounded) and measure the current reaching all remaining electrodes as a function of gate voltage (Fig. 2a). The conductance of the collimator tunes sublinearly with n: G $ ffiffiffiffiffiffiffiffiffiffiffiffi ffi n À n 0 p (dotted line, Fig. 2a). This qualitatively agrees with ballistic expectations G $ ffiffiffi n p ð Þ : integrating equation (1) over all angles, we expect: The small offset n 0 B1.6 Â 10 11 cm À 2 in our measurement appears to result from diffraction by collimator slits (Fig. 2a, see Supplementary Notes 3 and 4 for details). Comparing equation (2), with the fit in Fig. 2a assuming ncn 0 ), indicates a conductance that is 35% of expectations. This is a lower bound for the transmission probability, because the collimating filter (F4) can reabsorb electrons that have diffusely scattered off of device edges.
To understand the impact of diffuse scattering and better estimate the transmission probability, we measure the current collected at specific detectors as a function of B-field. Having sourced I source ¼ 50 nAo k B T eRsource (see Supplementary Note 6 for details), we collect current in detectors collinear with (red and blue, Fig. 2b) and adjacent to (black, Fig. 2b) the injector. Current collected at the collinear detector with a wide acceptance angle (red) peaks near B ¼ 0, as the collimated beam travels straight across the device. The apparent background current is B3-5% of I source . At BB120 mT, ballistic cyclotron orbits instead reach the adjacent detector, leading to a prominent peak in current detected at S1 (black) with F1 grounded. Coincident with this peak, the diffuse background of the collinear detector dips, as ballistic trajectories are consumed by the adjacent detector, reducing the number of electrons that eventually find their way into the collinear detector.
In light of the non-trivial diffuse background, we measure current with a narrow acceptance angle at the collector, rejecting most scattered electrons and thus better determining the transmission probability of the collimator. The resulting doubly collimated beam (blue) has a FWHM of 8.5°. Together, all these collinear apertures act as a single collimator with L 0 ¼ 3,750 nm (the separation between the farthest-apart apertures). All of the injected current passes through the first aperture, so the fractional The measured conductance (blue) scales as ffiffiffiffiffiffiffiffiffiffiffiffi ffi n À n 0 p (black dotted line), qualitatively agreeing with ballistic conduction of bulk graphene. Numerical solutions to the 2D Dirac equation (red dots) account well for low-density effects associated with diffraction. (b) Conductance measurements through angularly sensitive collectors. Current is collected at F3 þ S3 (red), S3 (blue) and S1 (black) with all remaining contacts grounded. F3 þ S3 has a broad background due to diffuse edge scattering and imperfect ohmic contacts. S3 has a FWHM of 8.5°due to double collimation and has minimal diffuse background. The peak height of S3 indicates nearly perfect ballistic transmission. current collected should be G w0¼300 nm; L0¼3;750 nm ð Þ Gðw0¼300 nm; L0¼0Þ ¼ 0:040. The maximum of the doubly collimated peak is 0.056 (Fig. 2b). Subtracting a background of 0.005-0.015 (see Supplementary Note 5 for details) suggests transmission through the full path is 1.18±0.12 times the expected value. The 20% beamwidth narrowing observed above for a single collimator (18°versus 22°expected) may indicate modest focusing, which would be consistent with slightly enhanced transmission through the double collimator. The excellent quantitative agreement shows that charge carriers transmit nearly perfectly from slit to slit. By demonstrating not only narrow beams but also high transmission probabilities, our measurements show that absorptive pin-hole filtering could produce low-noise, coherent, collimated beams of electrons in 2DESs that cannot be depleted by electrostatic gating.
Transverse electron focusing. Having experimentally demonstrated that absorptive pinhole collimators can controllably emit electron beams in hBN-encapsulated graphene heterostructures, we illustrate our technology's utility by aiming a beam at the edges of our graphene device to learn about the low-energy scattering behaviour of etched edges in these heterostructures. We perform three simultaneous non-local resistance measurements (Fig. 3a) to probe the specularity of reflections off various edges of the device. In Fig. 3b Iin as a function of B-field and electron density. In both the electron-doped and hole-doped regimes, a peak near B ¼ 0 corresponds to ballistic quasiparticles being collected by the collinear contact in the absence of magnetic deflection. Peaks in R 1 also appear at higher fields, primarily in the hole-doped regime (no0). For reference, we plot contours corresponding to cyclotron radius r ¼ W/2. Any features outside the parabolas (roW/2) cannot correspond to direct ballistic quasiparticle transport across the width of the device and must involve scattering. These data imply that holes undergo multiple reflections at high B-fields, suggesting that the edges may scatter more specularly when hole-doped than when electron-doped.
To directly probe the specularity of reflections in our device, we perform a collimated transverse-electron focusing (TEF) measurement 8,29 . Probe V 3 at the lower left detector is even more sensitive than traditional TEF measurements to scattering that modifies ballistic trajectories, as here the injector and detector have narrow emission and acceptance angles, respectively.
Iin as a function of electron density and B-field has several distinct features associated with specific cyclotron radii (Fig. 3c), in particular for hole doping. At r 1 ¼ 1.25 mm, there is a sharp peak with a FWHM of B300 nm in both the hole and electron regimes. Although a conventional TEF peak would occur at 3 mm is the lateral separation of injector and detector, our measured peak corresponds to slightly greater cyclotron radius. This is expected for our collimator geometry: we illustrate the expected r 1 trajectory in Fig. 3a and plot its corresponding contour in Fig. 3c, indicating excellent agreement with our measurement (see also Supplementary Note 2 for calculation). Trajectories at r 1 are insensitive to edge scattering, whereas at smaller r (larger B) additional peaks imply specular reflection. In the electron-doped regime the presence of a Radius of curvature −1 (μm −1 ) Magnetic field (mT) . The remainder of the electron-doped regime (n40) is nearly featureless, whereas the hole-doped regime (no0) has several auxiliary peaks. Dotted lines correspond to cyclotron orbits with radius equal to the half of device width (r ¼ W dev /2); features outside the two parabolas cannot correspond to direct ballistic trajectories between injector and collector. (c) Collimated transverse electron focusing. A sharp feature at r 1 ¼ 1.25 mm corresponds to trajectories that pass through four pinholes. Features at higher magnetic field must involve specular reflections off of the device edge. There is no such feature on the electron side, whereas there is a noticeable band on the hole side. (d) Comparison of experimental data with classical ballistic simulation. Experimental data (blue) are taken at n ¼ 2.7 Â 10 12 cm À 2 and simulation (red) assumes fully diffuse edge scattering and 67% ohmic transmission. prominent peak at r 1 with no appreciable secondary peak suggests completely diffuse scattering, whereas in the hole-doped regime the presence of a significant secondary peak suggests appreciable specular reflection.
To validate this understanding and quantitatively determine the degree of specularity, we next carry out device-scale simulations of ballistic trajectories-treating electrons as classical point-like particles is warranted given that the Fermi wavelength l f is in most cases much smaller than geometric features in our device and given that most trajectories are captured in ohmic contacts before having a chance to interfere. Modelling the fabricated device geometry including all ohmic contacts, we simulate electron emission from the injector, allowing for reflection off edges and interaction with floating or grounded ohmics. With two free parameters, transmission of ohmics p trans and probability of diffuse edge scattering p diffuse , we simulate the measurement configuration shown in Fig. 3a-c (see Supplementary Note 7 for simulation details). The striking similarities between simulation (p trans ¼ 67% and p scatter ¼ 100%) and measurement suggest that edge scattering is diffuse in our device in the electron-doped regime (Fig. 3d, see Supplementary Movie 1 for visualizing a B-sweep). Similar analysis yields p trans ¼ 10% and p scatter ¼ 67% in the hole-doped regime, quantitatively demonstrating significant electron-hole asymmetry in both ohmic contact properties and specularity of edge scattering in our device. This asymmetry may occur due to finite edge doping that induces smooth electrostatic edge barriers 30 in the p-doped regime.
Discussion
The strong agreement between theory and experiment for both individual collimators and our entire collimating device indicates that absorptive collimation in high-mobility graphene devices can be predictably and robustly applied in a variety of geometries, opening the door for scientific and technological use of narrow electron beams in 2DESs. For example, Klein tunnelling 12,13,31 and Andreev reflections 32 are highly angularly dependent phenomena whose experimental signatures are obscured in typical transport experiments. In such cases, collimation-based measurements will illuminate the physics by quantitatively testing transmission and reflection at specific angles rather than integrated over a range of angles as in past experiments. In addition, novel technologies such as ballistic magnetometers may be built on the sharp magnetotransport features we achieve. Collimated sources are an important addition to the growing toolbox of electron-optical elements in ballistic graphene devices that enable a new class of transport measurements.
Methods
Sample fabrication. Flakes of graphene (from highly oriented pyrolytic graphite, Momentive Performance Materials ZYA grade) and of hBN (from single crystals grown by high-pressure synthesis) were prepared 17 by exfoliation (3M Scotch 600 Transparent Tape) under ambient conditions (35-60% relative humidity) on n-doped silicon wafers with 90 nm thermal oxide (WRS Materials). The heterostructure was assembled by a top-down dry pick-up technique 19 . The completed heterostructure was deposited on a chip of n þþ -doped silicon with 300 nm thermal oxide (WRS Materials). Polymer residue from the transfer process was removed by annealing the sample in a tube furnace for 1 h at 500°C under continuous flow of oxygen (50 s.c.c.m.) and argon (500 s.c.c.m.) 33 . Device patterns were defined by e-beam lithography and reactive ion etching 19 . Ohmic contacts were established to the device using electron-beam evaporated Cr/Au electrodes to the exposed graphene edge 17 .
Measurement. All measurements were performed at 1.6 K in the vapour space of a He flow cryostat with a superconducting magnet. Lock-ins (Stanford Research Systems SR830) at 17.76 Hz were used in all measurements; voltages were measured with Stanford Research Systems SR 560 voltage preamplifiers and currents were measured with Ithaco 1,211 current preamplifiers. The charge density n was calculated from Shubnikov-de-Haas oscillations n Vg ¼ 5:51Â10 10 cm À2 V À1 , in good agreement with the expected geometric capacitance.
Data availability. The data sets generated during and/or analysed during the current study are available from the corresponding authors on reasonable request. | 3,922.8 | 2016-11-16T00:00:00.000 | [
"Physics"
] |
Robotic Cultivation of Pome Fruit: A Benchmark Study of Manipulation Tools—From Research to Industrial Standards
: In pome fruit cultivation, apples and pears need to be handled in various processes such as harvesting and sorting. Currently, most processes require a vast amount of manual labor. Combined with a structural shortage of seasonal workers, innovation in this field is crucial. Automated processes could provide a solution wherein the search for an appropriate manipulation tool is essential. Aside from several grippers, customized for harvesting by various researchers, the industry also provides a wide variety of standardized manipulation tools. This paper benchmarks a wide set of the most relevant gripping principles, primarily based on their ability to successfully handle fruit, without causing damage. In addition, energy consumption and general feasibility are evaluated as well. The performed study showed that the customized foam gripper scores the overall best for all test scenarios at the cost of being the least energy efficient. Furthermore, most other gripping tools excelled at certain specific tasks rather than being generally deployable. Impactive grippers are better suited for harvesting at low energy consumption, while astrictive grippers are more suited for sorting tasks constricted by the available space. The results also showed that commercially available soft grippers are not always capable of handling sensitive fruits such as pears without causing damage.
Introduction
During the cultivation of pome fruit, apples and pears need to be harvested, sorted, and handled in various processes. Currently, the majority of these processes require a vast amount of manual labor. Combined with the structural and further increasing shortage of seasonal workers, innovation in this field is crucial. Transition to automated processes could provide a solution; however, a challenging aspect of this automation is the manipulation of the fruit itself. Hence, the search for an appropriate gripping tool for this application will be essential. To deliver high-quality fruit to the consumer, both harvesting and processing pome fruit demand a considerable amount of care, as they are fragile products. However, besides this soft touch, a firm grip is still necessary to complete tasks such as harvesting. Consequently, the development of a usable gripper for fruit picking involves a delicate balance between gripping success and not harming the fruit.
Verbiest et al. [1] reviewed the current state of the art regarding the automation of the entire cultivation chain of pome fruit, presenting an overview of the advancements made in the last two decades. Most research focused on the development of robotic harvesting machines, whereby the manipulator and its gripping tool was only one element of the total development, being investigated to a lesser extent. Considering the challenges of the manipulator, the majority of these projects report issues regarding the speed and cost of the used robot. Instead of using a six degrees of freedom (6DOF) robotic arms like [2][3][4], or cartesian robots like [5,6], it is important for further research that alternative manipulators for robotic harvesting should be investigated to guarantee the profitability of the harvesting platform.
Agronomy 2021, 11, 1922 2 of 12 However, this paper focusses on the gripping tool, and its challenges, itself. The results considering the gripping tools used in existing research were analyzed as a starting point for this study.
Already in 2004, Setiawan et al. [7] developed a gripper for apples based on an inflatable bellow principle, which first encloses the apple before it inflates to grip the fruit on multiple points. Using the same concept of total enclosure, Li et al. [8] and Wang et al. [9] both have recently developed an origami-inspired gripper for multi-purpose robotic soft gripping. Both Baeten et al. and the company Abundant Robotics used a custom-made suction cup to harvest apples [2,5]. Petterson et al. developed a Bernoulli gripper that can adapt to 3D objects [10]. In contrast to the suction cup principle, Onishi et al. [3] used a three-fingered gripper with yaws that enclose the fruit on multiple sides. This is comparable to the three-fingered gripper of Davidson et al.; however, the latter integrated an extra compliant mechanism to enclose the apple depending on its shape [11][12][13]. Although it was for sweet peppers instead of pome fruit, Eizicovits et al. [14] developed a soft gripper with four finray jaws, which has a compliant mechanism as well. The last gripper taken into account in this study is the two-jaw gripper with active force control based on built-in pressure sensors developed by Zhao et al. [15]. Other gripping methods for agricultural application were reviewed by Li P. et al. [16] and Blanes et al. [17]. More generic review papers on the topic of grippers provide a wider view of recent developments in this area [18][19][20].
In contrast to the previously mentioned grippers, which are all customized gripping designs, the robotic industry offers a wide spectrum of standardized grippers that can be used for several applications. This study involves a benchmark of nearly all relevant gripping principles, considering customized as well as standardized grippers for the application of robotic pome fruit manipulation. An overview of the tested grippers and a description of the testing protocol are further described in Section 2. Within the tests, all grippers were compared based on the following aspects: gripping success, the ability not to harm the fruits, and energy consumption. These results and the accompanying discussion can be found in Section 3. Finally, Section 4 concludes with a broad summary of the performed benchmark study and deliberates some tracks for future work.
Gripping Principles
According to Monkman et al. and Shintake et al., the benchmarked grippers can be divided into two categories as follows: impactive based grippers and astrictive pneumatic suction-based grippers [20,21]. The impactive based grippers generate a gripping force by either friction or form closure of the gripping elements (fingers) contacting the object. The astrictive based grippers generate a gripping force through the creation of a negative pressure field between the gripper and the object. Both categories can be divided further into subcategories, based on the design of the fingers and the method of vacuum generation. Figure 1 below provides an overview of the range of tested grippers. The dark blue boxes indicate the gripper category, the light blue boxes indicate the finger design or type of vacuum generation, and the grey boxes describe the basic components of the gripper.
Seven impactive based grippers were evaluated in this study, which can be divided into three groups. The first group consisted of three grippers with fingers, made of a soft compliant mechanism that adapted to the shape of the object through the actuation force. The FESTO and Piab based grippers were pneumatically powered using angular actuators and a vacuum ejector, respectively. On the contrary, the OnRobot gripper was electrically powered. The three previously mentioned grippers solely consisted of standard industrial parts, however, the Piab piSoft was a prototype sample at the moment of the tests. Differently, the four grippers of the next two groups were equipped with two custom-made cone shaped jaw designs with a soft and a rigid contact surface, respectively. The soft design was created by mounting two cone shaped silicone suction cups with a diameter of 50 mm on the fingers, while the rigid design has a similar 3D printed cone shape coated with a Agronomy 2021, 11,1922 3 of 12 plastic layer, creating a smoother and more clutching surface. Both cone shaped finger designs were mounted on a pneumatic (FESTO), as well as an electric (Robotiq) actuator. Seven impactive based grippers were evaluated in this study, which can be divided into three groups. The first group consisted of three grippers with fingers, made of a soft compliant mechanism that adapted to the shape of the object through the actuation force. The FESTO and Piab based grippers were pneumatically powered using angular actuators and a vacuum ejector, respectively. On the contrary, the OnRobot gripper was electrically powered. The three previously mentioned grippers solely consisted of standard industrial parts, however, the Piab piSoft was a prototype sample at the moment of the tests. Differently, the four grippers of the next two groups were equipped with two custom-made cone shaped jaw designs with a soft and a rigid contact surface, respectively. The soft design was created by mounting two cone shaped silicone suction cups with a diameter of 50 mm on the fingers, while the rigid design has a similar 3D printed cone shape coated with a plastic layer, creating a smoother and more clutching surface. Both cone shaped finger designs were mounted on a pneumatic (FESTO), as well as an electric (Robotiq) actuator.
Besides the seven impactive based grippers, three astrictive suction-based grippers were evaluated as well. The first group of astrictive based grippers was actuated by a pneumatic two stage ejector, equipped with food grade bellow shaped suction cups. The second group is actuated by an electric blower motor using a further development of the gripper described in Baeten et al. [2] as a large custom suction cup. This suction cup consisted of a 3D-printed mount and a thick EPDM foam, with a 100 mm diameter and 60 mm height, with a conical cut out in the center.
While the grippers using the FESTO DHRS-40-A actuator or Piab VGS3010 were powered by pressurized air, the custom foam gripper's blower was powered by 3-phase power. All other grippers were powered on 24V DC power.
Grasping and Damage Assessment-Testing Protocol
The primary required characteristics of the above-mentioned grippers are the ability to grasp the fruits successfully, without causing any damage to the fruits. These parameters will be evaluated through the execution of a simplified harvesting operation in a controlled environment. The used testing setup is shown in Figure 2A.
The setup consisted of an industrial robot placed next to an aluminum construction at which fruits were placed by their stem using a friction fit clamp, representing the fruits attached to the branch of a tree. The robot, more specifically a KUKA iiwa 14 R820, was equipped with the tested grippers. Besides the seven impactive based grippers, three astrictive suction-based grippers were evaluated as well. The first group of astrictive based grippers was actuated by a pneumatic two stage ejector, equipped with food grade bellow shaped suction cups. The second group is actuated by an electric blower motor using a further development of the gripper described in Baeten et al. [2] as a large custom suction cup. This suction cup consisted of a 3D-printed mount and a thick EPDM foam, with a 100 mm diameter and 60 mm height, with a conical cut out in the center.
While the grippers using the FESTO DHRS-40-A actuator or Piab VGS3010 were powered by pressurized air, the custom foam gripper's blower was powered by 3-phase power. All other grippers were powered on 24V DC power.
Grasping and Damage Assessment-Testing Protocol
The primary required characteristics of the above-mentioned grippers are the ability to grasp the fruits successfully, without causing any damage to the fruits. These parameters will be evaluated through the execution of a simplified harvesting operation in a controlled environment. The used testing setup is shown in Figure 2A.
The setup consisted of an industrial robot placed next to an aluminum construction at which fruits were placed by their stem using a friction fit clamp, representing the fruits attached to the branch of a tree. The robot, more specifically a KUKA iiwa 14 R820, was equipped with the tested grippers.
The testing sequence was executed as follows: (I) The fruits were manually hung from the aluminum construction on a fixed location. (II) The robot moved into a manually defined fixed picking position diagonally below the fruit, closed the gripper, and performed a slow vertical downward motion to harvest the fruit. (III) If this picking operation was executed successfully, the robot performed an acceleration test, which was defined by the movement in both directions along a path consisting of a horizontal movement of 620 mm followed by a downward vertical movement of 370 mm. The transition between these two segments was rounded off with a radius of 130 mm. The velocity and acceleration of the end effector along the path are displayed in the graph below ( Figure 2B). The execution of a testing sequence can also be viewed in the following video https://www.youtube.com/ watch?v=X5UT_sGr8x0&ab_channel=ACRO-KULeuven (accessed on 15 September 2021).
For the simplified picking setup, a straight pull was enough to pick the fruits. Although it was on plums, this straight-in straight-out picking technique was tested by Brown and Sukkarieh [4], reporting no extra damage to fruits. However, for pome fruit, a straight pull would mostly damage the stem, whereby the storage life of the fruit would not be guaranteed. Therefore, in realistic scenarios, a more complex picking sequence could be introduced to imitate manual picking. This sequence included an angular movement before pulling the fruit of the branch. This sequence wound not cause any extra damage to the fruit, or the stem, as proven by both Baeten et al. [2], and Brown and Sukkarieh [4], who both used this picking sequence in their robotic harvesting tests. 620 mm followed by a downward vertical movement of 370 mm. The transition between these two segments was rounded off with a radius of 130 mm. The velocity and acceleration of the end effector along the path are displayed in the graph below ( Figure 2B). The execution of a testing sequence can also be viewed in the following video https://www.youtube.com/watch?v=X5UT_sGr8x0&ab_channel=ACRO-KULeuven (accessed on 24 September 2021).
For the simplified picking setup, a straight pull was enough to pick the fruits. Although it was on plums, this straight-in straight-out picking technique was tested by Brown and Sukkarieh [4], reporting no extra damage to fruits. However, for pome fruit, a straight pull would mostly damage the stem, whereby the storage life of the fruit would not be guaranteed. Therefore, in realistic scenarios, a more complex picking sequence could be introduced to imitate manual picking. This sequence included an angular movement before pulling the fruit of the branch. This sequence wound not cause any extra damage to the fruit, or the stem, as proven by both Baeten et al. [2], and Brown and Sukkarieh [4], who both used this picking sequence in their robotic harvesting tests. This testing sequence was performed on each gripper for both apples (Cultivar: Golden Delicious) and pears (Cultivar: Conference) on three gripping force settings and each of these tests was repeated thrice for validation ( Figure 3). This resulted in nine tested apples and nine tested pears per gripper. The three gripper force settings were set at the lower range, midrange and maximum range of the manipulation tools force range and are described for each tool in Figure 3. The repetitions of each testing scenario had to be reduced to three tests due to the combination of a large range of testing scenarios, the absence of sufficient testing subjects with consistent characteristics, and the limited duration over which the tests could be executed. This time span needed to be minimized to limit the variation in the condition of the fruit, which could change rapidly due to the fruit being untreated. Despite the reduced number of repetitions, which limits the statistical relevance, the testing results still provide relevant insights in the capabilities of the manipulation tools as clearly defined patterns further described in Section 3. A larger testing This testing sequence was performed on each gripper for both apples (Cultivar: Golden Delicious) and pears (Cultivar: Conference) on three gripping force settings and each of these tests was repeated thrice for validation ( Figure 3). This resulted in nine tested apples and nine tested pears per gripper. The three gripper force settings were set at the lower range, midrange and maximum range of the manipulation tools force range and are described for each tool in Figure 3. The repetitions of each testing scenario had to be reduced to three tests due to the combination of a large range of testing scenarios, the absence of sufficient testing subjects with consistent characteristics, and the limited duration over which the tests could be executed. This time span needed to be minimized to limit the variation in the condition of the fruit, which could change rapidly due to the fruit being untreated. Despite the reduced number of repetitions, which limits the statistical relevance, the testing results still provide relevant insights in the capabilities of the manipulation tools as clearly defined patterns further described in Section 3. A larger testing batch for the best performing tools will be a subsequent step in future work. The actuation speed of the grippers was minimized, reducing the impact force of the closing gripper. Thus, the damage caused to the fruits could be evaluated as solely influenced by the grippers' design and the gripping force. The fixed gripping position in combination with the size and shape variation of the fruit caused some planned deviation from the ideal gripping position. This deviation gave useful insights regarding the grippers' abilities to cope with similar realistic scenarios, such as moving branches or detection errors.
The gripping success was determined based on the grippers ability to successfully grasp, harvest, and hold on to the fruit during the testing sequence. The damage caused to the fruits was examined on two discrete moments in time. Initially the presence of externally visible damage was evaluated immediately after the above-mentioned testing sequence. Next, the harvested apples and pears were stored for respectively six and four days at room temperature. This storage time made smaller defects and internal damage more easily perceivable by the human eye through discoloration or local decay.
Subsequently, the presence of damage was evaluated a second time, examining the fruits outside as well as inside through cross-sections. On advice of the Flemish institution for fruit cultivation Research Center (pcfruit) npo, no further classification regarding the magnitude and type of damage was made in this evaluation. Due to the high variety of these parameters and the highly variable threshold of the allowable damage, depending on the usage of the fruits, this would only have resulted in a subjective classification.
Agronomy 2021, 11, 1922 5 of 12 batch for the best performing tools will be a subsequent step in future work. The actuation speed of the grippers was minimized, reducing the impact force of the closing gripper. Thus, the damage caused to the fruits could be evaluated as solely influenced by the grippers' design and the gripping force. The fixed gripping position in combination with the size and shape variation of the fruit caused some planned deviation from the ideal gripping position. This deviation gave useful insights regarding the grippers' abilities to cope with similar realistic scenarios, such as moving branches or detection errors. Figure 3. Synthesis of the test results: left are the results for apples, and right for pears. For each gripper tested, the three force settings, defined between brackets, are displayed in three separate columns.
The gripping success was determined based on the grippers ability to successfully grasp, harvest, and hold on to the fruit during the testing sequence. The damage caused to the fruits was examined on two discrete moments in time. Initially the presence of externally visible damage was evaluated immediately after the above-mentioned testing sequence. Next, the harvested apples and pears were stored for respectively six and four days at room temperature. This storage time made smaller defects and internal damage more easily perceivable by the human eye through discoloration or local decay. Subsequently, the presence of damage was evaluated a second time, examining the fruits outside as well as inside through cross-sections. On advice of the Flemish institution for fruit cultivation Research Center (pcfruit) npo, no further classification regarding the magnitude and type of damage was made in this evaluation. Due to the high variety of these parameters and the highly variable threshold of the allowable damage, depending on the usage of the fruits, this would only have resulted in a subjective classification.
Energy Consumption
Besides the characteristics regarding the grasp quality, the power usage of each gripper was also determined for the following simulated harvesting scenario. The grippers would perform a picking operation where the movement between the pick and drop location would last 5 s in each direction. Between these movements the gripper executed a pick and release action of which the time was determined for each of the grippers individually. These picking and release actions were performed on the settings as described in the Section 2.1.
The grippers benchmarked in the study can be divided into two different categories when determining their energy consumption based on the source of the energy. The first category is powered by electric energy, such as in the electrical motors of the OnRobot and Robotiq grippers or the blower motor of the custom foam gripper. The grippers of the second category, including the Festo actuator or the Piab ejector, are pneumatic powered.
The power consumption of the first category was measured using a DC current probe in combination with a digital oscilloscope in the case of the Robotiq and OnRobot grippers, and with a 3-phase power analyzer (Chauvin Arnoux C.A 8335) in the case of the custom foam grippers' blower motor. The power consumption of the second category was calculated with Equation (1), using the air consumption stated on the datasheets. Only the Agronomy 2021, 11,1922 6 of 12 power used by the actuators or blower motor was taken into account, in order to exclude all inefficiencies due to transformer losses, switching power supplies, or pressurized air generation and air distribution, since their presence depends on the construction and design of the whole automated setup.
where P a is the atmospheric pressure; P u the upstream pressure; and Q, the flow rate in the standard state. This equation thus converts the upstream pressure and flow rate into energy(W).
Grasping and Damage Assessment
The summarized results of the grasp and damage assessment are displayed for each fruit type in three graphs on Figure 3. The first graph displays the gripping success for each of the grippers and this for their three respective force settings. The other two graphs display the damage assessment results on the two discrete evaluation moments. In addition to these summarizing graphs, some remarkson observations during testing and damage assessment, are given specifically for each gripper.
With regards to the damage assessment the following notes should be made. The state of the tested apples was comparable to the state during the harvest, whereby the damage results of the apples were representative to a realistic situation. On the contrary, the pears were purposely very ripe, increasing their susceptibility to being damaged and magnifying the minor differences between the tested grippers. Consequently, these results should rather be used for a relative comparison between the grippers and not absolute for the harvesting of pears. Failed gripping attempts could also render the fruits unfit for damage assessment causing a large set of excluded pears.
FESTO DHRS-40-A + DHAS-80 Finray
The finrays on the pneumatic actuator performed well both on gripping success and damage assessment when picking apples. In the case of gripping pears, a similar result was observed regarding the gripping success. Immediate damage was only present on the highest force setting. After the storage time of four days, extra damage on all force settings was detected, caused by the textured gripping surface of the finrays, which was imprinted on the pears ( Figure 4A). An additional trial was made using the smooth side of the finray fingers. A similar number of injuries were noticed during this trial. This could be induced by a combination of two phenomenon. The first could be the less spherical shape of the picking position on the pears resulting in an uneven distribution of force between the finray fingers. The second could be that the finrays did not completely deform to the shape of the fruit due to the lower gripping force setting, distributing the gripping force over a smaller area. Furthermore, this also resulted in a grip that was more based on force rather than the intended form fit, which is considered less secure. At higher force settings the finrays would properly adapt to the fruits ( Figure 5A), however, at this setting the sheer magnitude of the force became the damaging factor. An additional trial was made using the smooth side of the finray fingers. A similar number of injuries were noticed during this trial. This could be induced by a combination of two phenomenon. The first could be the less spherical shape of the picking position on the pears resulting in an uneven distribution of force between the finray fingers. The second could be that the finrays did not completely deform to the shape of the fruit due to the lower gripping force setting, distributing the gripping force over a smaller area. Furthermore, this also resulted in a grip that was more based on force rather than the intended form fit, which is considered less secure. At higher force settings the finrays would properly adapt to the fruits ( Figure 5A), however, at this setting the sheer magnitude of the force became the damaging factor.
OnRobot Sg-b-H
The OnRobot soft gripper achieved satisfactory results handling the apples. However, for the larger specimens, the gripper's soft mechanisms were observed to be too shallow, turning the intended form fit grip into a less secure friction fit. In contrast to apples, a range of issues were observed with the pears. The OnRobot soft gripper is not inherently force but position controlled. The force is generated by the deformation of the compliant soft mechanism. This causes the actual gripping force to be equally influenced by the setting as well as the size object itself, resulting in a range of bad gripping attempts on the smaller pears at the lower force settings. Hence, these bad results could partially be assigned to user errors.
OnRobot Sg-b-H
The OnRobot soft gripper achieved satisfactory results handling the apples. However, for the larger specimens, the gripper's soft mechanisms were observed to be too shallow, turning the intended form fit grip into a less secure friction fit. In contrast to apples, a range of issues were observed with the pears. The OnRobot soft gripper is not inherently force but position controlled. The force is generated by the deformation of the compliant soft mechanism. This causes the actual gripping force to be equally influenced by the setting as well as the size object itself, resulting in a range of bad gripping attempts on the smaller pears at the lower force settings. Hence, these bad results could partially be assigned to user errors.
When gripping any other object, with non-ideal dimensions for the gripper's design, the gripper's fingers often curled up putting all their clamping force on a small area as shown in Figure 4B. Additionally, the non-spherical shape in the gripping location of the pears caused an uneven force distribution between the gripper's fingers, which even enlarged the previous point contact issue. Better results could be achieved if the fruit could be approached from directly below, however, this gripping pose is in reality unachievable for both harvesting and sorting operations.
For creating an ideal grasp with this gripper, a simultaneous movement towards the object, while closing the gripper, could be executed to compensate the movement of the soft mechanism. Additional tests indicated this could partially compensate the previously mentioned bad grasps caused by the gripper's shallowness.
Piab piSoft 100-4
The tests with the Piab piSoft gripper resulted in decent gripping success for both apples and pears. However, the ribbed design of the gripper tips ( Figure 4C) was observed to be detrimental for handling both pome fruits, creating small areas of high contact force and thus causing damage. This effect was especially noticeable on the more sensitive pears.
Similar to the symmetrical four-fingered design of the tested OnRobot gripper, this design was observed to be disadvantageous when gripping the nonspherical shape of the pear from diagonally below, causing extra injuries due to non-ideal gripping point placement and force distribution.
Custom Soft Cups (FESTO DHRS-40-A and Robotiq 2F-85)
This grippers' design performed well on both the gripping success and the immediate damage assessment through good adaptation to the fruit's shape ( Figure 5B). Despite the soft contact interface of the suction cups, injuries due to a too high contact force were observed for the pears after the four-day storage at room temperature. The implementation of a more deformable suction cup could reduce the damage these fingers cause to the more sensitive fruit, by better distributing the contact force.
The gripper's design could also facilitate a combined gripping principle by turning on the suction cups for extra gripping force. This feature was not implemented as the impactive grip was already adequate and the extra upgrade would only create extra energy consumption and an extra source of potential damage (suction spots).
Custom Rigid Cups (FESTO DHRS-40-A and Robotiq 2F-85)
The gripping success of the apples was comparable to the above mentioned custom soft cups alternative. In contrast, none of the pears were gripped successfully, because the grippers could not close far enough to completely fixate the pears. This was caused by heat deformation of the cups during the application of the soft substrate. As the cups were rigid, the lacking ability to adapt in case of a non-ideal gripping position resulted in an increased number of damaged fruits compared to the custom soft cups.
Piab BBL60 and Piab BX35
Since both industrial suction cups have a similar design, their gripping success was also similar. When approaching for the grip, these suction cups need to be pushed against the fruit to deform themselves and create a successful seal, since, due to the rather low suction rate, no large gaps can be bridged to attract the fruits. However, the force needed for this deformation was in the majority of cases too high, resulting in no more than pushing away the fruit without completely sealing the vacuum. When this occurred during testing an intervention was made by applying a manual counter force preventing the fruits from being pushed away. These scenarios are displayed in Figure 3 with the specific designation "assistance needed". Therefore, these grippers are not advised to be used for harvesting operations but rather for sorting or packaging activities. In these applications their ability to grip fruits surrounded by other fruits or other machine elements close to them becomes an advantage. A slight edge could be given to the BX35 cup that has a better chance of finding a good gripping spot due to its smaller size and thinner and more flexible sealing lip, adapting better to the fruits' irregularities ( Figure 5C). This advantage however comes at the cost of lower stability during movement and a higher chance of a collapsing suction lip during gripping.
Both suction cups created suction spots on the pears that became especially observable after the storage time. Additionally, the BBL60's connector design could leave an impression on both fruit types when they were sucked up against it. Modifications made to the suction cups connector, shown in Figure 4D slightly alleviated this. Similar impressions were not observed with the BX35 since the fruits are only sucked into the soft bellow and will never make contact with the suction cups connector.
Custom Foam Suction Cup
The custom foam gripper was able to nearly grip all apples and pears, and of these successful grips, no fruits were damaged. Unsuccessful grips only occurred on the lower power setting where the suction force was not high enough the suck in the fruits and deform the foam to ensure a good seal ( Figure 5D). Still, the higher suction rate gave this gripper the ability to overcome larger gaps between the fruits and seal thus eliminating the Agronomy 2021, 11, 1922 9 of 12 assistance the other astrictive based designs needed. Moreover, no suction spots could be observed due to the lower vacuum level.
Electrical vs. Pneumatic Force Control
When comparing the pneumatic FESTO actuator to the electrical Robotiq actuator, both equipped with the same fingers (the custom soft cups or the rigid cups), their gripping force setting should not be compared directly, since the force ranges of both grippers do not match. Adjusting for this mismatch, a slight advantage could be given to the electric Robotiq, based on the lower number of damaged fruits.
The scaling of this advantage with regards to the gripping speed will depend on the used gripper fingers. The gripping force measurement curves of the gripper's datasheet [22] (p. 60) indicate that the lower forces will not be sustained at the higher speed settings when gripping a rigid object with rigid gripper fingers. This is probably caused by the absence of reaction time for the force control system when two rigid objects contact each other. The addition of the soft deformable custom suction cup fingers will result in a better control over lower gripping forces on higher speeds at the cost of an overall lower achievable gripping [22] (pp. 61-62). This could be improved upon through the use of even softer contact elements ref PU graph. In contrast, the clamping force of the pneumatic actuators is mostly dependent on the supplied air pressure. All achievable speeds with this supplied air pressure will theoretically result in the same gripping force. Aside from the effect of the grasping speed on the controllability of the grasping force, the impact force will increase for both drive mechanisms. As indicated in Section 2, this impact force has been excluded in this study.
The gripping force of the pneumatic actuators might be less influenced by the actuation speed but depends in the case of the radial actuator design of the FESTO DHRS on the size of the gripped object. In the current finger design this results in a higher clamping force the smaller the gripped fruit is. This effect could be reversed through the adaption of the gripper's fingers so the working area of the actuator is between 50 • and fully open. The effect could also be eliminated using a parallel actuator at the cost of a substantial reduction in grippable object sizes. The electrical actuation of the similar radial base actuator of the tested Robotiq gripper was not affected by the object's size. Table 1 shows the results of the power consumption measurements for each of the actuators. The vacuum blower used for the custom foam cup was observed to be the main outlier having an energy consumption of a factor 10 to 350 larger than the other actuators. As expected, the large suction rate of the vacuum blower that resulted in good gripping results, forms also its main disadvantage when determining energy consumption. Aside from the high amount of energy needed to generate the required high flow rate, the need for the blower to be continuously active, due to its slow ramp up and down rate, also contributed to this high energy demand. The second highest energy consuming actuator was the Piab pneumatic ejector. Similar to the blower, the ejectors need to run continuously during the gripping process, which consumes a large volume of compressed air. The OnRobot gripper used substantially less power than the previous actuators. However, its consumption was notably higher than the other electric actuator, both using more power during the actual actuation, and a higher than idle power to hold the object during transport. The last two actuators FESTO DHRS-40-A and Robotiq 2F-85 consumed nearly the same amount of power. The FESTO actuator only consuming compressed air during the actual actuation of the fingers. The Robotiq mainly used power during actuation, supplemented by a small idle consumption of the electronics.
Power Consumption Results
Since the energy consumption was determined for a simulated scenario, the ratio between the grippers might shift due to a related shift in the ratio of gripping, transport, and idle time. An increase in transport or idle time would benefit the pneumatic FESTO DHRS-40-A actuator and in a lesser degree the Robotiq 2F-85.
These results need to be evaluated in perspective to the rest of the automation setup. The low power consumption of the DC powered electrical actuators will be beneficial for the use on smaller battery powered autonomous platforms. Additionally, no other energy conversions would be required in these setups. Otherwise, for off-orchard processing processes or larger fossil fuel powered autonomous platforms, the availability of power becomes a minor issue, and the extra cost could be justified by an increase in the performance of the gripping principal as for the custom foam gripper.
Conclusions
Based on the performed benchmark it can be concluded that all gripping principles are to a certain degree usable for handling apples. The damage caused to apples, however, clearly indicated the beneficial effect of a soft and smooth gripper interface. Unlike the impactive grippers, the industrial ejector driven suction cups proved to be less suited for harvesting operations, but rather for other handling operations, such as sorting or packaging. In this case, these grippers have a substantial advantage over all impactive based gripping principles, due to their ability to pick fruits surrounded by other fruits close to them, on for example a conveyor.
In contrast to apples, the tests with the ripe and oversensitive pears provided better insights in the relative differences in performance between the grippers. All three commercial impactive soft grippers were able to successfully grab the pears, when set correctly. A slight disadvantage could be given to the two symmetrical four-fingered designs, the OnRobot and the Piab piSoft, which were not able to adapt properly to the non-spherical shape of the pears in the current gripping pose. Moreover, despite their soft design, none of the industrial soft grippers were able to handle the pears without damaging them. However, minor adaptions to these grippers could make them viable for the handling of the more fragile fruits. By making the substrates of the compliant mechanisms softer, and by removing textures and other rough features, the presence of force concentrations could be reduced. Neither of the rigid custom cups were able to successfully grip any pears without damaging them. The custom soft cup fingers and industrial suction cups performed better to some degree; however, they still caused many injuries due to force concentrations or high vacuum level respectively.
The soft EPDM foam interface combined with the low vacuum level of the custom foam suction cups proved to be the best approach of the tested grippers to handle the more delicate pears. This success comes with a large drawback, a substantially higher energy consumption of the blower motor. The higher energy consumption turned out to be a disadvantage for the astrictive gripping principals in general, being on average a factor 75 higher than their impactive counterparts, the exception being the Piab piSoft. Within the impactive grippers, the results did not indicate a substantial difference in gripping success or quality between the more conventional pneumatic actuators (FESTO) and similar but newer electrical actuators (Robotiq) on the tested parameters.
Second to the grippers evaluated ability to complete the needed manipulations for a certain task without causing damage, the differentiation parameters currently are the available energy supply and control structure of the automation setup.
Not all applicable gripping principles were able to be tested in the current benchmark. Future testing might include grippers such as origami grippers [8] that are still in research or readily commercially available Fluidic elastomeric actuators (FEA). Secondly more extensive testing should be executed on the most promising manipulation tools providing better statistic relevance with regards to, among others, variations in the testing subjects. The influence of additional gripper parameters such as variation in gripping speed on the scaling of the current results might also be evaluated.
Author Contributions: Conceptualization, G.S. and R.V.; methodology, G.S. and R.V.; software, G.S.; validation, G.S. and R.V.; investigation, G.S. and R.V.; resources, R.V.; data curation, G.S. and R.V.; writing-original draft preparation, G.S. and R.V.; writing-review and editing, G.S., R.V. and K.K.; visualization, G.S.; supervision, K.K. and E.D.; project administration, K.K. and E.D.; funding acquisition, K.K. and E.D. All authors have read and agreed to the published version of the manuscript. Data Availability Statement: All data obtained in this research are mainly reported in this paper. For more comprehensive questions, please contact<EMAIL_ADDRESS>or rafael.verbiest@kuleuven.be. | 9,381.4 | 2021-09-25T00:00:00.000 | [
"Computer Science"
] |
Metal assisted photochemical etching of 4H silicon carbide
Metal assisted photochemical etching (MAPCE) of 4H–silicon carbide (SiC) in Na2S2O8/HF and H2O2/HF aqueous solutions is investigated with platinum as metallic cathode. The formation process of the resulting porous layer is studied with respect to etching time, concentration and type of oxidizing agent. From the experiments it is concluded that the porous layer formation is due to electron hole pairs generated in the semiconductor, which stem from UV light irradiation. The generated holes are consumed during the oxidation of 4H–SiC and the formed oxide is dissolved by HF. To maintain charge balance, the oxidizing agent has to take up electrons at the Pt/etching solution interface. Total dissolution of the porous layers is achieved when the oxidizing agent concentration decreases during MAPCE. In combination with standard photolithography, the definition of porous regions is possible. Furthermore chemical micromachining of 4 H–SiC at room temperature is possible.
Introduction
Nowadays there exists a variety of methods to produce porous silicon from single crystalline wafers. The most common methods are electrochemical etching [1], stain etching [2] and metal assisted etching [3,4]. A large variety of possible application scenarios for such porous layers is reported in literature. Porous silicon can be used as sacrificial layer in the fabrication of MEMS devices. There it is either chemically removed [5,6] or the porous structure reorganizes under inert gas atmosphere at high temperatures [7]. Another application is in the field of biosensors where porous layers with a homogenous porosity depth profile are needed [8]. Contrary to this application a smoothly increasing refractive index of the porous layer is recommended for the fabrication of efficient antireflective layers [9]. Porous silicon offers a high surface to volume ratio, thus being most beneficially used in super capacitors [10], chemical sensors [11] or explosive devices [12,13].
There are, however, certain limitations for porous silicon. In harsh environments, at high temperatures and in alkaline electrolyte solutions, porous silicon cannot be used. It degrades chemically (i.e. due to oxidation) or even dissolves. To exploit its full benefit under such conditions, the surface has to be covered with a stable and dense protective layer [14].
Porous silicon carbide is regarded as promising alternative due to its higher temperature stability and enhanced chemical inertness.
For successful implementation of porous silicon carbide within MEMS devices, the fundamentals of pore formation S Supplementary material for this article is available online (Some figures may appear in colour only in the online journal) Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI. need to be understood. Moreover the capability to monitor the etching reaction by electrical probing is desirable.
In this study metal assisted photochemical etching (MAPCE) is utilized for defined porosification of 4H-SiC. In this approach, a thin film of noble metal is deposited on the SiC surface. Next, the sample is placed into an etching solution, containing hydrofluoric acid and an oxidation agent such as Na 2 S 2 O 8 or H 2 O 2 . The noble metal acts as cathode while the surface exposed to the etchant is the corresponding anode. The oxidant is reduced at the cathode and SiC is oxidized at the anode. The generated oxide is dissolved with HF [15].
Recent experiments showed that a decreased contact resistance at the Pt/4H-SiC junction enhances the electron flow during etching. This result allowed successful porosification of 4H-SiC substrates having different bulk resistivities [16].
Despite the results reported in literature, the knowledge about MAPCE with respect to 4H-SiC is still very poor. Even in the laboratory scale it has not yet been established as a reliable method to produce porous SiC.
The objective of this paper is to introduce MAPCE as a reliable and simple technique for the generation of uniform and highly porous 4H-SiC layers. In addition, this study also aims to contribute to the fundamental understanding of MAPCE when applied to this wide bandgap semiconductor.
Experimental details
For all experiments 1 × 1 cm 2 n-type, nitrogen doped 4H-SiC samples with a resistivity of (0.012-0.025) Ω · cm were used. The process flow chart for the performed experiments is illustrated in figure 1. Initially the samples were cleaned consecutively in acetone, isopropanol and ethanol for 5 min. Afterwards an inverse sputter cleaning procedure was carried out (step i). This cleaning step lasted for 240 s at a plasma power of 200 W, applying a 'Von Ardenne' sputter equipment (LS730S). All sputter deposition experiments described in this study were performed with this equipment. Next 300 nm of Pt were sputtered on the samples (step ii). Prior investigations showed the importance of the Pt/4H-SiC junction properties [16]. To enable reliable etching, the contact resistance at this interface has to be decreased. Therefore the samples were annealed in a Nabertherm L9/11/SKM oven under Argon flow of 40 l h −1 starting at 800 °C with a subsequent 30 min ramp up to 1100 °C. This peak temperature was held for 5 min.
The annealing process lead to surface roughening of 4H-SiC at Pt-covered areas. Therefore (step ii) was not performed across the whole sample surface, only a strip near the sample edge region was used, covering about 10% of the sample's surface.
Next, a second layer of 100 nm Pt was sputter deposited and patterned using lift-off, providing Pt on the areas with a smooth surface characteristic (step iii).
The samples were immersed in an etching solution containing an oxidizing agent (Na 2 S 2 O 8 or H 2 O 2 ) and HF (step iv). The HF concentration in all performed experiments was 1.31 mol l −1 , while the concentrations of the oxidants varied. The necessary UV irradiation was provided by a custom built UV source utilizing an 18 W UV-C low pressure mercury vapor lamp (peak intensity at 254 nm). The volume of the etching solution was 1.2 ml and the distance between the sample and the light source was 0.7 cm. During all experiments the C-face of the samples was porosified.
After etching all samples were examined with a Hitachi SU8030 scanning electron microscope using acceleration voltages ranging between 2 and 5 kV. The redox potentials were recorded using a Pt wire that touched the sputter deposited Pt on the samples. This served as sensing electrode. As reference a mercury/mercurous-sulfate electrode filled with saturated K 2 SO 4 was used. The redox potentials were measured under room light conditions.
FT-IR measurements were performed in attenuated total reflection (ATR) mode. Therefore a Bruker Tensor FT-IR spectrometer was used.
TOF SIMS depth profiles were acquired using a TOF-SIMS 5 (IonTof GmbH, Münster, Germany). A 25 keV Bi + primary ion beam was used for analysis (high current bunched mode [17]). For sputtering a 2 keV Cs + beam was used. The analysis area was set to 50 × 50 µm with a pixel resolution of 128 × 128 and the crater size to 400 × 400 µm. The measurements were carried out in negative ion mode using the interlaced mode with a cycle time of 100 µs. Low energy electron flooding (21 V) was applied.
Preface
Current theory states that with increasing porosity the porous structure is mostly covered by the semiconductor-electrolyte space charge layer [18,19]. With ongoing etching time most of the oxidant should be reduced. Thus the redox potential as an indicator for the Fermi level of the etching solution should decrease [20]. To achieve a fast decay of the oxidant, small volumes (1.2 ml) of etching solution were used. Pore formation should stop when all of the oxidizing agent is reduced. Furthermore it was assumed, that the resulting porous layer should have a homogenous porosity depth profile because the space charge layer limits the final degree of porosity. This would allow to control the degree of porosity across the porous layer, while the progress of the etching reaction could be recorded with redox potential measurements. The problem of heat generation would be minimized with a low pressure mercury lamp that emits no infrared radiation. Moreover low etchant volumes are of economic interest, especially when highly toxic etching solutions are used. The experiments were carried out with either H 2 O 2 or Na 2 S 2 O 8 to investigate the influence of the oxidizing agent type on MAPCE. To the best of the author's knowledge, H 2 O 2 has not yet been successfully used as oxidizing agent during MAPCE of SiC.
Evolution of the porous layer
In this section the evolution of the porous layer during etching is presented with varying oxidizing agent types (H 2 O 2 or Na 2 S 2 O 8 ) and concentrations. Porous layers were formed at 25 µm and 50 µm square areas defined by photolithography.
The porosified area of each sample was 0.11 cm 2 . The etching times ranged between 15 and 150 min and the C-face was exposed to the etching solution.
First the experimental results with H 2 O 2 containing solutions are presented, then those with Na 2 S 2 O 8 . Finally, all results are evaluated and discussed. Figure 2 shows the evolution of the etching depth and the redox potential with respect to etching time. After 60 min the total etching depth, as well as the redox potential becomes relatively constant. This is attributed to the decay of H 2 O 2 due to photolysis (equation (1)) [21] in the etching solution with time.
Experiments with H 2 O 2 .
When all the H 2 O 2 is decomposed, no further etching into depth is observed. This behavior is illustrated in figure 3. At first the total etching depth increases until it becomes relatively constant. This behavior is accompanied by the formation of a defined etching front. When the total etching depth does not increase anymore, total dissolution of porous 4H-SiC takes place. This is an unexpected behavior and is discussed below in this section after the presentation of additional results (see section 3.3).
Next the influence of a higher H 2 O 2 concentration (i.e. 0.15 mol l −1 instead of 0.04 mol l −1 ) is presented. Qualitatively the same result was obtained as can be seen in figures 4 and 5.
The redox potential as well as the total etching depth become constant with ongoing etching time.
Beside these similarities there are also differences when the H 2 O 2 concentration is increased. At longer etching times a so-called line of breakage forms (see figure 5(d)). This means that total dissolution takes place at the bottom of the porous layer, instead at the top surface. So the upper porous layer is protected from further etching because total dissolution starts at its bottom. The same statements as for the samples etched with H 2 O 2 based etchants (see section 3.2.1) can be made. After a certain time all the oxidizing agent is consumed due to photolysis (equation (2)) [22] and the total etching depth does not increase anymore (see figure 6). During the experiment with 0.15 mol l −1 Na 2 S 2 O 8 also a decrease of the redox potential could be monitored. This indicates the decay of Na 2 S 2 O 8 in the etching solution. This is contrary to the samples etched with H 2 O 2 , where the redox potential increased. This phenom enon has been studied and the results are shown in the supplementary material S1 (stacks.iop.org/JPhysD/50/435301/mmedia).
Experiments with
After pore formation into depth had stopped, also total dissolution of the porous layer took place. However in no experiment with Na 2 S 2 O 8 , the formation of a line of breakage could be observed. An increase of the Na 2 S 2 O 8 concentration from 0.04 mol l −1 to 0.15 mol l −1 yielded an etching process still ongoing into depth after 150 min, so that total dissolution was not observed (see figures 8 and 9). This is attributed to the presence of a not reduced oxidizing agent in the etching solution. However, etching at the top surface of the porous layer stopped, as is shown by an analysis of the pore size distribution in this region. Figure 10 shows the modal pore area (i.e. the pore area that appears most often in the pore distribution) after different etching times for the experiments with 0.15 mol l −1 Na 2 S 2 O 8 (see figures 8 and 9). At first the average pore size increases almost linearly with time. The sharp increase represents individual pores that connect to larger clusters. From this characteristic point on, the pore size stays relatively constant, indicating a drastic drop of the etching rate. This demonstrates that etching of the porous layer stops at the highly porous regions at the top while pore formation into depth still proceeds. More details about the pore size evaluation can be found in the supplementary material S2.
Discussion
So far the experimental data allow several statements about MAPCE of 4H-SiC. No increased etching depth was observed near the deposited Pt. Furthermore, when UV light is focused onto the sample with a light-wave cable, the etching depth is lower in the vicinity of the sputter deposited Pt, than in regions being further away. So it can be concluded that the electron hole pair generation is done by the UV light irradiation. The UV light generated h + initiate the oxidation of SiC (see equation (3)). The formed oxide is then dissolved by HF (see equation (4)). [15,16] SiC + 4H 2 O + 8h + → SiO 2 + 8H + + CO 2 (3) This is in contrast to metal assisted etching of other semiconductors like silicon [3] or InP [23] where the oxidant creates h + in the semiconductor and etching is enhanced near the metallic cathode. Despite this difference, the role of the deposited Pt is to serve as anode; without Pt no etching could be observed. The oxidizing agent serves as electron consumer during etching and takes up electrons at the Pt/etching solution interface. In agreement with this assumption, an etching experiment with deposited Pt and just HF in the etching solution showed only surface roughening. Altogether the following statements can be made: the UV light irradiation generates electron hole pairs. When the holes are responsible for the oxidation of SiC, charge balance must be maintained for etching to proceed. This is done by the oxidizing agent at the Pt cathode, which is reduced and takes up electrons as is shown in equation (5) Basically, the etching depth is lower for experiments performed with higher Na 2 S 2 O 8 (compare figure 6 with figure 8) concentration. This is explained with a decreased UV light intensity that hits the sample, due to a higher oxidizing agent concentration. So the oxidizing agent concentration and the UV light intensity are the main factors competitively controlling the etching rate during MAPCE.
Prior to the experiments presented in this study, it was assumed, that with proceeding etching time the porosity in the layers should become more homogeneous and uniform; the space charge layer formed at the electrolyte semiconductor junction should limit the final degree of porosity [18]. This was confirmed with our experiments, as can be seen in figures 3, 5 and 7. Furthermore, an evaluation of the surface pore size showed that there is a passivating effect that protects the upper part of the porous layer from further etching. This passivating effect can also be attributed to the space charge layer at the electrolyte semiconductor junction. With increasing porosity the insulating effect of the space charge layer becomes more dominant and the etching rate slows down.
The experiments have also revealed that this passivating effect is only present under certain experimental conditions. At the later stages of etching when the oxidizing agent concentration is low, total dissolution occurs and the passivating effect is lost.
This indicates that the oxidizing agent concentration is the main factor that is responsible for total dissolution of porous SiC. When the oxidizing agent concentration decreases, the Fermi level of the etching solution increases [24]. As a consequence, the width of the space charge layer decreases in n-type SiC and thus the insulating effect is reduced. Another explanation is that with increased oxidation rate the amount of surface states is raised as well. Konstantinov et al [18] claimed that the surface states pin the Fermi level during anodic etching of SiC and control the width of the space charge layer. This theory is supported by the formation of a line of breakage when using H 2 O 2 . The total dissolution starts at the bottom of the porous layer (see figure 5). Such an underetching of the porous layer can only be explained by a chemical passivation of the surface of the porous material.
In summary, it can be concluded that MAPCE is controlled by two competing factors: the oxidizing agent concentration and the UV light intensity. The effect of total dissolution can be explained with a decreased width of the space charge layer in the porous SiC with increasing etching time.
Chemical composition of the porous layers
Since little knowledge about MAPCE of 4H-SiC is available, the chemical composition of the prepared porous layers was investigated. Furthermore in the previous section it was stated that surface states are responsible for the effect of total dissolution. A chemical analysis of the porous layers can give information about their surface termination. The etched area during the experiments presented in this section was 0.9 cm 2 . First ATR-FTIR measurements were performed to investigate which functional groups are generated during MAPCE. Then TOF-SIMS depth profiling was carried out to assign the observed peaks in the IR spectra. MAPCE was done in time intervals of 30 min. After each interval the old etching solution was replaced by a fresh one to prevent total dissolution. By doing so, the thickness of the porous layers could be increased and clearly identifiable peaks in the IR spectra were obtained. Experiments were performed with H 2 O 2 or Na 2 S 2 O 8 as oxidizing agent while the concentration was in both cases 0.15 mol l −1 .
Experiments with H 2 O 2
In figure 11(a), the IR spectra (after baseline subtraction) of samples are presented where MAPCE was performed with H 2 O 2 as oxidizing agent, while figures 11(b) and (c) allow enlarged views on the same spectra. The not etched 4H-SiC sample shows characteristic peaks at 810 cm −1 and 950 cm −1 which are assigned to the TO SiC phonons and LO SiC phonons [25]. The small peaks at 1972 cm −1 , 2030 cm −1 and 2157 cm −1 are assigned to multi phonon absorptions [26,27]. These features vanish with increasing etching time and new peaks appear, due to the MAPCE process. There is a peak at around 1020 cm −1 and a triple peak in the region of 2900 cm −1 . To assign the peaks to functional groups, a TOF-SIMS depth profile was recorded from a sample that had been etched for 180 min (see figure 12). F − as well as O − related signals were detected. Both characteristics tend to decrease with depth before a relative maximum occurs.
Finally the signal intensity almost vanishes for both F − and O − . The 30 Si − isotope signal is given for comparison reasons. There is no clear maximum in the 30 Si − characteristic. The observed features can be interpreted as follows: the F − as well as the O − signal decreases with sputter time because the surface-near regions were exposed to the etching solution for a longer time. The local maxima at approximately 50 min of sputter time are related to the active zone during etching. This is shown in figure 13. At the surface-near regions of the porous layer, the etching rate is low due to the passivating effect of the space charge layer. Below is an active zone where etching into depth takes place. Hence an increased amount of O − and F − is detected. Simultaneously the intensity of the 30 Si − signal decreases because of a gradually decreasing porosity within the active zone. This is in agreement with findings presented in the previous section; at the top of the porous layer etching stops while pore formation into depth still takes place (see figures 8 and 10). The Si − as well as the C − related ion signals were in the upper detector compliance limit (not shown in figure 12). Hence it can be reasoned that most of the porous layer is still SiC. With this elemental information, peak assignment of the spectra in figure 11 is done. The first peak at around 1020 cm −1 is assigned to Si-O-Si stretching vibrations [28]. This peak has a shoulder at 985 cm −1 which is assigned to SiF 2 stretching vibrations [29,30]. The triple peak centered at around 2900 cm −1 is due to sp 3 C-H 2 stretching (i.e. 2852 cm −1 symmetric and 2921 cm −1 asymmetric) and sp 3 C-H 3 asymmetric stretching (2956 cm −1 ) [28].
The findings support the stated mechanism of MAPCE described by equations (3) and (4), since Si-O-Si vibrations are identified in the IR spectra. According to the proposed mechanism, silicon carbide is first oxidized and then dissolved by HF. So far, this has been just a hypothesis, but not proven by experiments [15,16]. The results also show that there is fluoride present in the porous layer after etching. Based on this knowledge it is not possible to formalize a detailed mechanism of MAPCE, but it is possible that in addition to H 2 O, F − serves as nucleophilic species during oxidation and direct dissolution of SiC like in the case of Si takes place [31]. Such a behavior has been suggested by Lauermann et al for anodic etching of SiC [32]. Finally, the C-H x vibrations are interpreted as follows: Once the oxidation products are dissolved, the pure SiC surface remains, terminated with hydrogen atoms.
Furthermore, no indications for the presence of a carbon rich layer could be found, which is often mentioned in reports related to electrochemical etching of SiC [25,33]. Peaks between 1200 cm −1 and 1800 cm −1 in the IR spectrum are attributed to C-C vibrations [34], while other authors label them as N-H vibrations (nitrogen from the doping of SiC) [35]. Such peaks could not be observed during the presented experiments. This shows that the surface termination is different from the one obtained from pure electrochemical etching of SiC. It is also in agreement with the finding of Rittenhouse et al, that porous SiC obtained from MAPCE shows different photoluminescence than porous SiC obtained from pure electrochemical etching [36]. The photoluminescence is attributed to surface states, so a different surface termination is expected.
Experiments with Na 2 S 2 O 8
Next, the experimental results with Na 2 S 2 O 8 as oxidizing agent are presented. Figure 14 shows the corresponding IR-spectra while figure 15 presents the TOF-SIMS depth profile. The same elements were found during the TOF-SIMS analysis and the same functional groups were identified in the IR spectra. A major difference is the relative intensity of the C-H x peaks and the Si-O-Si peak compared to the experiments with H 2 O 2 . By comparison of figure 11 with figure 14 one can see that the intensity of the C-H x peaks relative to the Si-O-Si peak is higher for the samples which were etched with H 2 O 2 . This indicates that the C-H x functional groups at the surface are responsible for the passivating effect observed during MAPCE. The passivating effect is more pronounced with H 2 O 2 as oxidizing agent (see figure 5-formation of line of breakage). Since the C-H x intensity is increased as well when H 2 O 2 is used, it is reasoned that those functional groups contribute predominantly to the observed passivating effect. This means, in turn, that they are responsible for Fermi level pinning during etching.
Conclusions and outlook
In this study metal assisted photochemical etching (MAPCE) of single crystalline 4H-SiC was investigated. A low power UV source (18 W) and only moderate volumes of etchant (1.2 ml) were sufficient for reliable porous layer formation. This makes this approach interesting even for large scale fabrication. Also local definition of the porous layers as well as control over the degree of porosity with depth are possible. Therefore, the goal to develop a MAPCE method that is easy to implement and which allows the preparation of uniform porous SiC layers is accomplished. Furthermore, the effect of total dissolution presents a simple and cost effective approach for micromachining 4H-SiC substrates.
Beside these findings, also fundamental knowledge about MAPCE of 4H-SiC is gained. The observed effects can be broken down to the following key statements. When exposing SiC to etching solutions containing an oxidizing agent and HF, the surface is oxidized and the oxide is dissolved by HF. The necessary holes are generated by UV light irradiation. To maintain charge neutrality during etching, electrons are transferred from the SiC via the deposited Pt electrodes into the electrolyte. There they are consumed by an oxidizing agent such as H 2 O 2 or Na 2 S 2 O 8 . Since, in turn, the UV light irradiation on the sample surface depends on the oxidizing agent concentration and etching without an oxidizing agent is not possible, these two factors strongly determine the porous layer formation during MAPCE.
A chemical analysis of the generated porous layers showed that the surface of the porous layer is covered with functional groups such as Si-O-Si, Si-F 2 or C-H x . So oxide formation takes place during MAPCE and it is also possible that, as a side reaction, direct dissolution of SiC with F − occurs as it is the case for silicon. The experiments indicate that the C-H x groups are responsible for the passivating effect observed during MAPCE. Furthermore, there is no indication of a carbon rich layer on the surface of the MAPCE generated porous SiC, as it is often reported for electrochemical etching of SiC.
The presented findings show that the local porosification of 4H-SiC as well as controlling the degree of porosity is possible by utilizing MAPCE. Furthermore, the basic etching mechanism could be revealed. Hopefully this stimulates the realization of micro-or nanomachined device concepts based on porous silicon carbide in the near future. | 6,245.4 | 2017-09-27T00:00:00.000 | [
"Materials Science"
] |
Hypothalamic Expression of Neuropeptide Y (NPY) and Pro-OpioMelanoCortin (POMC) in Adult Male Mice Is Affected by Chronic Exposure to Endocrine Disruptors
In the arcuate nucleus, neuropeptide Y (NPY) neurons, increase food intake and decrease energy expenditure, and control the activity of pro-opiomelanocortin (POMC) neurons, that decrease food intake and increase energy expenditure. Both systems project to other hypothalamic nuclei such as the paraventricular and dorsomedial hypothalamic nuclei. Endocrine disrupting chemicals (EDCs) are environmental contaminants that alter the endocrine system causing adverse health effects in an intact organism or its progeny. We investigated the effects of long-term exposure to some EDCs on the hypothalamic NPY and POMC systems of adult male mice that had been previously demonstrated to be a target of some of these EDCs after short-term exposure. Animals were chronically fed for four months with a phytoestrogen-free diet containing two different concentrations of bisphenol A, diethylstilbestrol, tributyltin, or E2. At the end, brains were processed for NPY and POMC immunohistochemistry and quantitatively analyzed. In the arcuate and dorsomedial nuclei, both NPY and POMC immunoreactivity showed a statistically significant decrease. In the paraventricular nucleus, only the NPY system was affected, while the POMC system was not affected. Finally, in the VMH the NPY system was affected whereas no POMC immunoreactive material was observed. These results indicate that adult exposure to different EDCs may alter the hypothalamic circuits that control food intake and energy metabolism.
Introduction
Two neurochemically distinct sets of hypothalamic neurons controlling food intake are located in the arcuate nucleus (ARC). One group expresses neuropeptide Y (NPY) and agouti-related protein (AgRP). The NPY release by these neurons results in increased food intake and decreased energy expenditure. The other group expresses cocaine-and amphetamine-regulated transcript (CART) and pro-opiomelanocortin POMC, which is processed to melanocortin peptides, such as α-melanocyte-stimulating hormone (α-MSH). The activation of these neurons decreases food intake and increases energy expenditure [1] with an opposite effect of the NPY/AgRP system. Interactions between these two populations allow the NPY neurons to control the activity of the POMC cells. NPY/AgRP and POMC/CART neuronal projections reach hypothalamic nuclei such as the paraventricular nucleus (PVN), dorsomedial hypothalamic nucleus (DMH), and perifornical area [2]. These secondary centers process information regarding energy homeostasis.
Many factors can influence the activity of this system (for example the secretion of leptin by adipocytes), but estrogenic signaling may intersect at several levels with the hypothalamic circuits controlling food intake [3]. In fact, estradiol is involved in the regulation of metabolism through the modulation of food intake, body weight, glucose/insulin balance, body fat distribution, lipogenesis, lipolysis, and energy consumption [4]. The estradiol regulates neuroendocrine circuits controlling the metabolism [5] by acting on the POMC neurons through the estrogen receptor α (ERα) and on the NPY cells through an estrogen-activated membrane receptor, Gq-mER [6]. Indeed, estradiol has an inhibitor function on food intake, repressing the synthesis of NPY and AgRP [7]. Moreover, it seems that the leptin (secreted by adipocytes in proportion to fat mass and the activator of anorexigenic signals) has a common pathway with estradiol to regulate energy metabolism, namely the STAT3 pathway in POMC neurons [7]. Peripherally E 2 increases both leptin mRNA expression in 3T3 adipocytes and leptin secretion in omental adipose tissue [8]. Alternatively, lack of E 2 after ovariectomy may affect body weight regulation at a central level and mice deficient in ERα show a marked increase of adipose tissue [9]. There is also some evidence that ovariectomy increases hypothalamic NPY expression and decreases CRH immunoreactivity, promoting hyperphagia [10]. Moreover, E 2 deficiency causes central leptin insensitivity [9].
Endocrine-disrupting chemicals (EDCs) are industrial pollutants or natural molecules, which can be found as contaminants in the environment. They can interact with natural hormones by mimicking, antagonizing, or altering their actions [11] and may interfere with several brain circuits [12]. Recent evidence from many laboratories has shown that a variety of environmental EDCs (now called metabolic disrupting chemicals, MDCs) can influence adipogenesis and obesity and these effects may be partly mediated by sex steroid dysregulation due to the exposure to these substances and by alterations of nervous circuits involved in the control of food intake and energy metabolism [13,14].
The BPA, one of the most diffused chemicals in the world, is a xenoestrogen present in a very large number of products and may affect multiple endocrine pathways, due to its ability to bind classical estrogen receptors (particularly ER-α) and non-classical ones (membrane receptors) [15], as well as the G-protein-coupled receptor 30 (GPR30) [16]. BPA can also act through non-genomic pathways [17] and bind to a variety of other hormone receptors (e.g., androgen receptor, thyroid hormone receptor, glucocorticoid receptor, and PPARγ) [18]. In vitro experiments have demonstrated that BPA may dysregulate NPY, AgRP, and POMC expression in hypothalamic immortalized cell lines [19][20][21].
The DES is a powerful nonsteroidal synthetic estrogen (pharmaceutical) used until the early 70s to prevent miscarriage in pregnant women. Later this compound was recognized as a cause of reproductive cancers, genital malformations, and infertility in sons or daughters that had been exposed to this drug in utero [22], but it is still in use for veterinary purposes in some countries and is bioaccumulated in the environment [23]. DES exerts an agonistic effect against ER-α and an antagonistic effect against estrogen-related receptor-γ (ERR-γ) [24]. In ovariectomized female rats exposed to an isoflavone-rich diet, DES had no effect on hypothalamic NPY mRNA and increased POMC mRNA [25].
TBT belongs to the EDC family of organotins, it has been employed primarily as an antifouling agent in paint for boats. Other uses are as a fungicide on food crops, and an antifungal agent in wood treatments and industrial and textile water systems [26]. Due to its use in paint for boats, TBT has exerted toxicological effects on marine organisms. For example, TBT can induce masculinization in fish species [27]. Humans are exposed to TBT largely through contaminated dietary sources (seafood and shellfish [28]). In mammals TBT can increase body weight [29], alter hypothalamic NPY and POMC systems in short-term (4 weeks) exposed adult mice [30,31], and may also alter behavior-exposure to a low dose of TBT induced lower activity, high level of anxiety, and fear in mice [32]. TBT binds with high affinity to steroid receptors; in particular, it binds androgen receptor [33] and interferes with the expression of brain aromatase and estrogen receptors [34]. TBT can act as an agonist of retinoid X receptor (RXR) and peroxisome proliferator-activated receptor-γ (PPARγ) [35]. This inappropriate receptor activation could lead to disruption of the normal developmental and homeostatic controls over adipogenesis and energy balance, especially under the influence of the typical high-fat Western diet [36]. In addition, changes in the microbiome are associated with TBT exposure [37].
As previously reported, studies on the action of EDC on hypothalamic neurons related to eating behavior and energy control used a variety of experimental conditions (exposure to isoflavones, in vitro experiments, and short-term exposure). For this reason, in the present study, we exposed, for a longer time period (4 months), adult male mice to phytoestrogen-free food containing different putative MDCs to understand if the central neuroendocrine, orexinergic, and anorexinergic circuits are differentially affected by these compounds. Due to the alleged xenoestrogenic activity of some of them we also included, as a positive control, a group of animals treated with E 2 .
Body Weight
At the end of the experiment the animals were weighted. Data collected showed a global effect of treatment on the body weight of exposed animals (p < 0.05, F (8) = 2.185). In particular, the post-hoc analysis with Fisher's LSD test showed a reduction in body weight for mice treated with the higher dose of DES (p < 0.05) and for those treated with both doses of E 2 (p < 0.05). No statistically significant effects were observed in the other groups (see Table 1). Table 1. Summary of statistical analysis of body weight data. The values (in grams) are indicated as mean ± standard error of the mean (SEM). Bold numbers and asterisks indicate significant differences among the differently treated groups: * p < 0.05, different from control (p < 0.05, Fisher's test). A preliminary qualitative analysis showed a distribution similar to those already reported in previous contributions [30,[38][39][40][41]. In particular, we did not observe positive cell bodies (confirming previous reports that NPY cell bodies in ARC are visible only after colchicine treatment [42]), whereas a large number of positive fibers was observed along the entire hypothalamus. These were particularly dense within the PVN ( Figure 1) and the ARC ( Figure 2) nuclei, but they were also abundant within the suprachiasmatic, supraoptic, and DMH ( Figure 2) nuclei. Other regions displayed less dense innervations, as for example, the VMH ( Figure 2). In the experimental groups, we observed a qualitative decrease of the NPY immunoreactivity (ir) in all the considered nuclei for all the different treatments. The post-hoc analysis with Fisher's LSD test showed a significant decrease of NPYir in all nuclei and for almost all the treatments. In PVN, all groups showed a significantly lower NPYir than controls (p < 0.01, Figure 1). In ARC, we did not observe statistically significant differences for the lowest dose of TBT and the highest dose of BPA, while all the other treatments induced a significant decrease of NPY expression (p < 0.05, Figure 2). In DMH we observed a significant reduction of NPYir in all treated groups (p < 0.05; Figure 2), except for the lowest dose of DES. Finally, in VMH we observed a strong reduction of NPYir due to the treatments (p < 0.01) except for the highest dose of TBT (for details see Table S2, Supplementary Materials).
POMC System
The distribution of POMCir in control mice was in agreement with the few previous studies that described this system in rats [43][44][45] and mice [31,46]. Contrary to NPY, hypothalamic POMC cell bodies are clearly visible, and they were fully included within the rostrocaudal extent of the ARC ( Figure 3) and periarcuate area, which also showed a local dense innervation of ir fibers. Two major targets of this system are the PVN and the DMH. In the PVN, POMCir fibers outlined the entire nucleus, starting from its rostral portion up to the more caudal levels. The distribution of these fibers was not homogeneous, in particular they were denser in the medial PVN (corresponding to the parvocellular regions of this nucleus) compared to the lateral PVN (corresponding to the magnocellular region) (Figure 1). The DMH nucleus ( Figure 3) showed a denser innervation in the caudal part of the nucleus compared with the rostral part. Other hypothalamic nuclei, such as the VMH, did not show a significant number of positive fibers. In the PVN we did not observe variations due to treatment. In fact, the statistical analysis showed no effect of treatment (F =1.097, p = 0.396, Figure 1). On the contrary, data collected in the ARC showed a decrease of the POMCir (including positive cell bodies and fibers), following the different treatments (F (8) = 8.289, p < 0.001). The Fisher LSD test showed a statistically significant decrease in the groups treated with the highest dose of DES (p < 0.001), the lowest of E 2 (p < 0.001) and in both groups treated with BPA (p < 0.001, Figure 3).
The quantitative analysis also showed a decrease of the POMCir in the DMH following different treatments (F (8) = 19.563, p < 0.001). The Fisher LSD test showed a decrease of POMC ir in all groups compared to controls, except for the lowest dose of TBT (Figure 3; for details see Table S2, Supplementary Materials).
Discussion
The control of energy metabolism and food intake is in part dependent on central neuroendocrine circuits that have been detailed in the introduction. Among the various systems, the NPY and the POMC systems (both located in the hypothalamic arcuate nucleus and sending their axons to other hypothalamic nuclei) exert orexigenic (NPY) and anorexigenic (POMC) effects. Several studies (recently reviewed by [14]) demonstrated that these neural circuits are altered when the animals are exposed to some environmental compounds that are now classified as metabolism-disrupting chemicals (MDCs) [13,47].
In the present study, we showed that some of the putative MDCs, when chronically administered through a phytoestrogen-free diet (reported in the literature as inducing body weight gain [48]), affected the expression of both NPY and POMC in the hypothalamic circuits of adult male mice. For comparison, we included two additional groups, one without any treatment (control group) and the second one exposed to E 2 (added to the diet), which has a well-known anti-adipogenic effect [49,50].
As expected, in the present experiment, both doses of E 2 induced a significant reduction of the body weight in comparison to the control group. On the contrary, male mice fed with the same diet but with different concentrations of three different EDCs, except the group treated with the highest dose of DES, did not show any significant reduction of the body weight. These results suggest that BPA, DES, and TBT are not able, in adult male mice, to counteract the consequence of an exposure to a phytoestrogen-free diet on the body weight, whereas E 2 is able to do this. Therefore, whereas E 2 has an anti-obesogenic effect, the EDCs considered in this study do not show this property. It is possible that the lack of effect on body weight is due to the fact that the reduction of the activity of the orexinergic circuits originated by the reduction of NPY is compensated by the reduction of the activity of the anorexinergic circuits caused by the reduction of the expression of POMC.
Our data show that the NPY expression in male mice hypothalamic nuclei involved in food intake regulation is reduced by E 2 as well as by all tested EDCs at almost all doses. Therefore DES, BPA, and TBT have the same effect of E 2 on the NPY system. In particular, DES and BPA have a well-known strong xenoestrogenic activity because they specifically bind to ERs [51]. On the contrary, TBT does not bind ERs, but it also has xenoandrogenic or antiandrogenic activity [52]. The reduction of NPY expression in the hypothalamus after TBT treatment confirms our previous results [30] and may be due to the activation of other pathways, not directly regulated by E 2 .
The effects of treatments on the POMC system of male mice are less homogeneous. In fact, we observed significant effects on ARC and DMH, while in the PVN we have not detected significant effects. DES, BPA, and also E 2 significantly decreased the POMC expression in ARC, while TBT showed no significant effect. It is important to note that 30% of POMC cells in ARC colocalize with ERα while they do not express ERβ [53], thus suggesting a possible direct role of ERs in regulating part of this system that represents, consequently, a putative target for xenoestrogens, like BPA and DES. The lack of TBT effect is also in line with our recent results that showed no effects of TBT on the POMC system in adult male mice [31].
The POMC neurons of the ARC send axons to two main targets, the DMH and the PVN. All treatments (including TBT at the highest dose) induced a significant decrease in the immunoreactivity in the DMH, whereas no effect was detected in the PVN, even when the quantitative analysis was performed on the different parts of the PVN, according to the method detailed in [54] (results summarized in Figure S3 of Supplementary Material). It is still possible that the paucity of POMC fibers in the PVN (compared to the NPY ones) has prevented the detection of small differences in the present experimental material.
In a limited number of experimental groups, the tested EDCs showed a significant effect in reducing immunoreactivity at the low dose and not at the high dose, for example see the effect of TBT on VMH NPY immunoreactivity, the effect of BPA on ARC NPY immunoreactivity, or the effect of E 2 on ARC POMC immunoreactivity. These results confirm the nonmonotonic dose response described in many experimental situations for several EDCs [55]. The differences of the results of EDC treatments on NPY and POMC immunoreactivity with those obtained with E 2 , are probably due to the activation of pathways not directly or indirectly regulated by E 2 . For example, it has been found that intracerebroventricular injections of oxytocin (OT) in adult fasted male rats decreases food intake [56]. Moreover, a retrograde tracer study revealed OT projection from PVN and SON to ARC, demonstrating that oxytocinergic signaling may regulate feeding [57]. OT cells, expressing ER-β, of the PVN [58], are a possible target for xenoestrogen that binds ER-β, like the phytoestrogen genistein [59]. This suggests that some EDCs may alter POMC expression via the OT system. However, the physiological significance of the OT neuronal projections from PVN and SON to ARC POMC neurons, still remains unclear, and further studies are required to clarify it.
One of the most important regulators of the NPY [60] and POMC [61] systems is represented by the cannabinoid receptor CB1. Some EDCs may modulate the expression of this receptor: prolonged exposure to DES produced a reduction in the mRNA for CB1 receptor in the rat pituitary [62], while BPA caused a downregulation of CB1 receptor in the mice hypothalamus [63]. No data are yet available for an action of TBT on the expression of CB1 receptor. Therefore, it is possible that present results on the alterations of NPY and POMC systems are partly due to an effect of the EDCs on the expression of CB1 receptor and a consequent functional alteration of these two systems. Future work should clarify this aspect.
The levels of circulating glucose are also important in controlling the NPY and POMC circuits, through glucose sensitive neurons located in the VMH and LH (for a recent review see [64]). All the three EDCs analyzed in this study disrupt glucose homeostasis by acting on pancreatic islets [37,65,66]. Even if in the present study we have not detected glucose blood levels, it is therefore possible that part of the dysregulation of the NPY and POMC systems is due to alterations of glucose homeostasis.
Another crucial point is that we do not know, at the moment, if we are observing an activational or an organizational effect of these EDCs. In the first case we may expect that the differences in the expression of immunoreactivity are due to an increase or a decrease in the production of neuropeptides in stable circuits (see the effects of BPA on NPY mRNA in neuronal cell cultures [20]). In the second case the hypothesis is that the exposure to the EDCs may induce permanent (or long-term) changes in the observed circuits. In fact, it has been demonstrated that gonadal hormones produced during puberty are inducing neurogenesis in some hypothalamic [67] or extrahypothalamic [68] nuclei and that this process is necessary to stabilize the sexual differences evidenced in these nuclei. A recent review [69] analyzed the available data for the development of hypothalamic circuits that control food intake and energy balance. In summary, in these circuits neurogenesis is only present during the prenatal period [70], but the full maturation of the connections ARC-PVN is reached during the postnatal days 28-35 [71]. However, more recent studies demonstrated that adult neurogenesis of NPY and POMC neurons in mice ARC is stimulated by changes among high fat-low fat diets [72]. Being our animal was three weeks old, it is therefore possible that exposure to EDCs had altered the connection of ARC towards VMH, DMH, and PVN, or even determined a change in the number of NPY and POMC neurons (BPA may induce apoptosis in hippocampal cells [73]). According to this hypothesis the observed changes in the immunoreactivity could be linked to an alteration (plasticity) of fibers' system reaching these nuclei. At the moment it is impossible to know if NPY and POMC circuits, after such a long exposure to EDCs, when provided with EDCs-free food, may recover to a status comparable to the non-treated animals (this is compatible with an activational effect). Future studies should elucidate this point, in particular not only if there is a recovery, but also how long it will take to recover.
In conclusion, these data, together with those already present in the literature, suggest that EDCs may alter energy metabolism not only at the level of peripheral tissues [13], but also in neuroendocrine circuits involved in the control of food intake, in particular, the NPY and POMC systems. The control of physiological processes by these systems is highly complex, making the understanding of neuroendocrine disruption a particular challenge.
Animals and Treatment
The procedures involving animals and their care were performed in Brescia according to the Union Council Directive of 22 September 2010 (2010/63/UE). The study was approved by the Ethical Committee of Animal Experimentation of the Hospital and the Italian Minister of Health (407/2018-PR). All care was taken to use the minimum number of animals.
C57BJ/6 male mice (Harlan, Udine) were housed in same-sex groups of 4 per cage on a 12:12-h light/dark cycle; animal rooms were maintained at a temperature of 23 • C. Estrogen-free diet was purchased from Dottori Piccioni S.r.L. Via Guglielmo Marconi, 29/31 Gessate (MI, Italy) (https://totofood.it/, assessed on 7 June 2021). The diet was prepared in pellets (the composition is reported in Table S1, Supplementary Materials).
The treatment started when mice were three weeks old and lasted for four months. Animals were divided randomly in nine experimental groups: control mice were fed with the base diet (estrogen-free diet) while experimental groups were fed with the base diet enriched with two different concentrations of E 2 , BPA, DES, or TBT (according to previous studies [74]). All the chemicals were obtained from Sigma-Aldrich, Milano, Italy, dissolved in DMSO and further diluted before their addition to the diet, for homogeneous preparations. These are the doses used: E 2 (stock solution 97%, cat. number E8515; 5 or 50 µg/kg diet); BPA (stock solution 99%, cat. number 239658; 5 or 500 µg/kg diet); DES (stock solution 99%, cat. number D-4628; 0.05 or 50 µg/kg diet); and TBT (stock solution 96%, cat. number T50202; 0.5 or 500 µg/kg diet).
The normal food consumption in adult mice corresponds to 15g/100g body weight/day [75]; since mice used in this experiment had a mean body weight of 30g, it was considered an approximate consumption of 4.5 g food/day was appropriate. Accordingly, in this case mice were exposed daily to approximately 0.15-1.5 µg/g body weight of E 2 , 0.15-15 µg/g body weight of BPA, 0.0015-1.5 µg/g body weight of DES, and 0.015-15 µg/g body weight of TBT.
Body weights were recorded at the end of the experiment, before sacrifice (see Table 1). Food consumption was monitored every two days as the difference between the weight of the pellets supplied and that consumed. Spilled food, if any, was collected in apposite trays underneath the food containers, measured, and taken into account.
Tissue Sampling and Histological Examination
Four months after the beginning of treatment adult mice were deeply anesthetized with an intraperitoneal injection of a mixture of ketamine (100 mg/kg of body weight, Ketavet, Gelling, Italy) and xylazine (10 mg/kg of body weight, Rompun, Bayer, Germany) solution, monitored until the pedal reflex was abolished and killed by cervical dislocation. Animals were decapitated, brains were quickly dissected and placed into acrolein (5% in 0.01 M saline phosphate buffer, PBS) for 150 min at room temperature. Brains were rinsed several times in PBS, placed overnight in a 30% sucrose solution in PBS at 4 • C, frozen in liquid isopentane at −40 • C and stored in a deep freezer at −80 • C until sectioning.
Brains (N = 4 for each group) were serially cut in the coronal plane with a cryostat (Leica CM 1900) at 25 µm of thickness. Sections were collected in four series for freefloating procedure in multiwell dishes, filled with a cryoprotectant solution [76] and stored at −20 • C until used for immunohistochemistry. One series of sections was stained for NPY immunohistochemistry and another for POMC immunohistochemistry. Brain sections were always stained in groups containing each treatment, so that between-assay variance could not cause systematic group differences.
After overnight washing in PBS, sections were incubated in 0.01% sodium borohydride for 20 min to remove the acrolein and rinsed in PBS several times. Then, sections were exposed to Triton X-100 (0.2% in PBS) for 30 min and treated for blocking endogenous peroxidase activity with PBS solution containing methanol/hydrogen peroxide for 20 min. Sections were afterwards incubated with normal goat serum (Vector Laboratories, Burlingame, CA, USA) for 30 min. One series was incubated overnight at 4 • C with the rabbit polyclonal antibody against synthetic porcine NPY (gift by Professor Vaudry, France) diluted 1:5000 in 0.2% PBS-Triton X-100, pH 7.3-7.4 and another with the rabbit polyclonal antibody against POMC (Phoenix Pharmaceuticals, Inc.,Burlingame, CA USA) [31,77,78] diluted 1:5000 in 0.2% PBS-Triton X-100 and 1% of BSA, pH 7.3-7.4. The next day, sections were incubated for 60 min in biotinylated goat anti-rabbit IgG (Vector Laboratories, Burlingame, CA, USA) 1:200. The antigen-antibody reaction was revealed by 60 min incubation with the biotin-avidin system (Vectastain ABC Kit Elite, Vector Laboratories, Burlingame, CA, USA). The peroxidase activity was visualized with a solution containing 0.400 mg/mL of 3,3 -diamino-benzidine (DAB, Sigma-Aldrich, Milano, Italy) and 0.004% hydrogen peroxide in 0.05 M Tris-HCl buffer, pH 7.6. Sections were mounted on chromallum-coated slides, air-dried, cleared in xylene, and cover slipped with Entellan (Merck, Milano, Italy).
The production and characterization of NPY polyclonal antibody has been previously reported [79,80] and it has been employed to detect the NPY system in a wide range of species [40].
The POMC antibody from Phoenix Pharmaceuticals recognizes a sequence corresponding to N terminal amino acids 27-52 of Pig POMC precursor and has often been used in mouse and rat studies [31,42,81].
We performed the following additional controls in our material: (a) the primary antibody was omitted or replaced with an equivalent concentration of normal serum (negative controls) and (b) the secondary antibody was omitted. In these conditions, cells and fibers were completely unstained.
Quantitative Analysis
All sections were acquired with a NIKON Digital Sight DS-Fi1 video camera connected to a NIKON Eclipse 80i microscope (Nikon Italia S.p.S., Firenze, Italy). The staining density of NPY-and POMC-immunoreactive (ir)-containing structures was measured in selected nuclei with the freeware ImageJ (version 1.49b, Wayne Rasband, NIH, Bethesda, MD, USA) by calculating in binary transformations of the images (threshold function) the fractional area (percentages of pixels) covered by immunoreactive structures in predetermined fields (area of interest, ROI). Due to differences in the immunostaining, according to our previous reports [50,63], the range of the threshold was individually adjusted for each section.
For quantification of NPY and POMC systems we selected four hypothalamic nuclei involved in controlling food intake-ARC, DMH, PVN, and ventromedial hypothalamic nucleus (VMH). For each nucleus, we measured the density of immunoreactive structures on three consecutive sections identified by the Mouse Brain Atlas (ARC, VMH, DMH: bregma −1.46mm, −1.58mm, −1.70mm; PVN: bregma −0.70mm, −0.82mm, −0.94mm [82,83]. The ROI selected for each nucleus was a box of fixed size and shape, selected to cover immunoreactive material only within the boundaries of each nucleus (about 140,000 µm 2 for VMH and DMH, 110,000 µm 2 for ARC, and 200,000 µm 2 for PVN). Due to the extreme paucity of immunoreactive structures, it was not possible to measure POMCimmunoreactivity in the VMH.
Statistical Analysis
Collected data were analyzed with the program SPSS 24.0 (SPSS Inc., Chicago, IL, USA); the p values and the significance threshold were set at p ≤ 0.05. Data collected for the body weight were analyzed by one-way ANOVA followed by post-hoc analysis with a Fisher LSD test. Data collected for the immunohistochemistry were analyzed by repeatedmeasure one-way ANOVA. When the analysis did not show significant differences between different levels of the same nucleus, we calculated a mean value for each nucleus that was used to assess variations due to the treatment. When statistically significant, the ANOVA analysis was followed by a Fisher LSD test.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10 .3390/metabo11060368/s1, Figure S1: NPY immunoreactivity. Immunohistochemical comparison of NPY immunoreactivity among control animals (CRL) and the different treated groups (in all case it was shown to be the lowest dose used) in the dorsomedial (DMH), ventromedial (VMH), arcuate (ARC), and paraventricular (PVN) nuclei. Estradiol, E2; tributyltin, TBT; diethylstilbestrol, DES; bisphenol A, BPA. Scale bar = 100 µm, Figure S2: POMC immunoreactivity. Immunohistochemical comparison of POMC immunoreactivity among control animals (CRL) and the different treated groups (in all case it was shown to be the lowest dose used) in the dorsomedial (DMH), arcuate (ARC), and paraventricular (PVN) nuclei. Estradiol, E2; tributyltin, TBT; diethylstilbestrol, DES; bisphenol A, BPA. Scale bar = 100 µm, Figure S3: Regional analysis of POMC immunoreactivity in the PVN. To further confirm the absence of variations in the POMC expression within the PVN, we measured the immunoreactivity, according to our previous studies [54], by dividing the PVN into four quadrants: dorsomedial (DM), dorsolateral (DL), ventromedial (VM), and ventrolateral (VL). The results of this analysis reported no significant differences for all the analyzed subregions and are summarized in the histograms (B−E). Scale bar = 100 µm, Table S1. Composition of the soy-free diet (SFSD), Table S2. Summary of quantitative analysis of the fractional area. Fractional area data in the different nuclei and in the different groups analyzed in this study. The values reported are the mean and standard error of the mean (SEM). Bold numbers and asterisks indicate significant differences (Fisher's test) among the differently treated groups: * p < 0.05, ** p < 0.01, *** p < 0.001 different from control.
Informed Consent Statement: Not applicable.
Data Availability Statement: All the data are available from the authors upon reasonable request. The data presented in this study are available in this supplementary. | 6,998.6 | 2021-06-01T00:00:00.000 | [
"Biology"
] |
Changes in the determinants and spatial distribution of under-five stunting in Bangladesh: Evidence from Bangladesh Demographic Health Surveys (BDHS) 1996–97, 2014 and 2017/18
Background Bangladesh has experienced tremendous change in child nutrition over the past few decades, but there are large differences between different regions in progress made. The question is whether continuation of current policies will bring the progress needed to reach national and international targets on child nutrition security. Data and methods Using national data BDHS 1996/97, 2014, and 2017, this study attempts to map such reductions across Bangladesh and to explore the distribution of covariate effects (joint effects) that are associated with childhood stunting over these two periods, overall and by region. The main contribution of this paper is to link observed stunting scores to a household profile. This implies that different variables are evaluated jointly with stunting to assess the likelihood of being associated with stunting. Results Overall, the covariates: ‘Parental levels of education’, ‘children older than one year old’, ‘children live in rural area’, ‘children born at home’ formed the country winning profile in 1996/97, whereas parental levels of education disappear in the winning profile for children stunted in 2014. This implies that over the years, Bangladesh has been successful in addressing parental education for long-term reductions in child undernutrition. In addition, the diversity of profiles of households with stunted children increases over time, pointing at successful targeting of policies to increase food security among children over the period. However, in areas where improvements have been insignificant, also the profiles remain stable, indicating a failure of policies to reach the target populations. The analysis for 2017 confirms this picture: the diversity of profiles remains high, with little change in the dominant profiles. Conclusion Further decline in stunting is possible through region specific multipronged interventions, targeting children older than one year among vulnerable groups, in addition with strengthening family planning programs as larger families also have a higher risk to have stunted children. In general, the profiles in 2014 and 2017/18 are much more diverse than in 1996, which can be explained by the relative success of specific targeted policies in some divisions, while being much less successful in other regions. In sum, our results suggest that the challenge lies in the implementation of policies, rather than in the generic approach and assumed theory of change.
Introduction
Globally, nutritional status measured as length-for-age/height-for-age (stunting) is one of the predictors of the well-being of young children. The World Health Organization has identified malnutrition as the underlying cause of close to half of child deaths worldwide [1,2]. In developing countries annually about 13 million under-fives die of causes linked to malnutrition [3]. Stunting affects children not only by increasing deaths among them and making them more susceptible to disease; it also negatively influences children's behavior. There is strong evidence that children who are suffering from linear growth failure are more likely to be affected by infectious disease, for example, malaria, diarrhea, and pneumonia [4], and poor nutrition during childhood causes irreversible damage to cognitive development and future health. Strong associations are observed between child malnutrition and less schooling and reduced economic productivity [5][6][7] estimate that 2.91 years of performance deficit that resulted from stunting caused upto 19.8% loss of adult income. It also appears that damage grieved in early life can lead to permanent impairment and it may also affect future generations. For example, malnourished girls often grow up to become malnourished mothers themselves with detrimental impacts such as low birth weight on their offspring.
In the recent past, Bangladesh has made notable achievements in the development indicators of child health. Under-five mortality has declined in Bangladesh from 133 per thousand in the mid-90s to 46 in recent years [8,9]. The rate of stunting (length-for-age/ height-for-age<-2 SD) among under-five children-an indicator of the state of the chronic undernutrition-in Bangladesh has come down from 55% (59% WHO new) in 1996-97 to 29% (36% WHO new) in 2014 and 23% (31% WHO new) in 2017/18. However, despite of such reductions, still almost one-quarter of all under-fives suffer from stunting, accounting for over 5 million children. Earlier studies on stunting in Bangladesh rarely focus on both determinants and spatial variation of stunting as indicator of childhood malnutrition. Studies frequently report on the effects of socioeconomic and demographic factors that are likely to be associated with stunting prevalence in children in Bangladesh [10][11][12][13][14][15][16]. Heady et al., 2015 [17] performed multivariate analysis using successive DHS data sets (1997)(1998)(1999)(2000)(2001)(2002)(2003)(2004)(2005)(2006)(2007)(2008)(2009)(2010)(2011). The major factors correlated with the decline in stunting in Bangladesh include a rise in household assets; improvements in parental education; a reduction in open defecation; prenatal and birth delivery care; birth order and birth intervals; and maternal height. However, these conclusions are drawn at national level and do not account for the substantial spatial variation in stunting levels and causes.
Bangladesh has made significant progress in women's education, infrastructure and economic development, but stunting remains a serious challenge, particularly since not all areas have witnessed significant reductions in stunting rates. Hence, to improve the nutritional status of all children across Bangladesh, it is vital to understand the spatial variation of stunting as well as the local dynamics. Hence, this paper studies the spatial variation of stunting, as well as spatial patterns of explanatory factors, in the periods 1996-97-2014. In addition, we analyzed BDHS 2017/18 data to compare the dynamics of the changes in the exploratory factors between periods 1997-2014 and 2014-2017.
The large literature on determinants of stunting report that age and gender of the child, birth weight and birth interval; mother's education and nutritional status; household economic status and family size; and residential place are significant predictors of stunting. The paper considers the risk factors for stunting in a spatially explicit analysis and compare our findings for 1996/97 with those for 2014 and 2017. These highlights possible shifts in critical factors associated with stunting over time. The findings may help to meet the goals of reducing stunting at national and international targets that Bangladesh has committed to.
Bangladesh played a critical role in the second International Conference on Nutrition in 1992 and made a commitment to include nutrition policy in all development activities. As committed, the first National Food-Security and Nutrition Policy (NFNP 1997) was developed in 1997. The initial success achieved during the decade between 1997 and 2007 (the stunting rates declined by 12%, from 55% to 43%) raised the government's ambitions, and the Health, Population and Nutrition Sector Development Programme (HPNSDP) set a target to bring down the stunting rate by approximately 25 percent by 2021 [18]. At the same time the World Health Assembly (WHA) requires Bangladesh to achieve a stunting rate of 21.6% by 2025 [19,20], a reduction of 40%. However, the Bangladesh DHS 2011 shows that between 2007 and 2011, the stunting rate has almost stagnated (the prevalence of stunting was only 2% lower, at 41%). Stunting rates must decrease at an annual rate of 5.3 percent in the period from 2014 to 2021 if the HPNSDP target is to be achieved and at a rate of 4.8 percent in the period from 2014 to 2025 if the WHA target is to be met. By contrast, the rate at which stunting has fallen in the period from 1996-97 to 2017 is about 3 percent, much too low to meet both national and international targets. Given the past record, it is unlikely that current policies to reduce hunger and increase production will create the necessary impact. The paper's comparison between three BDHS [8,9,21] may provide useful information on the impact of the National Food and Nutrition policy in 1997 and other nutrition policies after 1997 [22,23] and add insights to the regional disparities in achieving improvements. In the discussion section, we will highlight some important messages from the findings for the formulation of future nutrition policy in Bangladesh.
Since 1998, several national programs have been launched by the Ministry of Health, and Family Welfare (MOHFW). The aim of these programs was to stimulate demand for health, population and nutrition services while also improving the quality of these services to reduce morbidity and mortality, and improve nutrition status, especially of women and children. The first Health and Population Sector Program was implemented during the period 1998-2003, followed by the second Health and Nutrition and Population Sector Program (2003-2011), the third sector-wide program HPNSDP (2011-2016), and the fourth HNPSDP (2016-2021). Policy briefs using the data from BDHS 2014 [22] highlight the importance of nutrition-specific interventions like breastfeeding, complementary feeding, micronutrient supplementation, adequate and balanced diet during pregnancy, and treatment of acute malnutrition. However, the report also revealed that the coverage of these interventions was not scaled up till the writeup of the policy briefs in 2016. Therefore, at first step we target to analyze the data sourced from BDHS 2014 covering the period between 1996-2014 before the implementation of sector-wide, holistic approach, HNPSDP program between 2016-2021. Next, we also analyze the data from the BDHS 2017/18, to address the impact of the HNPSDP program. We note that at present, the results of the 2020 BDHS are not yet available The results are discussed in the discussion section 4.
Data
The study relies on three DHS surveys for Bangladesh (1996/1997, 2014 and 2017/18, available from DHS (2022) [24]. These datasets are national representative surveys, gathering information on a wide range of socio-demographic and health indicators of women of reproductive age , and young children aged 0-59 months. In 2017 and 2014, the households are located in 672, and 600 clusters, covering 420 Upazilas of the 492 and 335 of the 463 Upazilas with fairly large samples in each of the 8 divisions Barisal, Chittagong, Dhaka, Khulna, Mymensingh (newly formed in 2015), Rajshahi, Rangpur and Sylhet. The 1996/97 survey covered 301 clusters from 6 divisions. All surveys are cross-sectional with two-stage stratified sampling design. Census enumeration areas are selected as primary sampling units in the first stage of the sampling frame. At the second stage households are randomly selected from the primary sampling unit and ever-married women aged 15-49 years are interviewed from the selected households (for details about sampling design see [8,9,21]. Geographic coordinates and altitude of each cluster are recorded for the 2014 and 2017 datasets, but since the 1996 DHS is not geo-referenced, analysis is done at the lowest administrative level. For later reference, the map of Bangladesh is included in Fig 1. Data for this study are extracted from the Stata file "Children's Recode" of the surveys. The measurement of linear physical growth i.e. height-for-age (stunting) as indicator of chronic nutritional status is available in the data set. The dependent variable for this study is stunting (HAZ<-2 SD), i.e., being more than two standard deviations below the median (-2 SD) of the WHO reference population in terms of height-for-age. For comparison between three surveys the stunting rates for the 1996 survey are recomputed using the new WHO guidelines (the NCHS/CDC/WHO growth standard, WHO new). The study weighted sample for 1996/97, 2014 and 2017/18 surveys include 4711, 6965, and 8759 children aged 0-59 months old, respectively. The variables selected from these surveys are discussed in the Results section.
Characterizing households: Integrated statistical analysis
The main contribution of this section of the paper is to link observed stunting scores to a household profile. This implies that different variables are evaluated jointly with stunting to assess the likelihood of being associated with severe or moderate stunting. Formally, observed values of the variables used in the analysis define a joint empirical frequency distribution. Conditional frequency distributions can be derived from this joint distribution by partitioning the answers by say, S respondents indexed s into a vector y of dependent variables and a vector x of independent variables, taking the frequencies of y conditional on x. As the conditional frequencies are naturally interpreted as probability estimates, we also compute the most probable characteristics associated to each x-value, which can be interpreted as "winner", in the sense of having the highest probability to have the desired y outcomes, as well as the runner up (second-best guess) and so on. The coverage of a profile is the mass of a class divided by the total mass of the relevant group, while the edge of the winning profile over the runner up is the ratio of their maximum likelihood probabilities. Formal definitions and conditions are included in Box 1 (reproduced from Van Wesenbeeck et al., 2016) [25]. Selection of the best profile is based on two criteria: (1) the coverage of the profile, (2) the edge over the runner up.
Given that potentially, many variables are available (say, an entire survey), restriction to a limited set of candidates is vital. Theoretical considerations can provide guidance, and univariate analysis may also be used to identify potentially important aspects that need to be included in the set used for the analysis. In addition, a balance needs to be struck between including a relatively large set of variables in the profile, with a high degree of specificity, but a low number of observations in each profile, or a small set, with a broad coverage, but less specificity. [25] have shown that the best results are obtained when a total of 5 variables are included in the profile. Hence, our study tests for all possible permutations of 5 variables from the total set of eligible variables identified from the literature and confirmed through univariate analysis (see section 3.2 below). Further selection of the best set of 5 variables is done by considering the coverage and edge of each of the permutations.
Identifying and locating vulnerable groups: 1996 vs 2014
The first step is to compare the prevalence of stunting, and thereafter severe and moderate stunting throughout Bangladesh between the years 1996 and 2014 as this period was characterized by targeted interventions. To identify severe and moderate stunting, we use the WHO classification: stunting is severe if the score is below -3 standard deviations from the norm, and moderate for a score between -2 and -3 standard deviations. As mentioned earlier, stunting rates for 1996 are recomputed using the new WHO guidelines to enable comparison with the 2014 figures. Fig 2 shows the district level distribution of stunting (Z<-2 SD) of under-fives for 1996 and 2014, respectively. In 1996, stunting was highly prevalent in all districts of Rangpur, Sylhet, Barisal and Mymensingh regions, and a few districts from other regions (> = 50%). However, nearly 20 years later, a significant reduction in stunting prevalence is observed. Some exceptions stand out, however: the district of Netrokona in the Mymensingh region shows a high stunting rate of > = 50% in both periods. The predominant improvement (stunting rate < = 30%) has occurred in the north-west-south parts of Bangladesh e.g. Rangpur, Rajshahi, Khulna, and Barisal with the exception in Gaibandha (> = 50%), and Panchagar (40-50%) from the Rangpur region. A few districts from Rajshahi and Khulna, bordering Dhaka, also made less improvement (stunting rates 30-40%).
Bangladesh the severe stunting rates are below 20%, whereas in 1996/97, such low levels were observed only in some districts of Rajshahi, Rangpur, and Khulna. A massive reduction in severe stunting rates has occurred in the south-east part of Bangladesh. Fig 4 shows that a similar distribution in the changes between 1996/97 and 2014 is observed for moderate stunting rates in Bangladesh. Reductions mainly occurred in Rangpur, Rajshahi, Khulna and Dhaka (areas are next to Khulna). However, there was no change in the moderate stunting rates in Sylhet, and neighboring areas in other divisions e.g., Mymensingh, Barisal, and Comilla. The distributions of severe, moderate, and total stunting rates throughout Bangladesh and the significant difference between two surveys raise an important question of policy implications and impact evaluation. Which factors play a role in explaining the differences in the reductions of stunting rates among under-fives in Bangladesh? Our analysis explores this to support next steps for policy efforts to achieve the SDG goals in Bangladesh. Univariate analysis reveals that children's residence in urban versus rural areas is significantly associated with stunting probability, although the gap decreases over the years, while gender of children only varies marginally with stunting probabilities in children under-five in all periods. The percentage distribution of stunting varies significantly (p<0.001) by mother's height, mother's body mass index (BMI), parent's schooling years, home delivery versus institutional delivery, and household asset quintiles. The presence of other children under five in the household (indicating lack of family planning) is marginally varying with stunting rates in both periods but is still included in the polling analysis because of the high priority given to family planning in the 4 th Health Sector Programme, 2018-2021 (FP2020, 2019) [26]. This leads to the following set of variables qualify for inclusion in the profile: Gender and age of children; education of the mother; education of the father; BMI of mother; height of mother, age of mother at their child's birth; place of delivery; wealth quintile (rural/urban specific and national); residence (rural/urban), administrative divisions/regions; and number of other children under five in household. We note that S1 Table in the Annex also includes the results for the 2017/18 survey, where in qualitative terms, the same results follow, the only difference being that that the rural/urban gap has decreased. The analysis will address the important key questions why and how the progress made towards reducing the stunting rates over the period of decades; do the covariates play a significant role?
Polling results, 1996 vs 2014
Polling analysis helps to link observed stunting scores to a household profile. This implies that all permutations of subsets of five variables from the total set of selected ones are evaluated to assess the likelihood of being associated with the observed stunting scores among under-fives in Bangladesh. Hence, 462 polling analyses are performed, from which winning profiles (profiles with maximum coverage and edge) are selected.
We present our polling results separately for both surveys in Tables 1 and 2. Table 1 shows the winning profiles with coverage (likelihood) and edge (odds ratios) for all (including newly formed Mymensingh division) divisions and for Bangladesh as a whole. In the sample, overall, 24% of all stunted children in Bangladesh are associated with the wining profile of child is older than one year; residence is in rural area; both parents have no education; and child is born at home. The likelihood of this wining profile is found to be 2.87 times higher than the likelihood of second-best profile, implying a high confidence level in this association. However, to target heterogeneous populations for policy making, polling analysis at lower levels of administrative divisions is also included. The national winning profile was observed for five divisions (Dhaka, Mymensingh, Rajshahi, Rangpur, and Sylhet). Chittagong and Khulna deviate from the national wining profile by one variable 'more than one under-fives in the HH', Barisal deviates by two variables 'male child' and 'mother has primary education'. The highest coverage (about 39% of all stunted children) is observed in Sylhet, followed by 33% in Rajshahi, 29% in Rangpur and 26% in Dhaka. The odds ratios for all these winning profiles were significantly higher than the likelihood of second-best profiles. Table 2 reveals the winning profile for children who are scored as stunted in 2014. This profile includes: child is older than one year; residence is in rural area; maternal BMI is normal; child is born at home; and more than one under-fives in a household. 14% of all stunted children in the sample are being associated with this profile, and the likelihood of this profile is 1.6 times higher than the likelihood of the second-best profile. This implies that confidence in this association is much lower than that for the winning profile in 1996, pointing at more diversity in the types of households with stunted children than in this earlier period.
The winning profile of being stunted is heterogeneous by different divisions. The winning profile in Mymensingh covers 24% with odds ratio 3.73. The second highest coverage is observed in Rangpur (21%, odds ratio: 1.85). Although the combinations of variables in the profiles were different, rural residence and child older than one year are uniquely distributed in all winning profiles. 'Normal BMI'; 'child is born at home' and more than one under-fives in a household' are the second dominating combination of variables in winning profiles. 'Poorest household' and 'Father has no education' were found to be the dominant characteristics of the winning profile that explain childhood stunting only in Sylhet. This diversity in results per region support the national finding that the diversity of households with stunted children has increased, which may be caused by a relatively successful targeting of nutrition policies. Section 4 returns to this point.
Polling results: 2014 vs 2017/18
Finally, we considered the BDHS 2017/18 data on stunting and addressed the question whether this would confirm the dynamics observed in the period 1997-2014. Given the change from targeted to more holistic approaches to decrease stunting, we would expect no significant change in the diversity of the profiles, nor in the profiles themselves. Table 3 presents the polling results for 2017/18. The polling results in Table 3 indeed confirm that the coverage has remained low (and therefore the diversity of profiles high. In fact, for Bangladesh as a whole and for all regions except Rajshahi the coverage fell. Comparing the winning profiles, the results also confirm that by and large, the profiles have remained unchanged.
Discussion
The country has experienced tremendous change in child nutrition over the past few decades. A significant reduction in childhood stunting was achieved during this period: from a rate of 59% (new WHO) in 1996-97 to 36% (new WHO) in 2014, and 31% in 2017/18 [21]. The Health, Population and Nutrition Sector Development Programme (HPNSDP) of the Government of Bangladesh has set a target to bring down stunting rate at about 25 percent by the period 2016-2021, however, World Health Assembly (WHA) targets this rate to 21.6% by 2025 (NIPORT et al., 2016; Osmani et al, 2016). Using national data BDHS 1996/97,2014, and 2017/18 this study attempted to map such reductions across Bangladesh and to explore the distribution of joint covariates effects (wining profiles) that are associated with childhood stunting over these periods by regions and overall. The results give an insight of the wining profiles covering the period of the first, second and third Health, Population and Nutrition Sector Programs in Bangladesh, as well as covering the start of the implementation of a wide range of nutrition sensitive services and the fourth sector-wide population sector program between 2016-2021.
Overall, the covariates: 'Parental levels of education', 'children older than one year old', 'child lives in rural area', 'child born at home' formed the country winning profile in 1996/97, whereas parental levels of education disappear in the winning profile for children who were stunted in 2014. The common variables in the winning profile associated with stunting in both periods are 'rural residence', 'child older than one year', and 'home birth'. This situation reflects that the country was failing in fulfilling its existing policies see Table 3 of [22] to ensure improved nutrition for all. This finding suggests strengthening the National Policy on Infant and Young Child Feeding, to improve access to nutrition and maternal care services.
The 2014 wining profile includes, instead of "parental levels of education", "normal maternal BMI" and 'more than one under-fives' in the HH. This finding provides support for the priority given to strengthening family planning services in the current fourth Health Sector Progamme. In line with common findings in the literature on the detrimental effects of parental schooling years on childhood stunting [13,14,17], the likelihood of being stunted was higher among no educated parents in 1997. Interestingly, however, for 2014, winning profiles no longer include parental lack of education. This may point to the fact that over the years, Bangladesh has been successful in addressing parental education for long-term reductions in child undernutrition. The results for 2017/18 confirm the results for 2014, with an increase diversity in profiles and only minor changes in the variables included in the winning profiles, nationally as well as at regional level. Studies evident on the success of women's education in Bangladesh include e.g. [22,23]. However, the role of father's education is underrated, and our findings for 1996 support the conclusion in Vollmer et al (2017) [27] that father's education is also an important factor in children's nutritional status. In addition, Saha et al., 2019 [15] specifically stressed the impact of father's education to bring down the high prevalence of stunting in Sylhet; in our results, it seems that this approach has been successful, as father's education no longer figures in the dominant profile for 2017 while it did in 1996/1997 and 2014. Generally, the results for 2017 confirm the results for 2014, with an increase diversity in profiles and only minor changes in the variables included in the winning profiles, nationally as well as at regional level. For the period 1996/97 until 2014, a significant reduction in the distribution of stunting is observed, but this reduction was not uniform across the country. For example, the district Netrokona from the Mymensing region consistently has the highest stunting rates in both periods, and a small reduction was observed in other districts of this region. Further support is evident from our polling results where no changes are observed in the household profiles "rural residence"; "father has no education", "child is older than one year old" "child is born at home" associated with observed stunting scores between two periods. Besides, geographic location perhaps hinders the well-being of residents in Netrokona which is situated in the northern part of Bangladesh, near the Meghalayan border. This finding calls for necessary action for national nutrition services (NNS) for further reduction of stunting among these vulnerable groups.
The predominant improvement occurred in the eastern-south-western parts of Bangladesh e.g. Rangpur, Rajshahi, Khulna, and Barisal. Diffusion of agrarian and cultural practices between the two Bengali-speaking regions, i.e. West Bengal and Bangladesh, cannot be ignored. We can corroborate this argument with the fact that the bordering districts of southern West Bengal, that are adjacent to Khulna, Rajshahi and a few other districts hold similar demographic and nutritional rates [28]. Needless to mention, there are similarities in the nature of the soil in Khulna, Rajshahi and adjacent districts of West Bengal. These areas are covered with loamy or clay loam soil of the Gangetic plain that has good agricultural productivity, which may explain the relatively better food supply. In addition, there has been sufficient evidence of women's education, autonomy and decision-making capacity in these regions and thus they excel in child nutrition when compared to the other divisions of Bangladesh. Hence favourable climatic and agricultural environment along with women's empowerment and knowledge related to child nutrition serve as primary factors to reduce the stunting among children, as compared to northern divisions of the country.
Sylhet remains the worst division of Bangladesh in terms of stunting prevalence. National food security data have placed Sylhet as a relatively food secure region and yet this region has the poorest nutritional status [29]. Although Sylhet is a relatively rich district, poverty coexists with richness [30,31]. The local geography of Sylhet is also not favourable for year-long agricultural production, due to leached tropical soil, lack of crop rotation and environmental shocks (BBS 2007). Apart from improving economic condition in Sylhet, another key intervention required is improving education levels and economic status of the poor.
A few districts from Rajshahi, Khulna and Rangpur (bordering Dhaka) showed little improvements. This specifically holds for the Gaibandha area of Rangpur. Furthermore, some districts from the southeast part of Bangladesh e.g., Sylhet, Comilla, Mymensingh, Barisal, and Chittagong consistently have the highest percentage of childhood stunting rates. For these regions, there seems to be a clear relation with hunger and poverty reductions, envisaged in the program NFNP of 1997 and other nutrition policies after 1997. It seems that areas similar to Sylhet may have less access to national nutrition services than planned under the Health, Population and Nutrition Sector Development Program and National Nutrition Policy in 2011.
A renewed call for action is needed to address the underlying causes of undernutrition including maternal and paternal education, child marriage and early first birth, sanitation and hand washing practices, access to food and health care, infant and young child feeding practices and the status of girls and women in the family and in society.
In general, the profiles in 2014 are much more diverse than in 1996, which can be explained by the relative success of specific targeted policies in some divisions, while being much less successful in other regions. This would be consistent with the observation that not only stunting levels have not fallen in these areas, but also that the dominant profiles for households with stunted children have remained largely unchanged.
The results for the period 2014-2017 confirm this picture: for Rajshahi, the progress in reducing stunting has been the lowest across Bangladesh (2%), and this is the only region where the coverage of the dominant profile increased, and the edge is also high. For all other regions, the coverage falls, with increases in diversity being strongly correlated with the improvement in stunting. This therefore suggests that the challenge lies in the implementation of policies, rather than in the generic approach and focus.
We note that as we had no access to the 2020 DHS data, we cannot account for any impact of Covid-19 on stunting outcomes, although it is, of course, unsure whether the data already capture the effects of the pandemic anyhow. However, given assessments of its impact that are by now available for other countries e.g.,see [32,33], it is seems likely that there has been a strong negative effect of Covid-19 on stunting, although it is also clear from the literature that targeted interventions can mitigate the negative effects to a large extent. our findings remain valid also for interventions in crisis situations such as the pandemic, as they highlight the importance of access to antenatal and postnatal care, as well as nutritional interventions for children older than 1 year to mitigate the negative impact of these shocks.
Conclusion
Our study supports the NNP [34], which acknowledges the existing policies of the country related to nutrition but also makes an effort to utilize and incorporate these policies as means to the overall end-an improvement in nutritional status see [19] and Table 2 in [23]. In general, our findings reflect that the access to antenatal and postnatal care, as well as nutritional interventions for children older than 1 year could strongly improve the nutritional status of children. In addition, we also find support for a policy focus that explicitly includes the fathers, particularly when it comes to strengthening education. For specific regions, progress has been significantly slower than average. This in particular holds for Sylhet and areas with similar characteristics, which seemed to have suffered from a lack of access to nutritional services planned under various policy interventions. For such regions, renewed efforts are needed, targeting children older than one year among vulnerable groups, in addition to strengthening family planning programs (because of the association of larger families with stunting prevalence. Regular repetition of the analysis of progress and changes in profiles of households to be targeted would allow not only to track progress, but also to fine-tune policies where needed.
Supporting information S1 | 7,323.4 | 2022-12-01T00:00:00.000 | [
"Environmental Science",
"Economics"
] |
Field and Evaluation Methods Used to Test the Performance of a Stormceptor r Class 1 Stormwater Treatment Device in Australia
Field testing of a proprietary stormwater treatment device was undertaken over 14 months at a site located in Nambour, South East Queensland. Testing was undertaken to evaluate the pollution removal performance of a Stormceptorr treatment train for removing total suspended solids (TSS), total nitrogen (TN) and total phosphorous (TP) from stormwater runoff. Water quality sampling was undertaken using natural rainfall events complying with an a priori sampling protocol. More than 59 rain events were monitored, of which 18 were found to comply with the accepted sampling protocol. The efficiency ratios (ER) observed for the treatment device were found to be 83% for TSS, 11% for TP and 23% for TN. Although adequately removing TSS, additional system components, such as engineered filters, would be required to satisfy minimum local pollution removal regulations. The results of dry weather sampling tests did not conclusively demonstrate that pollutants were exported between storm events or that pollution concentrations increased significantly over time.
Introduction
The increase in impervious surface area associated with urban development has resulted in increased stormwater runoff volumes and increased pollution loads for downstream receiving waters [1,2].The management of stormwater in urban areas has therefore become a priority issue during the planning, construction and maintenance of urban developments [3].
A wide range of best management practices (BMPs) have been implemented over the last few decades to remove pollution from stormwater runoff [4][5][6][7].These include sediment basins, swales, rain gardens, wetlands and biofilters.These devices primarily function by filtering and removing the sediment contained within stormwater runoff.Supplementary biochemical treatment processes that remove nutrients from urban runoff may also occur within the media and plants used in various BMPs [8].To prolong the useful life of these devices, periodic removal of the trapped sediment is required.However, removal of the sediment can often be difficult and costly to achieve in practice, and this can limit their application [9].The size of some of the BMPs can also restrict their use in dense urban environments.
Proprietary stormwater treatment devices (PSTDs) have also been widely implemented in urban areas over the last few decades to manage stormwater by reducing peak flows and downstream pollution loads [10][11][12].Compared to complex treatment trains and large surface area basins, PSTDs are designed for easy installation and maintenance and are becoming more popular in Australia, as well as in the rest of the world [13,14].There has been a range of studies that have focused on the performance and evaluation of conventional BMPs.However, much less is known about the pollution removal performance of PSTDs [14,15].This paper reports on the pollution removal performance results of a series of field-based tests undertaken on a PSTD (Class 1 Stormceptor ; Figure 1).The PSTD, located on the Sunshine Coast, Australia, was subjected to a series of natural rainfall events over a period of 14 months.Water quality tests were undertaken to determine the levels of removing total suspended solids (TSS), total nitrogen (TN) and total phosphorous (TP) removed by the system during rainfall events and dry weather for potential leaching evaluation.Most pollutants from urban areas are transported during wet weather conditions rather than dry, which is why this testing was undertaken during rainfall events.
Sustainability 2015, 7, page-page 2 are designed for easy installation and maintenance and are becoming more popular in Australia, as well as in the rest of the world [13,14].There has been a range of studies that have focused on the performance and evaluation of conventional BMPs.However, much less is known about the pollution removal performance of PSTDs [14,15].This paper reports on the pollution removal performance results of a series of field-based tests undertaken on a PSTD (Class 1 Stormceptor ® ; Figure 1).The PSTD, located on the Sunshine Coast, Australia, was subjected to a series of natural rainfall events over a period of 14 months.Water quality tests were undertaken to determine the levels of removing total suspended solids (TSS), total nitrogen (TN) and total phosphorous (TP) removed by the system during rainfall events and dry weather for potential leaching evaluation.Most pollutants from urban areas are transported during wet weather conditions rather than dry, which is why this testing was undertaken during rainfall events.In particular, the system has been designed to specifically remove TSS; removal of TP and TN has been regarded as an added bonus.The suitability of the performance evaluation calculation methods used has also been discussed.
Methodology
Testing was undertaken over a period of 14 months at a commercial-zoned site in Nambour, approximately 100 km north of Brisbane, Australia.The site comprised a total area of 2800 m 2 , with approximately 1848 m 2 of roof area (66%), 924 m 2 of impervious concrete driveway (33%) and 28 m 2 (1%) of landscaped area.
Treatment Train Approach
The stormwater treatment train (Figure 2) included an underground rainwater tank (roof water capture and reuse), gully pits and surface drains, as well as the PSTD.Roof water from the site was firstly directed to an underground rainwater tank (shown as a blue dot in Figure 1), which then overflowed to the PSTD (shown as a red rectangle in Figure 1) once the tank was full.The surface runoff from the carpark area was drained directly to the PSTD via a series of gully pits, surface drains and underground pipes.
A schematic of the treatment train and flow paths from the site is presented in Figure 2. Once treated, stormwater was discharged to the municipal stormwater drainage system and eventually into Petrie Creek, a sub-catchment of Maroochy River, which is comprised of predominantly low-land freshwaters within partly-confined valleys.Petrie Creek supports rare and threatened species and diverse invertebrate and fish populations [16].In particular, the system has been designed to specifically remove TSS; removal of TP and TN has been regarded as an added bonus.The suitability of the performance evaluation calculation methods used has also been discussed.
Methodology
Testing was undertaken over a period of 14 months at a commercial-zoned site in Nambour, approximately 100 km north of Brisbane, Australia.The site comprised a total area of 2800 m 2 , with approximately 1848 m 2 of roof area (66%), 924 m 2 of impervious concrete driveway (33%) and 28 m 2 (1%) of landscaped area.
Treatment Train Approach
The stormwater treatment train (Figure 2) included an underground rainwater tank (roof water capture and reuse), gully pits and surface drains, as well as the PSTD.Roof water from the site was firstly directed to an underground rainwater tank (shown as a blue dot in Figure 1), which then overflowed to the PSTD (shown as a red rectangle in Figure 1) once the tank was full.The surface runoff from the carpark area was drained directly to the PSTD via a series of gully pits, surface drains and underground pipes.
A schematic of the treatment train and flow paths from the site is presented in Figure 2. Once treated, stormwater was discharged to the municipal stormwater drainage system and eventually into Petrie Creek, a sub-catchment of Maroochy River, which is comprised of predominantly low-land freshwaters within partly-confined valleys.Petrie Creek supports rare and threatened species and diverse invertebrate and fish populations [16].The hydraulic design of this device facilitates a minimum four-minute retention period that provides conditions within the secondary (offline) chamber, promoting the separation of total suspended solids (TSS), hydrocarbons (TPH) and other pollutants (nitrogen and phosphorous) through the coalescer unit (Figure 3).Incorporated into the hydraulic flow of the unit, the coalescer is a polyethylene, oleophilic matrix that filters and then repels hydrocarbons from water.The arrows in Figure 3 show the flow path that the treated stormwater takes through the PSTD.The primary chamber of the PSTD is designed to trap gross pollutants and sediment greater than 0.2 mm in diameter, and the coalescer in the secondary chamber is designed to separate immiscible liquids (oil and grease) from water.The maximum treatment capacity of the PSTD used in this study was 20 L/s.This flow volume was referred to as the treatable flow rate (TFR), and it is from these flows that auto-samplers located at Points A and B (Figure 3) collected water samples for analysis.In particular, the study evaluated the ability of the PSTD to remove TSS contained in stormwater effluent to the levels specified by the Queensland State Planning Policy [17].This policy is intended to control stormwater pollution through the development and approvals process.The policy specifies that pollution emanating from development sites must be reduced by 80% for total suspended solids (TSS), 60% for total phosphorus (TP) and 45% for total nitrogen (TN) [17].
The manufacturers recommend that the PSTD should generally be maintained at least annually.However, this is also dependent on observed pollution loads.Maintenance includes sediment removal from the gross pollutant trap (GPT) section of the unit via a suction hose.The coalescer is subjected to a low pressure wash during maintenance, with the resultant wash-off also being The hydraulic design of this device facilitates a minimum four-minute retention period that provides conditions within the secondary (offline) chamber, promoting the separation of total suspended solids (TSS), hydrocarbons (TPH) and other pollutants (nitrogen and phosphorous) through the coalescer unit (Figure 3).Incorporated into the hydraulic flow of the unit, the coalescer is a polyethylene, oleophilic matrix that filters and then repels hydrocarbons from water.The hydraulic design of this device facilitates a minimum four-minute retention period that provides conditions within the secondary (offline) chamber, promoting the separation of total suspended solids (TSS), hydrocarbons (TPH) and other pollutants (nitrogen and phosphorous) through the coalescer unit (Figure 3).Incorporated into the hydraulic flow of the unit, the coalescer is a polyethylene, oleophilic matrix that filters and then repels hydrocarbons from water.The arrows in Figure 3 show the flow path that the treated stormwater takes through the PSTD.The primary chamber of the PSTD is designed to trap gross pollutants and sediment greater than 0.2 mm in diameter, and the coalescer in the secondary chamber is designed to separate immiscible liquids (oil and grease) from water.The maximum treatment capacity of the PSTD used in this study was 20 L/s.This flow volume was referred to as the treatable flow rate (TFR), and it is from these flows that auto-samplers located at Points A and B (Figure 3) collected water samples for analysis.In particular, the study evaluated the ability of the PSTD to remove TSS contained in stormwater effluent to the levels specified by the Queensland State Planning Policy [17].This policy is intended to control stormwater pollution through the development and approvals process.The policy specifies that pollution emanating from development sites must be reduced by 80% for total suspended solids (TSS), 60% for total phosphorus (TP) and 45% for total nitrogen (TN) [17].
The manufacturers recommend that the PSTD should generally be maintained at least annually.However, this is also dependent on observed pollution loads.Maintenance includes sediment removal from the gross pollutant trap (GPT) section of the unit via a suction hose.The coalescer is subjected to a low pressure wash during maintenance, with the resultant wash-off also being The arrows in Figure 3 show the flow path that the treated stormwater takes through the PSTD.The primary chamber of the PSTD is designed to trap gross pollutants and sediment greater than 0.2 mm in diameter, and the coalescer in the secondary chamber is designed to separate immiscible liquids (oil and grease) from water.The maximum treatment capacity of the PSTD used in this study was 20 L/s.This flow volume was referred to as the treatable flow rate (TFR), and it is from these flows that auto-samplers located at Points A and B (Figure 3) collected water samples for analysis.In particular, the study evaluated the ability of the PSTD to remove TSS contained in stormwater effluent to the levels specified by the Queensland State Planning Policy [17].This policy is intended to control stormwater pollution through the development and approvals process.The policy specifies that pollution emanating from development sites must be reduced by 80% for total suspended solids (TSS), 60% for total phosphorus (TP) and 45% for total nitrogen (TN) [17].
The manufacturers recommend that the PSTD should generally be maintained at least annually.However, this is also dependent on observed pollution loads.Maintenance includes sediment removal from the gross pollutant trap (GPT) section of the unit via a suction hose.The coalescer is subjected to a low pressure wash during maintenance, with the resultant wash-off also being removed via a suction hose.No maintenance of the unit was required or undertaken during the 14-month study test period.
Sampling Protocol
A sampling protocol (Table 1) was developed based on the Auckland Regional Council Proprietary Device Evaluation Protocol [18] and Washington State Department of Environment Stormwater BMP Database protocols [19].The protocol was developed specifically to provide sufficient numbers of valid sampling events and water quality samples for analysis, in order to clearly demonstrate the pollution removal performance of the PSTD under an appropriate range of natural rainfall conditions.Much of the adopted protocol is also included in the Stormwater Australia Stormwater Quality Improvement Device Evaluation Protocol draft [20].
Table 1.Field testing protocol requirements for Nambour.TSS, total suspended solids.
Minimum qualifying events 15
With the aim to gain sufficient valid data to achieve a statistically-significant difference between influent and effluent.Statistical significance will not, however, be a critical requirement, as this may require hundreds of samples and be financially unviable.
Sampling Equipment and Timing
The output signals from all of the monitoring equipment installed on the PSTD in the study were logged using a CR800 Campbell Scientific datalogger.Flow-weighted subsamples (200 mL) were taken after a stormwater volume of 1000 L had passed through the MJK Magflux flow meter installed at the treated flow outlet pipe (Figure 1).A Starflow ultrasonic probe was located in the bypass outlet.A water volume of 1000 L was chosen as the sampling flow interval, as this was approximately equal to the runoff generated by 0.5 mm of rainfall over the site, assuming zero losses.All subsamples collected during runoff events were composited within the automatic samplers in 9-L bottles.For sampling events where insufficient volume was collected for the suite of subsequent chemical analyses to be undertaken (listed in Table 1), the event was discarded and recorded as non-qualifying.
The antecedent dry period was initially set at 72 h between rainfall events [21,22].However, to increase the number of qualifying events, the antecedent dry period was reduced to 6 h unless the influent pollutant concentrations were below the limits of detection (LOD).The minimum event rainfall trigger for sampling was set to 1.5 mm.
volume of flow during period i;
‚ C i = concentration associated with period i; ‚ n = total number of aliquots collected during the event.
¸(5)
Q-Q plots were used to compare the two datasets using a non-parametric approach to compare their underlying distributions.Q-Q plots are generally used to provide a graphical assessment of the "goodness of fit".Q-Q plots (log) were used in this study to compare the shapes of observed sample distributions and to provide a graphical view of how properties, such as location, scale and skewness, are similar or different in the two distributions.
Dry weather samples were taken on consecutive days after rainfall events to determine whether nutrients were exported over time.The dry weather samples were collected manually as grab samples.Inflow grab samples were taken from the primary chamber and outflow grab samples from the secondary chamber of the PSTD.Calculations of the changes in pollution concentrations were made using Equations ( 1)-(3).
Results and Discussion
During 14 months of monitoring, 59 rainfall events (>1.5 mm) were recorded at the study location.Of these, 18 events were characterised as qualifying events according to the agreed sampling protocol (Table 1).When any of the results were less than the limits of detection (LOD) for that particular test, they were shown as 50% of the LOD in Table 1.
The measured pollution removal performance (ER) of the PSTD was 83% for TSS, 11% for TP and 23% for TN over the 14-month study period (Table 2).Being specifically designed to remove TSS, the system has successfully achieved this objective.Although not unexpected, the removal of TP and TN from outflows was found to be minimal and not achieving the minimum specified by the regulations.Additional components would need to be added to the treatment train to fully satisfy the specific Queensland Government regulations in terms of TP and TN pollution removal.Notes: LOD = limit of detection.
Even though the calculation methods for both the ER and CRE metrics use the same data, these results were found to vary substantially (Table 2).This is the result of the two calculation methods using different mathematical logic.Results near the limits of detection, such as those for rainfall events on 22 March 2015 and 7 April 2015 (Table 2), skewed the average CRE metric by producing individual event CREs of 0%.Exclusion of these outliers produced substantially different results, increasing the average TN CRE result to 15% (up from 0%).
The PSTD has a designed treatable flow rate (TFR) of 20 L/s.Eleven of the 18 events were >75% of the TFR (Table 3) and were >100% of the TFR.Performances (CRE) for treatable flow rates between 75% and 100% were found to be highly variable for each pollutant measured (TSS, 0%-90%; TN, 0%-99%; TP, ´148%-60%).This variability appears to be more related to the low influent concentrations than the flow rate.PSTD measured performances over total flow volumes (sum of loads) were found to be variable and, although high for TSS, included a calculated export of TN (Table 4).The sum of loads (SoL) has been calculated according to Equation (6).Although somewhat counter-intuitive because the unit is a closed system, this may be a result of the number of non-qualifying events that passed through the system contributing to the overall pollution load in subsequent events.
Particle Size Distribution (PSD) analysis revealed variable results between inflow and outflow (Table 5).The particle sizes at which the different percentages of mass were observed all increased after treatment.Although all size groups were shown to increase after treatment, these results do not represent substantial differences in sizes (especially in the D 10 and D 90 ).These results may have been affected by unusually non-spherical shapes of the particles measured, which affects the accuracy of the automated laser measurement technology.Alternately, these average results presented below could be influenced by the mathematical averaging across all events.Influent concentrations for TSS and TN at the study site (Table 6) were significantly lower (TSS p < 0.001, TN p < 0.001) than the typical values for Australian commercial catchments reported by Duncan [23] and those recommended by industry for use in Australian pollution modelling studies and used within the software tool, Model for Urban Stromwater Improvement (MUSIC) [24].Note: 1 Model for Urban Stromwater Improvement (MUSIC) [24].
Similarly affected PSTD performance results have been observed by the authors from other field evaluation sites [25].Results from these other studies have been shown to differ by up to 30% for TN and 20% for TP, where low influent concentrations result in 0% or negative CRE.In some cases, calculations have resulted in a theoretical export of pollutants.Large negative CRE can have an impact on the average CRE value, and so, for this reason, it is suggested that when influent concentrations are close to the LOD, CRE on its own is not an appropriate metric.Where low influent concentration are observed, the calculated ER may be a more accurate reflection of the PSTD pollution removal performance.
Even though large datasets may be required, statistical validation (paired t-test) of data is recommended by some international protocols to confirm significant differences between the influent and effluent sample sets [18].TP and TSS influent pollution concentrations (log-normally distributed) were found to be significantly different between the Nambour study site and the MUSIC guidelines (p > 0.05) (Table 7, Figure 3).The Q-Q plots of the log-transformed datasets confirm visually the results of the statistical tests that the data are closely aligned to a log-normal distribution and that therefore further statistical tests can be performed (Figure 4).Previous PSTD testing studies that have produced highly variable data have suggested that the confirmation of statistical significance may require extensive testing; however, they conceded that this may not be achievable in all circumstances [23].An estimation of the number of samples required for a statistically-significant, paired comparison for the current dataset (Equation ( 6)) as recommended by Burton and Pitt [26] suggests that eight samples would be required for accurate TSS analysis.However, 333 samples would be required for TP and 280 samples for TN.Collecting this number of samples would not generally be viable for most studies.
where n = the number of sample pairs needed; α = the false positive rate (1 ´α is the degree of confidence; a value of α of 0.05 is usually considered statistically significant, corresponding to a 1 ´α degree of confidence or 95%); β = the false negative rate (1 ´β is the power, if used; a value of β of 0.2 is common, but it is frequently ignored, corresponding to a β of 0.5); Z 1´α = Z score (associated with the area under the normal curve) corresponding to 1 ´α; Z 1´β = Z score corresponding to a 1 ´β value; µ 1 = the mean of dataset one; µ 2 = the mean of dataset two; σ = the standard deviation (same for both datasets, assuming a normal distribution).
It has been the authors' experience during this and other, similar studies that only approximately 25% of events sampled fully satisfy the criteria needed to be considered as qualifying events.The remainder of the samples are discarded for non-conformance with strict sampling protocols.Continuation of a monitoring program to achieve the 280 qualifying events required for statistical certainty in this study (>750 events overall) was found to be financially prohibitive for this research program.The authors' suggest that this would be the case for many field evaluation studies.The authors therefore recommend that the current industry-accepted methodology used in Australia to calculate pollution removal performance of proprietary stormwater quality improvement devices should be modified to accept contingencies, such as those that have been experienced in this study.
Dry Weather Sampling
As the PSTD is located on a relatively small commercial catchment, there is no baseflow through the system during dry weather.TSS and TN concentrations were observed to increase from the first chamber to the second chamber of the device (Table 10).However, it should be noted that the very low inflow concentrations are likely to again be the key factor for any observed increase at the second chamber.For example, typically, the LOD for TSS was 5 mg/L, and had the authors not requested a lower LOD (1 mg/L), the results would have predominantly shown below detectable results on both inlet and outlet samples for TSS.Further, blind duplicate and replicate testing on this and other projects by the authors has demonstrated that the variability of pollutant concentrations on these dry weather samples is within the range of observed variation (˘2.8 mg/L TSS, ˘0.3 mg/L TN and ˘0.02 mg/L TP).Therefore, the authors consider that there is no conclusive evidence that the PSTD releases pollutants during dry weather that could not be attributed to analytical variability.This confirms recent research that also found no significant relationship between nutrient concentrations and length of dry period between events on wet sump devices [27].
Conclusions
The evaluation of proprietary stormwater treatment devices has been performed for decades internationally and appears to be gaining momentum in Australia.While a number of existing guidelines stipulate that the performance of these devices must be demonstrated for local and regional conditions, the guidelines generally do not define how this should be accomplished.
This paper has detailed the evaluation and testing protocol implemented on a Class 1 Stormceptor at one monitoring site in Queensland, Australia.Results from 18 complying events showed a pollution removal efficiency (ER) of 83% for TSS, 11% for TP and 23% for TN.Based on the analyses, TSS was found to be significantly reduced after treatment by the device.Being specifically designed to remove TSS, the system has successfully achieved this objective.Although not unexpected, the removal of TP and TN from outflows was found to be lower than the minimum specified by Queensland policy.Additional features, such as an engineered filter media, would need to be added to the system to totally comply with the specific Queensland Government policies in terms of TP and TN pollution removal.
Although a reasonably large number of rainfall events were analysed in total, further analysis was found to be required due to the variability in the results, particularly for TP and TN.Because of the large number of samples required (>750) to achieve adequate confidence intervals (>95%) for CRE, it was deemed not financially viable.Dry weather testing of the device demonstrated that the results were within the expected levels of analytical variability, and conclusive evidence that the wet sump exported nutrients during dry weather could not be confirmed.
Low inflow pollution concentrations were found to skew average CRE results, leading to low overall CRE.Exclusion of outliers produced substantially different results.The use of ER in place of CRE evaded the skew effects observed for CRE and provided accurate performance evaluation results.
The study results suggest that when pollution influent concentrations are close to the LOD, CRE may not be an accurate reflection of PSTD performance.In these cases, the calculated ER may be a more accurate reflection of the pollution reduction performance.
The authors recommend that the current industry-accepted methodology used to calculate the pollution removal performance of proprietary stormwater quality improvement devices should be modified to accept contingencies, such as those that have been experienced in this and other similar studies.
Figure 1 .
Figure 1.Aerial photograph of the subject site (property boundary shown by the yellow line).
Figure 1 .
Figure 1.Aerial photograph of the subject site (property boundary shown by the yellow line).
Figure 3 .
Figure 3. Design schematic of the study PSTD and monitoring equipment setup.
Figure 3 .
Figure 3. Design schematic of the study PSTD and monitoring equipment setup.
Figure 3 .
Figure 3. Design schematic of the study PSTD and monitoring equipment setup.
Table 3 .
Rainfall and flow data in relation to event CRE.
Table 4 .
Total flow volume and Sum of Loads (SoL).
Table 5 .
Averaged particle size distribution (PSD) analysis across all events.
Table 6 .
Comparison of Nambour surface water quality results with Brisbane MUSIC Guidelines for urban residential areas.
Table 10 .
Dry weather sampling results.
Notes: LOD = limit of detection. | 6,146 | 2015-12-08T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Mathematical Modeling from Metacognitive Perspective Theory: A Review on STEM Integration Practices
This study presents to identify mathematical modeling is the least elements focusing on current STEM integration practices. Through this study, a review of the existing practice of STEM integration curriculums, models, modules, and programmes was undertaken to confirm the issue. The database reviewed to confirm this issue is from Social Sciences Citation Index with keyword “Mathematical Modeling,” “STEM curriculum,” “STEM model,” “STEM module” and “STEM program.” As a result, these studies confirmed that mathematical modeling activities were the least focusing on existing STEM integration practices and the theory of metacognation and the theory of sosial interaction development could promote these abilities.
Introduction
In many countries, the primary policy for the implementation of integrating STEM disciplines is created to enhance the interests and involvement of students in the career fields related to STEM disciplines (Freeman, Marginson, & Tytler, 2015;Kuenzi, 2008a;Merchant, Morimoto, & Khanbilvardi, 2014).According to the report by the Australian Council of Learned Academies (ACOLA) in 2013 (Rowe, 1991), more than 16 countries provide ideas and the implementation of STEM education to enhance students' interest in STEM-related career fields.
These countries are Great Britain, United States of America, Canada, New Zealand, China, Japan and Singapore (Lacey, & Wright, 2010).The initial steps to enhance STEM integration base education taken for this country are due to de-DOI: 10.4236/ce.2018.9141612204 Creative Education cline in students' interest and involvement in the areas of STEM-related careers.
Until nowadays, those countries still cannot fulfill the demand of industry's in STEM backgrounds (Roehig & Moore, 2011).Furthermore Roehig & Moore (Roehig & Moore, 2011) noticed the future careers which require abilities and qualifications at very high-level thinking in the field of STEM integration contribute to this issue.
As mathematical modeling involves activities such as describing natural phenomena or designing a component or a system by writing mathematical equation (Baumann, Keel, Elsworth, & Weston, (Eds.), 2010) was mentioned as a component to interconnecting STEM's discipline.Therefore, the ability on constructing a mathematical model in STEM integration focuses should be made, and these value-added had to focus on the existing standard STEM integration practicing.From the philosophical point of view curriculum, model and module of teaching and learning are usually developed based on one or by combination of some of the educational theories on what students should achieve and how they are going to achieve in their teaching and learning (Baumann, Keel, Elsworth, & Weston, (Eds.).2010).In fact, many theories could explain how students think, act and set strategies for solving the problem (Hacker, Dunlosky, & Graesser, 2009).
To explain these phenomena, the ability to solve a problem in mathematical modeling is closely related to the cognitive activities that are applied while they are facing a problem solving (Chun & Eric, 2010) task.Therefore, good cognitive skills will lead a person to be more analytically minded while facing mathematical modeling problem solving (Sokolowski, 2015).Consequently, a lot of mathematical modeling activities should focus on cognitive aspects and this could help students exposure indirectly to STEM-related careers in real life.There some cognitive theory proposes the ability to set thinking strategy.Mathematical modeling is considered a challenging task and it involves high-level problem-solving abilities (Blum & Borromeo, 2009) and it is proved as an enjoyable task for students to develop their cognitive abilities.Therefore, implementing this task, could lead students to be more analytical as required in STEM careers industry (Tseng, Chang, Lou, & Chen, 2013).However, the difficulties of mathematical modeling activities are because students do not know how to regulate their cognitive ability.The cognitive development at this stage is placed under the zone of proximal development, where students need the elements of scaffolding as a means to assist metacognitive activities (Larkin, 2010;Louca & Zacharia, 2012;Papaleontiou Louca, 2008;Schraw, Crippen, & Hartley, 2006).
Background Problem
The less focusing element have been identified from the existing STEM curriculums, models, modules or programmes is the ability to make a correlation to all STEM disciplines.This could be due to carrying all STEM integration elements M. R. in a teaching and learning activities is something considered challenging to be done (Bowers, 2016;Valtorta, 2015;Berland, 2013).To improve these situations, an appropriate STEM integration task with the characteristics of crossing and balances all STEM disciplines and at the same time could expose students to STEM careers needs to be identified.The exposures of STEM integration elements at an early stage to students mainly on the actual STEM fields activities is crucial to attracting students' interest on adapting STEM careers for the future (Kitchel, 2015;Stotts, 2011;Honey, Pearson, Schweingruber, Education, Engineering, & Council, 2014;Valtorta, 2015).
Meanwhile, Velten (Velten, 2009), highlighted mathematics is a tool for science, technology, and engineering to describe and to relate the variables phenomena under investigation.It is beneficial to use the characteristic of mathematics to interconnecting every discipline in STEM disciplines.Therefore it is worth to using mathematical modeling as a task on STEM integration practices (Alder, 2001).Generally, mathematical modeling can be defined as interpretation, verification, correction and generalization to an eventual situation, phenomena or a system (Roehig & Moore, 2011).Lesh and Zawojewski (Lesh & Zawojewski, 2007) define mathematical modeling as a process of producing an excellent concept, an expression that can be modified and can be reused for controlling the actual situation.As a result, mathematical modeling can provide a space for students to develop the concept of interconnection on science, technology, engineering and mathematics in a way that is more meaningful and significant in the real situation (Kaiser & Stillman, 2011).Activities that involve mathematical modeling usually have been taught to engineering students at the tertiary level.Eventually, for the past several years, studied have found that the importance of the application of mathematical modeling for primary and secondary school students had a significant effect to developing the analytical thinking and problem-solving ability (Stohlmann, M. S., & Albarracín, L. (2016);Cardella, 2006;Lesh & Zawojewski, 2007).As consequences, the competency of STEM careers could be improved.
Research Problem Statement
It has been proved, mathematical modeling is said to be tough for primary and secondary school practises' (McKeachie, 1987).Students were said to have no knowledge and experience in these abilities, as well as the level of students' thinking was reported to be not at the level to build mathematical modeling (Stohlmann & Albarracín, 2016).They have never been exposed to such activity.
This problem can be addressed by identifying the aspects of how students think and plan their thinking or better known as metacognition while they are doing problem-solving on mathematical modeling (Kelley & Knowles, 2016).According to Kaiser and Stillman (Stohlmann & Albarracín, 2016), the use of metacognitive skills is said to be not only useful but very suitable to improve the mathematical modeler's competency especially for someone who is new in this field.Roehig & Moore, 2011).Tasks on mathematical modeling will involve students' cognitive activity (Thompson, 2009;Kelley & Knowles, 2016).
Meanwhile, the confirmation on constructing mathematical modeling by STEM practitioners is no clear pictures.This situation was due to the difficulty of building mathematical modeling, and it is highly dependent on the students' cognitive ability is said to be not at the level to develop mathematical modeling.
The cognitive perspective phenomena on how a person's transfer their science knowledge to engineering applications in the form of a mathematical model are something interesting to be studied, and it sought to an explanation (Hacker, Dunlosky, & Graesser, 2009).
Research Objectives
The purpose of this study is to confirm that mathematical modeling is the element that less been focusing on STEM integration education for secondary school level.Furthermore, this study is to identify the specific cognitive theories to supporting for the mathematical modeling activities for STEM integrated practices.
Methodology
This study undertook a review on confirming that mathematical modeling and aspects of promoting of metacognition were the fewer elements focusing on ex-
Data Analysis
Table 1 shows an analysis of articles review related to the STEM integration obtained from the database journal indexed by the Social Sciences Citation Index (SSCI) by Thomson Reuters.The primary objective of this analysis is to identify the fewer elements of mathematical modeling were stressed in STEM integrations practicing which were believed to have the ability to make connections to all disciplines through authentic activities.
From this review, a total of 149 journals were found reported on STEM As a result 18 out of 149 articles from the journals indexed by SSCI discussing STEM model and module integration and most of the articles were reported on STEM program which only focuses on the exploration of the concept of physics, chemistry and biology and the application of these concepts to solve problems in real situations.Through these activities, the students have to make an investigation on the phenomenon or situations being studied and have to use measuring tools such as a digital timer, measuring tape, weighing, voltmeter, etc. From this review, it is found that most of the practices reported on existing STEM integration programmes were less likely less focusing on relating all STEM element and the ability to make correlation on the variables being studied using mathematical relationship or in the form of a mathematical model.This evidence proved that the element of mathematical modeling is least emphasized.However, based on this review a few articles were found reporting on STEM integration with mathematical modeling that been implemented at the university level.
Most the objectives of the models were for enhancing student's interest on STEM element learning, performance, and skills, but there is one study by Hamilton et al. (Holzman, 1996) which gives focus on complex design or other task settings with underlying science and technology.Khan & Davis (Sundaram, 2015) studied on "Adopt-a-Professor"-A Model For Collaboration in STE between K-12 and Higher Education to strengthen K-12 student learning outcomes in all subject.Through these studied, the enhancing learning skills with a special focus on STEM fields were given but there were no mathematics element were found.In other hand, Lin, Zhu & Ro (Egarievwe, 2015) studying a dynamic project-based STEM Curriculum Model for a small humanities high School.The purposes of these study is to enhance students performances in international assessments on PISA.
From theory point of view, two theories were identified as a guide to activate mathematical modeling activities on STEM practices.The theories were theory of metacognition (Vygotsky, L. (1978)) and the theory of social development (1978) (John H. Flavell. (1963); Piaget, J. ( 1950)).In fact, several theories present in real situations on how students think when they face a problem on learning which can be reflected in mathematical modeling task.The classical theory is the theory of cognitive development by Jean Piaget, which explains the cognitive development of humans through three stages: schematics, adaptation and sensorimotor (Polya, 1945).Meanwhile, Gorge Polya around 1945Polya around -1957 presented the model of problem-solving through three processes.involving understanding the problem, plan for a solution, implement the plan and review the result (Polya, 1945).But the metacognitive theory introduced by Flavell in 1979 explained metacognitive aspects which consist of three main elements, metacognitive knowledge, metacognitive experience and metacognitive strategies (Vygotsky, 1978) & (Hiltz & Turoff, 1993) was considered suitable theory due to the proposing of thinking about knowledge, skills and strategies.Another additional suitable theory is the theory of cognitive development from the social aspects introduced by Lev Vygotsky (Piaget, 1950).This theory mentions that students cognitive development from the social aspect which explains students ability and maturity on specific cognitive can be developed to a higher level if space or support are given to the development on a certain maturity level (Larkin, 2010).
A conceptual framework for this study is then proposed to illustrate on hypothetical elements base on the selected theories.The theoretical framework of STEM integration from the mathematical modeling practises as shown in Figure 1 is built with three intersection circles.The first circle is containing the metacognitive theory elements (metacognitive knowledge, processes, skills, and strategies), the second circle containing the theory of social development (socially mediated interaction-promoting communication and scaffolding media).
The third circle comprises teaching element which is considered essential to creating a community of inquiry for educational purposes.This is because an appropriate cognitive and social presence, and ultimately, the establishment of a critical community of inquiry, is dependent upon the presence of a teacher.This is particularly true if an integration discipline curriculum or advance learning outcome is the primary means of an educational experience.In fact, when integration education based specific approaches fail, it is usually because there has not been responsible teaching presence and appropriate leadership and direction had been practiced (Daniels, 2008).
Therefore from all central themes collected in each of the intersection areas, STEM integration scaffolding on knowledge, processes and skills, STEM integration setting climate, STEM integration teaching content and, the intersection of the three circles as indicate number 4, then STEM integration practice experiences were expected could be quiring.From these new meanings, a guideline of STEM integration in the form of ways and techniques to sets strategy and action to perform mathematical modeling task could be useful for STEM integration practitioners.
A worthwhile STEM integration experience is embedded within a Community of Inquiry that is composed of metacognitive (Vygotsky, 1978) elements, social development elements by Vygotsky (Kozulin, Gindis, Ageyev, & Miller, 2003;Hadi, 2015;Piaget, 1950) and teaching elements as shown in Figure 1.The selected theories could guide this exploration in the STEM integration practitioner.These two theories are proposed based on the following principles so that the practice can be carried out.
There were many theories explained how learning occurs from the cognitive perspective, but the theory of metacognitive by Flavell was found as an appropriate theory to be proposed in this study.Flavell's theory can explain how a student thinks about his thinking, planning strategies and implementing actions precisely in solving problems on mathematical modeling, rather than the theory introduced by Piaget (Polya, 1945;Proust, 2013).The theory of cognitive development Piaget (Polya, 1945) The problem-solving model by Polys (Proust, 2013)
Conclusion
In -Pedagogical knowledge -Teaching processes -Teaching skills -Teaching approach -Teaching strategies.
Metacognitive elements:
-Awareness of Knowledge -Awareness of processes -Awareness of skills -Awareness of strategies
Social development elements:
-Students -Teachers -Media -Internal and external -organization.
1-Scaffolding on metacognition STEM integration execution knowledge, processes and skills 2-Setting STEM integration climate 3-STEM Integration Teaching contents 4-STEM integration practice experience on mathematical modeling Creative Education of several disciplines in STEM learning by using discovery and project-based inquiry activities (Kozulin, Gindis, Ageyev, & Miller, 2003).
Although only relatively few articles were found with data that were congruent with pre-set guidelines, it could be concluded the focusing on ability students to make interrelation across disciplines was less focusing.Concerning previous studies, mathematical modeling had the potential to integrate STEM elements in a task and at the same time enhance students' ability on problem-solving.Even though these activities were considered as a difficult task, STEM integration programmes using mathematical modeling still could be implemented by promoting students metacognitive and student social interaction development.The importance of this study in the STEM integration education could be described concerning implementation and approach should be used in carrying out the teaching and learning of STEM integration.Implementation of mathematical modeling for STEM integration programmes is considered very important in ordered to construct not only concrete and meaningful problem solving but with a comprehensive and coherent way.The construction of meaningful, comprehensive and coherent learning on mathematical modeling could be viewed from constructivist, self-excessive and social development interaction learning practice (Baumann, Keel, Elsworth, & Weston, (Eds.), 2010).
Figure1.The Theoretical framework on Metacognition of STEM integration from mathematical modeling perspectives.
line with current global developments in the demand of thinking skills amongst labors for the future, the Malaysia Ministry of Education (MOE) has begun to implement cross-curriculum education policy in schools.This policy can be seen from the implementation done through the combination of teaching and learning elements of science, technology, and engineering in the pure science like physics, chemistry, biology, mathematics and additional mathematics for school and university level.However, since 2017 the implementation of STEM education has been done entirely through the Primary School Standard Curriculum (KSSR) and Secondary School Standard Curriculum (KSSM)(Mohamad, Lilia, Zanaton, Edy, & Raifana, 2015).Initially, the implementation of STEM integrated education is done through programs conducted outside the formal classes.Through this program, students will be exposed to a combination Teaching elements: Bajuri et al.
Table 1 .
Articles analysis of the missing elements for STEM integration.
ceedings of Institute Electrical and Electronics Engineers (IEEE) and Journal of Advances in Engineering Education which are indexed by the Social Sciences Citation Index (SSCI) Thomson Reuters are found as two journals that are actively reporting on STEM integration education.It is found that the Proceedings of IEEE has published 121 articles related to STEM education.It is found that 7 of them are related to STEM modules and STEM teaching and learning models for school level.The Journal of Advances in Engineering Education has published 28 articles related to STEM education, and 9 of the articles are related to STEM model and modules training for university and college level. | 3,917.2 | 2018-10-25T00:00:00.000 | [
"Mathematics",
"Education"
] |
Exploring 4D microstructural evolution in a heavily deformed ferritic alloy
We present a multi-scale study of recrystallization annealing of an 85% cold rolled Fe-3%Si alloy using a combination of dark field X-ray microscopy (DFXM), synchrotron X-ray diffraction (SXRD), and electron backscatter diffraction (EBSD). The intra-granular structure of the as-deformed grain reveals deformation bands separated by ≈ 3–5°misorientation. We monitor the structural evolution of a recrystallized grain embedded in bulk, from the early stages of recrystallization to 65% overall recrystallization through isothermal annealing steps. Results show that the recrystallized grain of interest (GOI) grows much faster than its surroundings yet remains constant in size as the recrystallization proceeds. Isolated dislocations embedded within the volume of the recrystallized GOI are investigated.
Introduction
Recrystallization and grain growth are important phenomena that occur in deformed metals during annealing.During the deformation of metals, new defects such as vacancies and dislocations are generated which increase the free energy of a given crystal.The density and distribution of these defects not only determine the hardening in the deformed state, but also constitute the main driving force for annealing phenomena.As physical and material properties depend on the state of the deformed and recrystallized microstructures, understanding the nucleation and growth of recrystallization in deformed materials is of great industrial significance.
The experimental understanding of strengthening mechanisms in the deformed state, and subsequent recrystallization during annealing, has improved drastically over the past 30 years.Notably, with the introduction of electron backscatter diffraction (EBSD) in scanning electron microscopes, fast and automated data collection providing morphological and crystallographic information can be employed.Studies in the last decades have shown that the distribution of the recrystallized (RX) grains is highly influenced by the local deformation microstructure [1,2].Both 2D and 3D EBSD methods have been the number one methods of choice of many researchers in assessing the geometrically necessary dislocation (GND) densities in deformed materials over the past years [3][4][5][6].Even though these studies improved our understanding of the static dislocations structures in deformed metals, electron-based techniques have limited possibilities of dynamic studies within bulk material, due to sample preparation requirements.
X-ray based techniques have improved our understanding of the relation between the deformed and the annealed states within the past 20 years.Synchrotron-based techniques such as 3D X-ray diffraction, or High Energy Diffraction Microscopy (3DXRD or HEDM) [7,8], Diffraction Contrast Tomography (DCT) [9] and Differential Aperture X-ray Microscopy (DAXM) [10] have been used to investigate recrystallization and growth phenomena in 3D and 4D (x, y, z, t).While these studies have provided fascinating insights on the recrystallization and growth of near perfect grains within bulk materials, they fail to capture the multi-scale relation between the new nuclei and the highly deformed matrix due to limited spatial resolution and the peak overlap problem [11].
A relatively new synchrotron-based method called Dark Field X-ray Microscopy (DFXM) provides an alternative approach to the above-mentioned challenges.DFXM is a diffraction imaging method for probing 3D nanostructures with their associated strain and orientation in bulk materials, with better angular resolution than its electron counterparts [12][13][14].The microscope can be coupled with 3DXRD and DCT [15], and quantitative texture analysis can also be performed [16].
Here, we investigate recrystallization and grain growth from a heavily deformed ferritic alloy using DFXM, SXRD and EBSD.We monitor the 4D evolution of a recrystallized grain of interest (GOI) upon successive annealing steps up to RX=65% overall recrystallization using DFXM.We explore the orientation relations between the deformed structure and the new grains during annealing in a bulk and industrially-relevant deformed sample.
Experimental
2.1.EBSD Global recrystallization tendencies were followed with EBSD in the as-deformed condition and after interrupted annealing in a dilatometer at 600 • C.More details about the EBSD measurements can be found in [17].
DFXM and SXRD
The DFXM experiments were conducted at Beamline ID06-HXM at the European Synchrotron Radiation Facility (ESRF) [18].We used a monochromatic beam with 17 keV photon energy.The beam was focused in the vertical direction using a Compound Refractive Lens (CRL) comprised of 58 1D Be lenslets with an R=100 µm radius of curvature, yielding an effective focal length of 72 cm.The beam profile on the sample was approximately 200×0.6 µm 2 (FWHM) in the horizontal and vertical directions, respectively.The horizontal line beam illuminated a single plane that sliced through the depth of the crystal, defining the microscope's observation plane, as shown in figure 1.For the SXRD experiments a FReLoN CCD 2D detector was used having 47.3 µm pixel size positioned at 170 mm from the sample.This camera was used to measure the texture of the 110 diffraction ring [16], and to locate the high stored energy (HSE) regions for high resolution DFXM measurements.A near-field camera with 0.622 µm pixel size was then placed 56 mm downstream the sample, and used to orient the crystal into the Bragg condition, to calibrate the temperature on the sample using the lattice parameter expansion, and to measure local orientation after each heating step.These orientation measurements comprised rocking curves with 20 • range with 0.1 • /step.Following the alignment and the rocking curve measurements after each annealing step, the near-field camera was removed and the image was magnified by an X-ray objective lens comprised of 88 Be parabolic lenslets (2D focusing optics), each with a R=50 µm radii of curvature.The entry plane of the imaging CRL was positioned 281 mm from the sample along the diffracted beam, and aligned to the beam using a far-field detector.The objective projected a magnified image of the diffracting sample onto the far-field detector, with an X-ray magnification of M x = 17.9×.The far-field imaging detector used an indirect X-ray detection scheme.It was comprised of a scintillator crystal, a visible microscope and a 2160 × 2560 pixel PCO.edge sCMOS camera.It was positioned 5010 mm from the sample.
The sample mosaicity was acquired by measuring distortions along the two orthogonal tilts ϕ and χ, cf.figure 1.For the deformed grain and its surroundings the two sample tilt angular ranges were ∆ϕ = 8 • and ∆χ = 3 • , respectively.After 240 s of annealing we focused on the RX GOI, and measured it in more detail where the χ-range and step size was ∆ϕ = 0.3 • and ∆χ = 0.4 • , respectively.With this data, each voxel can be associated with a subset of a (110) pole figure, allowing us to generate Center of Mass (COM) maps to describe the average direction of the (110) orientation for each voxel in the layer [19].
Samples and Sample Environments
We studied laboratory produced Fe-3%Si binary cast samples that were hot rolled and annealed having an average grain size of 150 µm before cold rolling [20].These samples were then cold rolled at a true strain of about ϵ vm = 2, corresponding to 85% reduction in size.The samples remain ferritic due to the high Si content of the alloy.Details of sample preparation is given elsewhere [17,20].Different samples from the same batch were used for the EBSD and synchrotron experiments.A hot air blower positioned 5 mm from the sample was used to perform the isothermal annealing steps for the synchrotron annealing studies.A total of 475 s of isothermal heating at ≈ 610 • C was applied in 9 intermittent steps, achieving a global recrystallization amount of ≈65% [21].All DFXM measurements were then done at room temperature after air cooling.We begin by presenting the features of the deformed microstructure.Figure 2(a) shows the EBSD mapping of the deformed state displaying a strongly-textured microstructure with heterogeneous stored energy/dislocation density.Deformation features such as transition bands, shear bands, and deformation bands are present.The deformed microstructure had a typical bcc rolling texture manifested by the marked appearance of α-fibre ({hkl}⟨110⟩), blue-red-magenta colored regions and γ-fibre ({111}⟨uvw⟩), blue regions in ND-IPF (Normal Direction Inverse Pole Figure) [22].It is known that in bcc metals the regions with the highest Taylor factor such as {111}⟨uvw⟩ have high stored energy (HSE) [23].Thus, during the early stages of annealing, these HSE regions typically produce the first RX grains [24][25][26].Therefore, we focused on these HSE regions and mapped the 3D orientation of one using DFXM.
Microstructure of the Deformed State
Figure 2(b) shows a DFXM mosaicity map of two deformed grains elongated along the rolling direction from HSE regions.The orientation gradients along the rolling direction separate zones with different substructure.The high spatial and angular resolutions afforded by DFXM unveil the fine details of the deformed microstructure located in the HSE regions within the bulk, including cells <1 µm having <8 • misorientation across the stretched grain.The overall angular spread of the measured deformed grains exceeds 10 • .We focus on this deformed grain and follow it through different heating steps.To map the local orientation relations between the RX grains and the deformed parent grain before zooming in using DFXM, we measured extended rocking curves with the nearfield camera.By doing so, we can focus on a part of the diffraction ring with high resolution.Figure 3(a) shows generated COM map of the rocking curve measurements of the diffraction pattern from the grain shown in figure 2(b).The deformed grain manifests itself like a "powder diffraction pattern" (with no individually identifiable diffraction spots) on the nearfield camera due to the high angular spread and small cell size on the order of the resolution of the detector.Upon isothermal annealing, (b-f) we observe new recrystallized grains appearing within 10 • orientation spread.At the same time, the intensity of the deformed grain (powder diffraction-like) decreases as recrystallization advances.At 350 s of annealing, most of the diffraction signal from the deformed grain is no longer visible, showing that it was consumed by the new RX grains.We focus on a RX grain (designated by the red arrow in figure 3(b)) and measure its 3D structure using DFXM.By looking at the diffraction spot of the RX GOI in the nearfield, we can see that the grain continued to grow up to 140 s, while some small grains in the diffraction ring disappear.The GOI seem to remain almost at the same size and intensity between 275 s and 350 s (figure 3(e-f)).Now we zoom in on the diffraction ring using DFXM, and look at real space images of the deformed and recrystallized grains.Figure 4 shows the DFXM integrated intensity maps of orientation scans as a function of annealing time and depth in the sample.Note that these scans covered 8 • × 7 • in ϕ and χ.Each image is a 2D slice of the sample from its bulk.At the as-deformed state (RX=0%) we see that the 3D structure of the deformed grain is heterogeneous: different cells with varying diffracted intensities are observed.After 40 s of annealing, we observe some subgrain refinement [27] in the deformed elongated grain, while a substantially larger grain (our GOI shown in figure 3) appears from the empty part of the DFXM image (see the magnified image).This means the GOI seeded from a differently orientated deformed grain, which did not appear in the DFXM image since the CRL objective filtered it out.Moreover, this indicates the region from which the GOI recrystallized, must have a higher stored energy, because it was able to grow faster than any other grain in the field of view.At 40 s, the magnified image of a given layer shows a neighboring grain on the right side of the GOI, which appears to be on the larger side of the other RX grains in the field of view.This indicates that the parent deformed grains for both these grains had the same deformed/RX orientation relations.Upon further annealing the main deformed grain and the refined subgrains continue to vanish at the expense of the RX grains.At 240 s, we observe that all the neighbors of the RX GOI are consumed, and the deformed grain has significantly shrunk.
From 240 s on, we focused on the RX GOI and scanned its orientation in 3D with finer angular steps through the later stages of annealing.Figure 5 42 × 120 × 1000 nm 3 /voxel in x, y, z. Figure 5(a) shows the integrated intensity map while (b) shows the 2D mosaicity map. Figure 5(d) gives the colorkey and the angular spread in the mosaicity map.The GOI shows orientation variations less than 0.3 • .Especially the top part of the grain shows more of a heterogeneity compared to the mid layers.This may be a result of the shear strain accommodation from the parent grain at the early stages of nucleation.However, one should be able to observe the onset of nucleation to verify this statement.Figure 5(c) shows the ϕ COM map of a selected layer from the volume of the GOI.For this layer, the angular spread is less than 0.1 • .Moreover, we observe isolated dislocations coming out of the plane (almost parallel to the z direction).This dislocation shows a typical positivenegative displacement field around the rotation axis (blue-red colors) [28].Investigating the layers around this layer, we find that the isolated dislocation marked with the red arrow in figure 5(c) extends ≈ 11µm in the volume.Recently, we showed that in annealed single crystals, isolated dislocations can extend tens of micrometers, while the dislocation boundaries can be on the order of hundreds of micrometers [29].We observe similar isolated dislocations through the volume of the GOI, which gives us ≈ 0.5−1×10 11 m −2 , inline with the literature [30].Note that in this study, we probed only one diffraction vector using the DFXM method.Therefore, other dislocations with different orientations may not appear due to their Burgers' vector alignment with respect to the diffraction vector.This signifies that the measured dislocation density must be slightly higher in reality.
Next, we monitor the microstructural evolution of the ROI upon furhter annealing from 240 s to 475 s. figure 6 shows the evolution of the ROI through different stages of isothermal annealing.From 240 s (RX=46%) to 475 s (RX=65%).The integrated intensity maps of the same layer at the thickness shows that the size of the GOI remains the same, meaning that the grain stopped growing after 240 s.
There may be a number of reasons why a grain retains its size during annealing.One possible reason is that the stored energy in the surrounding area is consumed, by the GOI and other recrystallized grains, and therefore the grain requires more energy to grow further.This can result in slower growth or even inhibition of grain growth, which can lead to the recrystallized grain retaining its size during annealing.Another possible reason of similar nature is that the GOI has found a microstructural obstacle e.g. a prior grain boundary or impingement with other recrystallized grains has occured, scenarios which would hinder growth.By inspecting our DFXM data, we unfortunately do not see any other grain neighbouring our GOI, except the one that was consumed between 40-75s of annealing (figure 4), implying that a hypothetical neighboring grain would have a significantly different orientation.This highlights the importance of the neighboring grain information, and shows that the coupling DFXM with coarser grain methods such as 3DXRD and DCT is crucial.It is important to note that the retention of grain size during annealing depends on a variety of factors, including the temperature, time, and strain history of the material.Step-wise isothermal heating like we performed in this experiment may result in slower growth after the early stages of recrystallization because the stored energy can be used for recovery and recrystallization simultaneously more than once.Here we observe a single grain that grew faster than any other grain in the field of view, and retained its size after RX=46% overall.
Conclusions and Outlook
We presented a multi-scale investigation of the microstructure and texture evolution of a heavily deformed (85%) ferritic alloy upon isothermal annealing, using a combination of DFXM, SXRD and EBSD methods.For the first time, we show isolated dislocations within a recrystallized ferrite grain in bulk.Our results show the importance of coupling coarser grain methods such as 3DXRD/DCT with DFXM to have a complete multiscale picture of the intragranular (DFXM) and neighbouring grain (3DXRD) information.Combining these methods would yield a powerful "ultra microscope" that can resolve subgrain dynamics and along with texture information, from the deformed to annealing states, offering a whole new perspective in the study of recrystallization.
Figure 1 .
Figure 1.Schematics of DFXM and SXRD setup.A diffracting grain of interest is shown red.DFXM orientation maps comprises two tilts (ϕ and χ) at the constant 2θ angle, and reveal the spatial variation of the orientation of the lattice around the 110 scattering vector.
Figure 2 .
Figure 2. (a) EBSD map of the as-deformed sample.(b) DFXM mosaicity map from a HSE region.Colormap shows the angular spread in the two sample tilts.Note that parts that remain empty in the map because the diffracted intensity from the zones with different orientations isfiltered by the objective.
Figure 4 .
Figure 4. Integrated intensity maps from the DFXM mosaicity scans showing an overview of the annealing microstructures at different positions in the volume and annealing times.Note that the intensity scale is a logarithmic plot.
(a) and (b) show the 3D reconstruction of the GOI with
Figure 5 .
Figure 5. DFXM 3D reconstruction of the GOI at 240 s of annealing (a) using integrated intensity maps (b) using mosaicity maps (c) rocking curve COM map of a selected layer, highlighting an isolated dislocation in bulk (d) orientation color key for the mid layer of the 3D mosaicity scans.
Figure 6 .
Figure 6.Integrated intensity map of the same 2D layer the ROI through different annealing steps at 610 • C. | 4,044.6 | 2023-11-01T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Cleaner heating policies contribute significantly to health benefits and cost-savings: A case study in Beijing, China
Summary Cleaner heating policies aim to reduce air pollution and may bring about health benefits to individuals. Based on a fixed-effect model focusing on Beijing, this study found that after the onset of air pollution, daily clinic visits, hospitalization days, and hospitalization expenses increased several days after the occurrence of air pollution. These hospitalization changes were observed in males and females and three different age groups. A difference-in-differences (DID) model was constructed to identify the influences of cleaner heating policies on health consequences. The study revealed that the policy positively affects health outcomes, with an average decrease of 3.28 thousand clinic visits for all diseases. The total hospitalization days and expenses tend to decrease by 0.22 thousand days and 0.34 million CNY (Chinese Yuan), respectively. Furthermore, implementing the policy significantly reduced the number of daily clinic visits for respiratory diseases, asthma, stroke, diabetes, and chronic obstructive pulmonary diseases (COPDs).
OPEN ACCESS
from coal to natural gas and electricity, with an emphasis on coal to electricity.Following several years of transformation efforts, Beijing has largely completed the cleaner heating task.As a consequence, millions of rural households have undergone cleaner heating transformations, enabling cleaner winter heating on the household side between 2017 and 2022. 80][11][12] Researchers have revealed that the implementation of cleaner heating policies resulted in a decrease of air pollutants such as PM 2.5 and PM 10 by 3.4 mg/m 3 and 5.3 mg/m 3 , respectively, in the ''2 + 26'' cities. 13 Previous studies have identified that subsidizing electric heat pumps and electricity in Beijing's rural regions eliminated coal use resulting in benefits to indoor air pollution. 14These reduction effects were also observed in other provinces, including Shandong, which experienced a decline of 7.32% in PM 2.5 following the implementation of cleaner heating policies. 15The implementation of cleaner heating policies is expected to result in a reduction in air pollution, which in turn is likely to contribute significantly to the improvement of individuals' health.This positive correlation between air quality enhancements and the prevention of various diseases stems from the fact that fresh air contains more oxygen, which improves lung function and strengthens the body's immunity, thereby reducing the incidence of respiratory diseases.Additionally, the improved air quality positively impacts cardiovascular health by decreasing the damage that inhaling harmful substances causes to the cardiovascular system.
In contrast to the previous studies that have measured the environmental benefits brought about by cleaner heating policies, 2,[16][17][18] the scientific new contributions of our study can be concluded as follows: Firstly, our study clearly revealed the health benefits of cleaner heating policies.Prior studies have primarily focused on assessing the health advantages for individuals resulting from improvements in air quality, [19][20][21][22][23] but they did not determine the health benefits specifically to the cleaner heating policies.Our research has shown that the adoption of cleaner heating policies resulted in a noteworthy decrease in clinic visits for all kinds of diseases.This provides a fresh outlook on evaluating the health advantages that the implementation of cleaner heating policies can offer.
Secondly, our study precisely measured the health-related expenses related to cleaner heating.We conducted a study on daily health expenses in Chinese cities to investigate whether people's medical treatment habits change in response to improvements in air quality resulting from cleaner heating policies.Our research sheds light on the medical cost savings associated with implementing cleaner heating policies and quantifies the benefits of reducing the burden of medical expenses.
Thirdly, our study provided measurements on the potential welfare gains of implementing cleaner heating policies.We measured the cost savings in healthcare expenditures at the city level as a result of implementing cleaner heating policies.Our findings suggest that continuing to implement such policies can lead to significant savings in hospitalization expenses.This information is useful for policymakers, as it highlights the economic benefits of promoting environmental policies.
Our study aimed to determine the link between air pollution and health outcomes in Beijing's 16 districts (Various variables can be seen in Table 1).Using a difference-in-differences model (DID), we analyzed the daily impact of cleaner heating policies on clinic visits, hospitalization days, and healthcare expenses in Beijing, which indicates a health-related behavioral variation due to the implementation of cleaner heating policies.Our study also provided comprehensive measurements of the saved hospitalization expenses in Beijing, implying numerous welfare benefits to individuals by continually implementing cleaner heating policies in the long run.
Specifically, we used a fixed-effect model to estimate the causal relationship between air pollutants, primarily PM 2.5 and PM 10 concentrations, and health-related conditions in Beijing.We analyzed clinic visits, hospitalization days, and hospitalization expenses data at the district level to measure this relationship.We comprehensively considered the influence of air pollutants on hospitalization behaviors with a lag of 0-7 days to capture air pollution's characteristics.Next, we constructed a DID model to determine if the cleaner heating policies resulted in a decline in districts' hospitalization behaviors and related healthcare expenditures.We further investigated the policies' heterogeneous effects on males and females, as well as on three age groups (0-16, 17-60, and more than 60 years old) in all districts of Beijing as a heterogeneity test.Lastly, we measured each district's economic benefits in Beijing based on the implementation of consecutive cleaner heating policies from 2017 to 2022.
Health-related consequences caused by air pollutants
To assess the effects of pollutants on health, we monitored the concentrations of PM 2.5 and PM 10 with a lag time ranging from 0 to 7 days (Figure 1).This is because the concentration of air pollutants in the human body accumulates over time, and within a week, the impact on the human body becomes more profound.In our study, for instance, December 7th was considered the 0 days, December 6th was the lagged 1 day, December 5th was the lagged 2 days, and so on.This allowed us to identify the impact of pollutant concentrations on health under different lag times.There is a significant increase in daily clinic visits after a delay of 3-6 days caused by air pollution (Figure 1A).For instance, clinic visits started to increase on the third day following the onset of air pollution, and this trend continued until the fifth and sixth days.Specifically, on the fifth and sixth days of being exposed to air pollutants (PM 2.5 and PM 10 ), the clinic visits increased by 0.9 and 1.2, respectively, for each additional unit of PM 2.5 concentration.Similarly, when taking PM 10 concentrations into account, the results of the increasing number of clinic visits were similar, at 0.7 and 0.9, respectively, on the fifth and sixth days of being exposed to PM 2.5 and PM 10 .On the fifth and sixth days following exposure to these air pollutants, there was a statistically significant increase in the number of days patients had to stay in the hospital.For example, on the sixth day after being exposed to these air pollutants (Figure 1B), hospitalization days increased by an additional 0.05 and 0.04 days for each one-unit increase of PM 2.5 and PM 10 concentration, respectively.Patients who were exposed to air pollution had larger daily total hospitalization days.Exposure to air pollutants led to an increase in total hospitalization days, but the effects were delayed by two days and lasted for approximately four days (Figure 1C).For instance, the number of days people spent in the hospital increased by 0.6 and 0.4 on the second day of being exposed to air pollutants (PM 2.5 and PM 10 ), respectively.This increase was sustained until the sixth day, after which the total hospitalization days increased by 0.5 and 0.3 for each additional unit of PM 2.5 and PM 10 concentration, respectively.
Our study found a positive correlation between air pollutants, specifically PM 2.5 and PM 10 , and daily hospitalization expenses.We discovered that an increase in clinic visits and hospitalization days leads to higher hospitalization expenses.This correlation was particularly significant on the fifth and sixth days following the exposure to air pollutants.On the fifth day of being exposed to air pollutants (Figure 1D), an increase of one additional unit in PM 2.5 and PM 10 concentrations resulted in an increase of 0.0009 and 0.0007 million CNY, respectively.
Hospitalization consequences across gender and age
We have analyzed the impacts of lagged air pollutants on the total hospitalization days and hospitalization expenses for both males and females.Our study reveals that air pollutants have a significant impact on the total hospitalization days of both genders.We observed that the total hospitalization days for males started to increase on the second and third days of air pollution formation (Figure 2A).This implies that for every additional increase of PM 2.5 and PM 10 concentration, there was a corresponding increase in males' total hospitalization days by 0.3 and 0.2, respectively.However, as the lag days increased, the impact of the lagged air pollutants on the total hospitalization days decreased.After the fourth day of being exposed to air pollution, there was no statistically significant effect on the total hospitalization days.On the other hand (Figure 2B), the increasing effect on the total hospitalization days for females occurred on the sixth day following the exposure to air pollutants.This suggests that for every additional unit increase of PM 2.5 and PM 10 concentration, there was a corresponding increase in total hospitalization days by 0.2 and 0.2, respectively.
Hospitalization expenses tend to increase with the rise in air pollution levels for both males and females.This relationship was observed in both males and females (Figures 2C and 2D).Additionally, we discovered that air pollution has a delayed effect on hospitalization expenses.For instance, an increase in PM 10 concentration by one unit caused daily hospitalization expenses for males and females to increase by 0.0005 million CNY each on the sixth day of being exposed to PM 2.5 pollution.Similarly, when measured by the PM 10 influence, males' and females' daily hospitalization expenses increased by 0.0003 million CNY each on the fifth day after being exposed to air pollution.We estimated the impact of air pollutants on the total number of hospitalization days for three different age groups (Figure 3A-3C).The study shows that air pollutants have a significant lagged effect on the daily total hospitalization days for the age group between 17 and 60 years old but not for the other two age groups.Specifically, an additional unit increase of PM 2.5 concentration with a 2-day lag led to a 0.3 increase in daily total hospitalization days for the 17-60 age group.On the other hand, taking PM 10 concentrations into account resulted in a daily hospitalization day increase of 0.2 for the same age group.On the sixth day of being exposed to air pollutants (PM 2.5 and PM 10 ), the 17-60 age group experienced a 0.2 and 0.1 increase in daily hospitalization days, respectively.
All three age groups experienced a rise in hospitalization expenses due to the delayed effects of air pollution (Figure 3D-3F).The hospitalization expenses increased significantly two or three days after being exposed to air pollutants and continued for several days.For instance, the daily hospitalization expenses for the 0-16 age group increased by 0.000025 million CNY on the second day of being exposed to PM 2.5 , implying that an increase in the concentration of PM 2.5 resulted in substantial additional health-related costs.Similarly, the 17-60 age group incurred an extra cost of 0.000285 million CNY with a 2-day lag in PM 2.5 pollution.Moreover, this increasing trend persisted for several days.The hospitalization expenses for the 17-60 and over 60 age groups rose by 0.000348 and 0.000580 million CNY, respectively, with a 6-day lag of being exposed to PM 2.5 .It is worth noting that all sub-figures in both Figures 2 and 3 displayed the same pattern.In the presence of air pollution on the first day and the following day, all health-related outcomes decreased.For instance, the daily hospitalization expenses for males decreased by 0.001 and 0.0013 million CNY on the day when PM 2.5 pollution occurred and the next day (Figure 2C), respectively.Different age groups also exhibited a similar trend.Hospitalization expenses for the age group of 17-60 decreased by 0.000687 and 0.000817 million CNY on the day when being exposed to PM 2.5 on the first day and the next day (Figure 3E), respectively.This is because air pollution can have a significant impact on people's health.To avoid exposure to air pollution, people often prefer to stay indoors.This may help reduce clinic visits, hospitalization days, and expenses in the short term.However, the concentration of air pollutants increases over time, leading to physical discomfort after a few days.As a result, there may be an increase in clinic visits and hospitalization days on the fifth and sixth days of exposure to air pollution.Ultimately, this can lead to higher hospitalization expenses.
Cleaner heating policy on hospitalization behavior variation
The implementation of a cleaner heating policy in Beijing significantly impacts hospitalization rates (panel A in Table 2).Health outcomes are positively affected by the policy, with an average decrease of 3,280 clinic visits for all diseases.Additionally, total hospitalization days and expenses tend to decrease by 218.96 days and 0.34 million CNY, respectively.
Panel B in Table 2 shows that implementing the cleaner heating policies led to a significant reduction in the number of daily clinic visits for respiratory diseases, asthma, stroke, diabetes, and chronic obstructive pulmonary diseases (COPD).On average, the number of daily clinic visits for these diseases decreased by 49.99, 0.06, 0.45, 10.10, and 60.16, respectively.In contrast, panel C of the same table reflects the effect of the cleaner heating policy on the number of daily hospitalization days for all diseases across different age groups and genders.Both males and females were found to be significantly affected by the cleaner heating policy, with no evident differences in policy effects.More specifically, the number of daily hospitalization days for all diseases for males and females decreased by 117.04 and 75.10 days, respectively, after the implementation of the cleaner heating policy.
Additionally, all three age groups (0-16, 17-60, and more than 60 years old) were found to have reduced their daily hospitalization days due to the cleaner heating policy.The older age group (more than 60 years old) exhibited the largest reduction in hospitalization days, with a decrease of 105.89 days, compared to the age groups of 0-16 (5.24 days) and 17-60 (69.05 days).
In panel D, it is shown that the cleaner heating policy has a significant impact on daily hospitalization expenses, categorized by gender and age groups.Both males and females experienced a reduction in daily hospitalization expenses for all diseases, with a reduction magnitude of 0.18 million CNY and 0.16 million CNY, respectively.Moreover, the implementation of the cleaner heating policies resulted in greater benefits for the young (0-16 years old) and old age groups (more than 60 years old), with a decrease of 0.01 million CNY and 0.32 million CNY in daily hospitalization expenses for all diseases, respectively.
Welfare benefits brought by cleaner heating policies
Figure 4 illustrates the potential welfare benefits for the districts in Beijing from 2017 to 2022 when the cleaner heating policies were implemented.The policy was assumed to have an equal impact on all the districts and resulted in a decrease of 0.3426 million CNY in daily hospitalization expenses in the base year.Each year since 2017, cleaner heating policies have been increasingly implemented in Beijing.To assess the policies' welfare effect on reducing hospitalization expenditures from 2017 to 2022, we assumed a 5% yearly policy improvement effect.Overall, the cleaner heating policies have significantly reduced hospitalization expenses (see Figure 4B), amounting to around 328.90 million CNY in 2017 and 1,049.41 million CNY in 2022.Additionally, the cleaner heating policies have saved approximately 5,099.46 million CNY in hospitalization expenses in Beijing between 2017 and 2022.
We have measured the benefits of the cleaner heating policies in terms of per capita hospitalization expenses for each district in Beijing.The implementation of the policies has resulted in significant benefits for all districts in Beijing (see Figure 4C).For instance, the Xicheng district, which is the most economically prosperous district in Beijing (see Figure 4A), has received an average per capita hospitalization expense benefit of 24.2 CNY in 2017 and 93.2 CNY in 2022.Due to the size of the population, districts with smaller populations have received greater savings in per capita hospitalization expenses.Our study found that the Yanqing district had the highest expense benefits, with a per capita hospitalization expense of 190.7 CNY in 2022.Similarly, the Mentougou, Huairou, Pinggu, and Miyun districts have relatively high benefits, with 165.6, 149.4,143.8, and 124.7 CNY in 2022, respectively.Moreover, all districts have shown an increasing trend in the benefits of per capita hospitalization expenses as a result of the cleaner heating policies.
DISCUSSION
According to our analysis, air pollutants such as PM 2.5 and PM 10 have significant delayed effects on hospitalization behavior and expenses in Beijing.After the onset of air pollution, clinic visits increased on the third day and continued until the fifth and sixth days.On the fifth-and sixth-days following exposure to these pollutants, there was a statistically significant increase in the number of days patients had to stay in the hospital.Patients who stayed in the hospital for longer periods faced higher hospitalization expenses.Our findings show that on the fifth day after the occurrence of air pollution, an increase of one additional unit in PM 2.5 and PM 10 concentrations resulted in an increase of 0.0009 and 0.0007 million CNY, respectively.0][31] The negative impact of air pollution on human health is not immediate and tends to increase with prolonged exposure, resulting in a delayed impact. 32,335][36] These findings are consistent with previous research on the subject.
According to our research, air pollution can have varying impacts on health, depending on the gender and age of individuals.Our findings indicate that both men and women are likely to experience increased clinic visits, hospitalization days, and hospitalization expenses as air pollution levels rise.Some investigations have suggested that males may be more susceptible to the detrimental effects of atmospheric pollution. 37However, the primary reason for this is that men tend to undertake work in outdoor settings, and hence, are more exposed to atmospheric pollution, rendering them more vulnerable to health implications.We have not found any evidence to support the previously mentioned conclusions.This is mainly due to two reasons.Firstly, a higher number of women opt to work in Beijing due to the abundance of job opportunities available, which increases the likelihood of both men and women being exposed to air pollution.Secondly, the available healthcare records in our study do not provide any distinction based on the job categories of individuals.Instead, they are solely focused on presenting the health outcomes at the district level.As a result, there are insignificant differences in the influence of gender.Additionally, three age groups (namely 0-16, 17-60, and more than 60 years old) exhibit similar trends in health-related behaviors and expenditures due to the delayed effects of air pollution, indicating that daily clinic visits, hospitalization days, and hospitalization expenses increase several days after the occurrence of air pollution.
We conducted an evaluation to determine how effective the cleaner heating policies were in reducing healthcare and hospitalization expenses in Beijing.Our study findings indicate that the adoption of the aforementioned policies has resulted in a significant reduction in daily clinic visits, overall hospitalization days, and associated costs.Notably, the policies have proven to be particularly efficacious in curbing clinic visits for five distinct ailments, which include respiratory, asthma, stroke, COPD, and diabetes.Moreover, these policies led to a reduction in hospitalization days for all age groups and both genders, ultimately resulting in a significant decrease in hospitalization expenses.It has been observed that implementing stringent measures to control air pollution can have a positive impact on reducing healthcare expenses.Studies have shown that the benefits reaped from cleaner heating policies were comparable to the health benefits achieved by China's clean air actions.Therefore, it can be inferred that taking steps to reduce air pollution can not only improve the quality of air but also have a positive impact on public health and healthcare expenses. 36,38Furthermore, we measured the potential welfare brought about by the cleaner heating policies for each district in Beijing.Similar to other studies, 39,40 economic welfare benefits in air pollution control policies were found to be considerable.
Our findings suggested that the implementation of cleaner heating policies can result in a significant reduction in hospitalizations that are associated with air pollution at the district level.This information is of utmost importance to policymakers who are seeking to mitigate healthcare expenses.To minimize air pollution during the winter season, it is recommended that developing countries prioritize transitioning from coal to natural gas or electricity for heating purposes.It is imperative that regional economic disparities are taken into account and economic incentives, such as fiscal transfers, are provided to poorer regions to support the implementation of cleaner heating policies.Policymakers should consider the long-term economic sustainability of their policies and ensure that the policies adopted are financially viable in the long run.Limitations of the study Some limitations must be acknowledged.Firstly, we mainly focused on hospitalization-related changes at the district level, as we did not have access to household or individual-level data.This means we were unable to analyze the characteristics of the individuals affected by the policies.Additionally, the lack of analysis on the individuals may also lead to an uncompleted control for the potential influencers that may affect health-related outcomes.Secondly, we did not provide sufficient analysis of the influence of policies on different diseases.This is because there were insufficient numbers of cases to provide the power needed to do a disease-specific analysis.Thirdly, we conducted a welfare analysis of cleaner heating policies at the district level, but we were not able to measure their economic burdens if implemented over a longer period.
STAR+METHODS
Detailed methods are provided in the online version of this paper and include the following:
Difference-in-differences model
We conducted a quasi-natural experiment to analyze the impact of cleaner heating policies on different districts.The DID model's fundamental principle is to ensure that there are no systematic differences between the treated and control groups, except for the implementation of policies.This approach helps to overcome the endogenous influence of the traditional model and provides more accurate estimated results.
We set the DID specifications accordingly: Where Y i;t represents hospitalization behaviors and expenditures on day t for district i in Beijing, including daily clinic visits, daily hospitalization days, daily total hospitalization days, and daily hospitalization expenditures.Cleanheat post i;t is an interaction term obtained by multiplying Cleanheat i;t and post i;t .Cleanheat i;t is a dummy variable that indicates whether district i implements cleaner heating policies in time t.Specifically, if district i implements the policy in time t, Cleanheat i;t is 1, otherwise 0. We divided 16 districts in Beijing into two groups: one group implemented cleaner heating policies widely with large rural populations (treated group), and the other group did not widely implement cleaner heating policies with small rural populations (control group).post i;t is a dummy variable that represents the period when cleaner heating policy was implemented.If the cleaner heating policy was implemented during this period, post i;t is 1, otherwise 0. Therefore, the interaction term Cleanheat post i;t implies the districts that implemented cleaner heating policies and the policy period.In our study, the coefficient b of Cleanheat post i;t shows the effectiveness of cleaner heating policies on declining hospitalization behaviors and expenditures.We expect the coefficient b to be negative and statistically significant in theory.
It is noted that the DID model allows us to differentiate between the similar characteristics of the treated and control groups, which helps to reduce the interference in the health effects from public disease events and other influencers.
In order to account for any possible external factors, we defined X it in the same way as in the fixed-effect model.Additionally, we incorporated the district-specific fixed-effect l i and time fixed-effect m t into the model.Here, the year-fixed effect is used to account for the various unobservable year-specific factors that remained constant across different districts, such as extreme temperature fluctuations.These factors may have an impact on individuals' health outcomes, and therefore, it is essential to control them.We also included a district-level fixed effect to account for the year-invariant but district-specific characteristics that could potentially cause endogeneity issues.Although the DID model can reduce the differences between each district, the district-fixed effect was necessary to further control for any district-specific characteristics that might impact the estimation outcomes.
Data description
We conducted an analysis of daily clinic visits, daily hospitalization days, total daily hospitalization days, and daily hospitalization expenses for all diseases in Beijing.We categorized the health-related variables of all diseases into six categories: respiratory disease, asthma, chronic obstructive pulmonary disease (COPD), stroke, coronary heart disease (CHD), and diabetes.However, due to limited data for each disease, we only provided the results of all diseases rather than each disease at the district level in Beijing on a daily basis.The data collected covers the period between January 1, 2015, and December 31, 2019, and was obtained from the Beijing Municipal Health Commission.
We collected data on daily concentrations of air pollutants to analyze the link between air pollutants and health outcomes.The study focused on two primary air pollutants -particulate matter with an aerodynamic diameter of less than 2.5 or 10 mm (PM 2.5 and PM 10 , respectively). 45These pollutants are highly correlated with the cleaner heating policies and have been identified as the main targets for reduction in several government initiatives.We measured PM 2.5 and PM 10 concentrations daily, in sync with health-related variables.Our data source was the China National Urban Air Quality Real-time Publishing Platform.The data on these meteorological variables comes from the China Meteorological Administration.
We measured the health-related impacts in two ways (Table 1): hospitalization days across different sexes and ages and hospitalizationrelated expenses for males and females, as well as for different age groups.The results of the study showed that, during the period of January 1, 2015, to December 31, 2019, males had an average hospitalization of 1.92 thousand days per district per day across all diseases, while females had an average of 1.81 thousand days.Moreover, people who were more than 60 years old were found to be more susceptible to hospitalization due to disease impacts, with an average of 1.97 thousand days per district per day, higher than age groups of 0-16 (0.12 thousand days) and 17-60 (1.94 thousand days).We also found that hospitalization expenses for males were 3.43 million Yuan per district per day, which was higher than for females (3.10 million Yuan).The comparison of hospitalization expenses among age groups was also provided.Additionally, the daily average temperature was 13 C for each district during the period 2015-2019, while the average concentrations of PM 2.5 and PM 10 were 130 and 183 mg/m 3 .
Figure 1 .
Figure 1.Effects of air pollutants on hospitalization-related consequences (A) Effects on daily clinic visits.(B) Effects on daily hospitalization days.(C) Effects on daily total hospitalization days.(D) Effects on daily hospitalization expenses.
Figure 2 .
Figure 2. Effects of air pollutants on hospitalization-related consequences by gender (A) Effects on daily hospitalization days on male.(B) Effects on daily total hospitalization days on female.(C) Effects on daily hospitalization expenses on male.(D) Effects on daily hospitalization expenses on female.
Figure 3 .
Figure 3. Effects of air pollutants on daily total hospitalization days and hospitalization expenses by age groups (A) Effects on daily total hospitalization days of years 0-16.(B) Effects on daily hospitalization days of years 17-60.(C) Effects on daily hospitalization days on years more than 60.(D) Effects on daily hospitalization expenses of years 0-16.(E) Effects on daily hospitalization expenses of years 17-60.(F) Effects on daily hospitalization expenses of years more than 60.
Figure 4 .
Figure 4. Potential healthcare savings brought by the cleaner heating policies in Beijing (A) The per capita GDP (unit: CNY) of each district in Beijing in 2022.(B) The total healthcare savings (unit: Million CNY) brought by the cleaner heating policies from 2017 to 2022.(C) The per capita healthcare savings (unit: CNY) for each district resulting from the cleaner heating policies between 2017 and 2022).
Table 1 .
Summary statistics
TABLE d
B Difference-in-differences model B Data description | 6,526.8 | 2024-06-01T00:00:00.000 | [
"Environmental Science",
"Economics"
] |
Neutron Production in Thick Targets Irradiated with High-Energy Ions
1 Institute of Nuclear Science, School of Physics, University of Sydney, Sydney, NSW 2006, Australia 2 Physics Department, Aristotle University, 54124 Thessaloniki, Greece 3 Joint Institute for Nuclear Research, 141980 Dubna, Russia 4 Fachbereich Chemie, Philipps Universität, 35032 Marburg, Germany 5 Institut für Materialwissenschaften, Technische Universität Darmstadt, 64287 Darmstadt, Germany 6 Forschungszentrum Jülich GmbH, 52425 Jülich, Germany 7 Dr. Westmeier GmbH, 35085 Ebsdorfergrund, Germany
Introduction
Recent investigations on the interaction of relativistic ions (called primaries in this paper) above a total energy E kin > 10 GeV in thick heavy-element targets show unexpected phenomena in the spallation yield distributions, as well as surprisingly intense neutron emission.These unexpected phenomena constitute "unresolved problems" as described in [1,2].
(1) Spallation mass-yield curves in any thin target are completely understood with conventional reaction models like "limiting fragmentation" and "factorisation" [1,2].However, the spallation mass-yield curves in thick targets which are also produced by secondary fragments, generated in the interaction of primaries with target nuclei, are incompatible with these concepts of "limiting fragmentation" and "factorisation."Secondary fragments, in the following called secondaries, seem to release more neutrons when they interact than primaries.
(2) The neutron emission from thick targets irradiated with ions having E kin > 10 GeV is rather intense, in particular for 44 GeV 12 C and 72 GeV 40 Ar onto thick Cu-and Pb-targets.These phenomena need further investigation.
Experimental results published for thick targets irradiated with relativistic ions are scarce and sometimes contradictory.In the experiment on "neutron yields from 1 GeV/ nucleon 238 U ion beams on Fe target" by Yordanov et al. [3], the neutron spectra above 50 MeV were studied.The authors could fit their experimental data with well-known models and stated (quote) that "after a first beam-target collision the projectile residue may subsequently undergo the same type Table 1: The direct measurement of neutron production in THICK Pb-targets irradiated at the Synchrophasotron [11].
(a) The total number of neutrons n, generated by one primary ion ( 1 H, 2 H, 4 He, or 12 C) in a very thick lead target (Φ = 20 cm and L = 60 cm) and moderated within 1 m 3 paraffin, was measured by Vasil'kov et al.This work started in Dubna around 1980.The last column gives the ratio of the neutron yields at the energy E kin /A = 3.7 GeV compared with the E kin /A = 1.0 GeV.
Ion
Mass of reactions as just mentioned" (i.e., in agreement with wellaccepted standard models).The results reported in [1,2] show evidence that secondary fragments excite target nuclei stronger than primary ions, thus producing more neutrons than expected from calculation.Findings from [1][2][3] are not in contradiction as will be shown [4].It seems that as if unresolved problems arise only after an energetic limit that is barely approached in the 1 GeV/u 238 U + Fe experiment.This may become an important issue for high intensity, high energy heavy-ion accelerators presently under construction.
One essential aspect of this construction is the consideration of all aspects of radiation protection for the operation of these machines with respect to (i) workers in the laboratories, (ii) materials close to the beam line and target areas, (iii) and-not the least-the surrounding environment.
The aim of this paper is to concentrate on the experimentally known facts which may serve as benchmarks for any radiation protection model.Two major topics will be considered.
(1) The neutron emission from thick targets irradiated with ions in the energy range of 1 GeV ≤ E kin ≤ 44 GeV at the JINR in Dubna (Russia) and its influence on radiation protection.
(2) The experimental spallation mass-yields produced in a 20 cm thick Cu target in the irradiation with 72 GeV 40 Ar at the LBNL in Berkeley (USA).Calculations with modern code MCNPX 2.7a [5] are compared with experiments demonstrating that secondaries interact with target nuclei stronger than primaries.The corresponding neutron emission in the irradiation was measured to be large, however, quantitative results have not been published.
A key question in all investigations is the determination of the total number of neutrons produced in a thick target by a single ion with a well-defined primary energy.A target is considered as being thick when a large fraction of secondary particles induce additional interactions within this target.A recent publication by Yurevich et al. [12] reports on the direct measurements of the total number of neutrons emitted from lead targets of various configurations in proton irradiations in the energy range (1 GeV < E(p) < 3.7 GeV).The authors used various experimental techniques, including TOF and Pb(p,xn), to obtain the total number of emitted neutrons.They also investigated the same thick Pb target as the one employed in [11].The observed neutron numbers per proton for the same target as shown in Table 1(a) are (26 ± 4) at 1 GeV and (76 ± 7) at 3.7 GeV, which is in fair agreement with the results from [11].
Radiation Protection and Indirect Measurements of Neutrons
Produced by (1.0-1.5)GeV Protons at the Nuclotron Accelerator.A recent publication [6] from experiments at the Nuclotron accelerator in JINR describes measurements of the neutron dose from 1 GeV protons around two thick Pb target assemblies, called "Gamma-2" and "Energy plus Transmutation" (E + T).The neutron dose measurements were carried out close to the targets and in addition behind a 1.0 m thick concrete shielding wall of the experimental setups as shown in Figure 1.
The target system "Gamma-2" (for details see Figure 2 and Section 2.3 below) had been used earlier in the same experimental location of the Laboratory for high energy (JINR, Dubna) for heavy ion irradiations using the Synchrophasotron accelerator.Thus, one can compare the results from the present-day neutron dose experiments with earlier irradiations with relativistic heavy ions a decade ago using the same "Gamma-2" target.
The Pb core in the "Energy plus Transmutation" (E + T) target has a diameter of 8.4 cm and a length of 45 cm, and it is surrounded by a blanket of 206.4 kg natural uranium plus also a massive neutron shield.A detailed description can be found in [6,[13][14][15].
The neutron dose measurements were carried out with solid state nuclear track detectors (SSNTD's) by Fragopoulou et al. [6].Their experimental techniques allow to obtain separate results for The actual neutron ambient dose equivalent in units of Sievert (Sv) was calculated using experimental conversion factors.The results are shown in Table 2: The agreement between experiment and calculation is fine within uncertainties for these two target systems and in this energy range.The calculation for the "Gamma-2" target gave 15 neutrons per 1.0 GeV proton which is smaller than the corresponding number in Table 1(a) as the Pb target in Gamma-2 was smaller than Vassil'kov's target.
A further detailed analysis of the fission rate inside the massive uranium blanket for the (E + T) system has been carried out in [15].The authors used Monte Carlo code MCNPX 2.6C and showed that the experimental fission rates are only about (22 ± 14)% larger than the calculated ones.
Neutron Dose in the
Vincinity of the Target.The intermediate-fast neutron dose around the (E + T) target with its massive uranium blanket around a thick lead target is larger than around "Gamma-2", however, the thermalepithermal neutron dose is larger at "Gamma-2", due to the paraffin moderator around this target (see Table 2).
Neutron Dose behind 1 m of Concrete.
The "Gamma-2" target produced a considerable thermal neutron dose behind the concrete wall.The irradiation lasted 11 hours with a total fluence of 10 13 protons of 1 GeV on target, corresponding to an average of 2.5 * 10 8 protons/sec.The experimental neutron ambient dose equivalent behind the concrete wall was 37 μSv/h [6], which is too much to be tolerated by humans.The ICRP 66 (International Commission on Radiological Further experiments measuring the neutron dose equivalent under well-defined conditions during the irradiation with heavy ions onto thick targets are necessary with relativistic ions like 2 H, 4 He, and 12 C.Such irradiations had been carried out a decade ago using the "Gamma-2" target at the Synchrophasotron, however, without any quantitative measurements of the neutron ambient neutron dose equivalent outside the concrete shielding.In this paper, a "postfactum" estimate of the corresponding neutron ambient dose equivalent outside the concrete shielding is presented. 4He, and 12 C) at the Synchrophasotron.The comparison of the neutron emission from the "Gamma-2" target irradiated with heavy ions at the Synchrophasotron accelerator and with protons at the Nuclotron accelerator will allow the intercalibration of experimental results from both accelerators.Figure 2 shows the detailed lay-out of the "Gamma-2" target.It consists of a metallic core of either 20 Cu or 20 Pb disks (1 cm thick, 8 cm diameter) and it is surrounded by a 6 cm thick paraffin moderator.The moderator contains grooves for plastic vials containing 1 g of La or U for radiochemical studies.
Neutron Emission from the "Gamma-2" Target Irradiated with Protons at the Nuclotron and Heavy Ions ( 2 H,
The "Gamma-2" target allows two-parameter experiments: (1) In the irradiation with relativistic ions onto the metallic core, all kinds of spallation products are produced inside the metallic disks.These spallation products can be determined with standard radiochemical techniques after the irradiation.
( 3 and for heavy ion interactions at the Synchrophasotron in Table 4 using the same "Gamma-2" Pb target are shown.Table 3 allows the following conclusions: (i) The recent experimental values of B( 140 La) ratios obtained for (1.0-3.7)GeV proton irradiation are in fine agreement with model calculations.Details of the B( 140 La) distribution on top of the "Gamma-2" Pb target with MCNPX 2.7a calculations for 1.0 GeV and 2.0 GeV protons reveal good agreement between experiment and calculation [16].(ii) The B( 140 La) ratios between (3.7 GeV/1.0GeV) and (2.0 GeV/1.0GeV) agree with model calculations.
During the last decade of the operation of the Synchrophasotron until about the year of 2000, extended irradiations of "Gamma-2" targets with 2 H-, 4 He-, and 12 C-beams in the range of total energies from 3 GeV up to 44 GeV were carried out [7,8] where the metallic target core was copper or lead.Radiochemical sensors, such as stable lanthanum (see (1)) and natural uranium were irradiated and B( 140 La) and B( 239 U) values were obtained.Results of irradiations with 2 H-, 4 He-, and 12 C-ions onto "Gamma-2" targets at the Synchrophasotron are shown in Figures 3 and 4.
The resulting distributions for 2 H-, 4 He-, and 18 GeV 12 C-irradiations are surprisingly similar, irrespective of the projectile element and energy.In the 44 GeV 12 C irradiation, however, one observes a drastic increase in the production of 140 La and 239 Np.
Similar results are observed in experiments using a Cucore in "Gamma-2."The results are shown in Figure 4, where the average B-value for the investigation in five La-or Usensors on top of the moderator is given [7].Again, for 44 GeV 12 C irradiations, one observes a strong increase in Bvalue, whereas at lower total bombarding energies, B-values are grouping around significantly lower values.* * * B( 140 La) is the average value [7,9] from five La-sensors on top of the moderator (see Figure 2).
Neutron induced interactions were studied by this collaboration over a wide energy range: Wang et al. [17] studied the integral neutron energy spectra emitted from "Gamma-2" with the nuclear emulsion technique.They measured secondary neutrons with energies up to 1 GeV.Their results are complementary to the work of Yordanov et al. [3], who studied high energy neutron spectra behind 20 cm thick iron targets with counter techniques.
A summary of B( 140 La)-values, originating from the capture of low-energy neutrons, measured in 2 H, 4 He, and 12 C ion irradiations is given in Table 4.The experimental ratios of B-values at (3.7 AGeV/1.5 AGeV) are (i) (1.1 ± 0.1) for 2 H and 4 He induced reactions, and (ii) (2.3 ± 0.2) for 12 C irradiations, which is significantly higher.
The large B( 140 La)-value for 44 GeV irradiations is again an indication of something different, already observed in [1,2] and termed there as "unresolved problem."A similar observation was found earlier by Brandt [18].In the irradiation of a large Pb block with 3.7 AGeV ions from the Synchrophasotron, fine agreement between experiment and calculations for proton, deuteron, and alpha irradiations was observed.In the irradiation with 44 GeV 12 C, however, B( 239 Np) = (15±5) * 10 −4 (atoms per carbon ion per gram of Np) was measured, which is about twice as large as calculated by Tolstov and rather similar to our result shown in Figure 3.
All evidences demonstrate that one needs additional experiments to study the neutron production in thick targets using heavy ions at high energies.In these future experiments, one may be confronted with nontrivial radiation protection problems.1 GeV protons on the "Gamma-2" Pbtarget lead to a large neutron dose behind the concrete shielding; therefore, one might expect much more severe radiation protection problems in irradiations with 44 GeV 12 C beams.
If assuming that (i) one uses the same experimental setup as shown in Figure 1, and (ii) the "neutron ambient dose" behind the concrete shielding increases linearly with B( 140 La)-values that were measured on "Gamma-2" with Pb core at the Synchrophasotron, then one obtains estimated neutron ambient doses as given in Table 5.
The measured ambient neutron dose (μSv/h) and the experimental transmutation rate B( 140 La) in the interaction of (1 GeV p + Pb target) yielding a ratio [(μSv/h)/(n/ion)] = 2.3 ± 0.3 are in agreement with recent calculations.For irradiation with 44 GeV 12 C onto a thick Gamma-2 target core, however, one estimates an average which is much larger and statistically significantly different from 2.3.This is a further indication that one may observe for large beam energies onto thick targets significantly higher than calculated neutron numbers.The neutron ambient doses behind the concrete shielding exceed significantly the radiation protection allowance for humans to stay near this area during the experiment.A further clarification of this problem can only be obtained with more experiments.Such experiments are needed to understand the underlying physics more accurately, as well as to supply more adequate radiation protection shielding data for thick-target irradiations.
Thick Target Studies at LBNL, Berkeley, California
Thick-target experiments at Lawrence Berkeley National Laboratory (LBNL) started around 1980 with the irradiation of two Cu disks in contact, with each disk having a thickness of 1 cm and a diameter of 8 cm.The aim of such studies was the investigation of possible differences in nuclear interactions of relativistic secondary fragments in comparison with the relativistic primary ions as reviewed in [1,2] and presented in Figures 5 and 6.
The ratio of nuclear interactions induced by primaries to those induced by secondaries is larger in the first Cu disk than in the second Cu disk.The study of experimental yield ratios of individual spallation products A Z in the second Cu disk as compared with the first Cu disk reveals evidence for a different behaviour of secondary fragments as compared with the primary ions.The ratio R 0 ( A Z) is defined for two Cu disks in contact (d = 0 cm), where activities are decay-corrected to the end of bombardment.
Figure 5 shows the experimental setup and the results are shown in Figure 6.Our focus will be on the R 0 results.Due to considerable production of relativistic secondaries, one observes for all spallation products R 0 (A) > 1.00.It is interesting to note two surprising features in Figure 6: (i) R 0 for 24 Na is R 0 (24) = (1.50 ± 0.02).This value is considerably larger, by at least 25%, than all values based on various theoretical model calculations as has been discussed by Aleklett et al. [19] and in detail in [1,2]: the production of 24 Na by secondaries is larger than that by primaries.(ii) One observes for product masses above A = 56 a decrease in R 0 with increasing mass A. All theoretical interpretations have failed to describe this phenomenon.
The comparison of experimental results (see Figure 6) with the calculated R 0 (A) distribution using the MCNPX 2.7a code is shown in Figure 7 and reveals surprising results.Some details of this comparison shall be emphasized (i) For mass A = 7 ( 7 Be) and for masses 43 ≤ A ≤ 48 the agreement is good.
(ii) The experimental production rate of the isotope 24 Na (as well as 22 Na and 28 Mg) is significantly larger than models calculate.(Region 1 around A = 24).
(iii) Above A = 56 experimental R 0 decreases with rising A whereas an increase in R 0 with rising A is theoretically predicted.(Region 2 above A = 56).
The results for Region 1 could correlate with the result seen in Region 2. One finds an excess of experimental cross section in Region 1 whereas one misses cross-section in the second Cu disk just below the target mass in Region 2.
It may be useful to carry out further experiments to learn more about these phenomena from additional experimental approaches: (i) One should measure R 0 for many nuclides close to the target mass but also for 60 The experimental R 20 (A) distribution shown in Figure 6 is compared with the theoretical calculation using the MCNPX 2.7a code in Figure 8.The congruence between the theoretical fit and the experiment is remarkable.One observes only a slight experimental deficiency around masses A = 45 and a slight experimental excess for 24 Na.
The last issue of this paper is concerned with the radiochemical aspects in the study of a 20 cm thick Cu target (i.e., 20 Cu disks of 1 cm thickness in contact) irradiated with 72 GeV 40 Ar.In this irradiation at LBNL, a very strong neutron dose was registered even outside the experimental area of the Bevalac accelerator.However, quantitative neutron data about this event have never been released.
The 20 cm thick Cu target was designed as a twoparameter experiment: (i) The determination of neutron production was in principle possible, however, no results of neutron measurements were ever published, as mentioned several times.(ii) Nuclear reactions inside the Cu disks were actually studied, that is, spallation product yields in several 1 cm Cu disks were determined, yielding information about nuclear interactions of relativistic ions inside the entire thick Cu target.Both sets of information (neutron production rates plus spallation yields) are needed for a complete understanding of the reaction mechanism.Some detailed experimental yield ratios R i ( A Z) for two typical spallation nuclides ( 24 Na and 57 Ni) in several Cu disks (number = i) as compared with the first Cu disk (number = 1) will be discussed.The nuclide 24 Na was chosen as representative for "Region 1" and 57 Ni as representative for "Region 2." The experimental yield ratios are compared with their calculated ratio using MCNPX 2.7a in Figure 7. Two nuclides appear to be of particular importance: (i) The key isotope 24 Na (Region 1) is produced in the downstream 1 cm thick Cu disks (I > 1) definitely in larger yield than calculated by computer codes, including MCNPX 2.7a.
(ii) The isotope 57 Ni (Region 2 with A > 56) is produced in the downstream 1 cm thick Cu disks (I > 1) with about the same yield as calculated by MCNPX 2.7a.
The following Figures 9 and 10 present the respective ratios for R i ( A Z)-values, where i is the number of the Cu disk within the 20 cm Cu stack.The experimental ratios are compared with model calculations, experimental data tables are given in [21].
(i) 24 Na is produced more abundantly than calculated in every disk, just as in the "2 cm Cu disk" experiment.
(ii) 57 Ni is produced a little less abundantly in this experiment as compared with calculation.This deficiency is observed in all Cu disks in the 20 cm Cu target.
Repeating the argumentation presented for Figure 7, one can correlate the behaviour of 24 Na with the behaviour of 57 Ni.One finds an excess in experimental cross section for 24 Na in all Cu disks as compared with calculation, and one misses experimental cross section in all Cu disks for 57 Ni.This is continuing evidence for discrepancies that requests further experiments to measure product yields and additionally also the neutron production per ion in the reaction of 72 GeV 40 Ar with a thick Cu target (or similar).Another question to be asked is: what amount of "enhanced neutron production" can one expect in the reaction (20 cm Cu + 72 GeV 40 Ar) as compared with the wellinvestigated reaction (20 cm Cu + 44 GeV 12 C)?There exists only one indirect measure for the "enhanced nuclear destruction" ability of secondary fragments in thick copper targets, which is the measurement of R 0 ( 24 Na) in "two Cu disk experiments" (Figures 5 and 6) for a large variety of projectile ions using several accelerators around the world.The result of such investigation is presented in Figure 11.It was shown in [1,2] that there is a smooth relation between R 0 ( 24 Na) ) GeV Figure 11: The ratio R 0 ( 24 Na) as a function of total kinetic energy.The calculated "theoretical" line from [1,2] is given for comparison.and the energy of the ion when the experimental R 0 ( 24 Na) values are plotted with respect to the total ion energy E kin , irrespective of the ion.Some of those experimental ratios for 24 Na production are (i) 1.5 GeV 1 H onto two Cu disks yield R 0 ( 24 Na) = 0.99 ± 0.03; (ii) 44 GeV 12 C onto two Cu disks yield R 0 ( 24 Na) = 1.24 ± 0.02; (iii) 72 GeV 40 Ar onto two Cu disks yield R 0 ( 24 Na) = 1.50 ± 0.02.
The increase in R 0 ( 24 Na) for 72 GeV 40 Ar as compared with 44 GeV 12 C may yield stronger increase in the "enhanced neutron production"-but only an experiment can give the answer.This answer is needed to understand neutron multiplicities in thick targets and to provide proper radiation protection for any new construction of high intensity, high energy heavy-ion accelerators.
Recent calculations using the MCNPX code indicate that the simulated mass yields are model dependent.Conclusions presented here on the 72 GeV Ar + Cu results (Figures 7-10) are based on the current results of calculations.A detailed analysis of the MC-calculated mass yields in heavy-ion interactions using all available physics models in the MCNPX code and comparison with the experimental data from the literature is in progress and will be presented in another publication.
Conclusions
Neutron ambient dose equivalents have been measured in the irradiation of a 20 cm thick Pb target with 1 GeV protons close to the target and at larger distances in the experimental hall.Experiments and calculations based on DCM/CEM code agree within uncertainties.Based on experiments where breeding rates B( 140 La) were measured with 1 GeV protons and 44 GeV 12 C under similar conditions, one can estimate the neutron ambient dose equivalent during irradiations with 44 GeV 12 C onto thick Pb and Cu targets close to the target and in the experimental hall.The estimated neutron ambient dose equivalents are on the order of 1000 μSv/h at about 5 meters distance and behind 1 meter thick concrete shielding for the irradiation parameters given.Such large doses require that humans stay away a considerable distance from the experimental area, maybe more than 50 m.One may observe in these heavy ion irradiations of thick targets some physical phenomena constituting a radiation protection problem connected to the "unresolved problems" as described in [1,2].
Detailed calculations carried out with the MCNPX 2.7a code have shown that radiochemical spallation yield distributions in thick Cu targets irradiated with 72 GeV 40 Ar cannot be reproduced by calculations.Secondary fragments destroy Cu nuclei stronger than primary 72 GeV 40 Ar ions, thus confirming observations reported in [1,2].The neutron ambient dose equivalent in the irradiation of 20 cm thick Cu targets with 72 GeV 40 Ar is experimentally known to be large, but data have never been published.Such published data would be useful for two reasons: (i) to design the radiation protection shielding for heavy ion accelerators producing high energy heavy ions with large intensities, and (ii) to learn more about the physical reason connected with these "unresolved problems." (i) low-energy neutrons with E(n) < 1 eV, (ii) epithermal neutrons with 1 eV < E(n) < 10 keV, and (iii) intermediate-fast neutrons with 0.3 MeV < E(n) < 3 MeV.
Figure 1 :
Figure 1: The spallation sources used in JINR in Dubna, Russia: (a) "Gamma-2" target, (b) "Energy plus Transmutation" (E + T) assembly.The term "Uz blanket" stands for the uranium blanket surrounding the Pb target.(c) This diagram illustrates the positions of the spallation sources, the concrete shielding wall, and the locations of the SSNTDs (solid state nuclear track detectors) that serve to determine the neutron ambient dose equivalent.Distances can be estimated from the thickness of the concrete wall which is 1.00 m (Figure taken from [6]).
Figure 5 :
Figure 5: The original "two Cu disk experiments" in Berkeley showing the first evidence for unresolved problems [1, 2] in irradiations with 72 GeV 40 Ar at the Bevalac accelerator (LBNL) after 1980.The original "two Cu disks" experiments were carried out from 1980 with relativistic 40 Ar-ions at the Bevalac in Berkeley.(a) Schematic representation of the target set-up using two Cu disks and a surrounding guard ring around the second disk.Two configurations were irradiated: (i) (top): 0 cm distance between the disks (R 0 ), (ii) (bottom): 20 cm distance between the disks (R 20 ).(b) Autoradiographic negative picture of a Cu disk after an irradiation with 72 GeV 40 Ar showing the well-focussed Ar beam in the centre.(c) Schematic representation of 3 different reaction paths.The path Ar-3 is of particular importance for the study of effects due to secondary fragments.
Figure 6 :
Figure 6: The original "two Cu disk experiments" in Berkeleyshowing the first evidence for unresolved problems[1,2] in irradiations with 72 GeV 40 Ar at the Bevalac accelerator (LBNL) after 1980.R d ratios for reaction products A Z measured in interactions of 72 GeV 40 Ar with two Cu disks are defined as: (activity of nuclide A Z downstream/activity of nuclide A Z upstream).The distance between the two Cu disks is d = 0 cm for R 0 and d = 20 cm for R 20 .These ratios can be determined very accurately.
Figure 7 :
Figure 7: Comparison of the experimental and theoretical R 0 (A) distributions in the interaction of 72 GeV 40 Ar with two Cu disks.
Figure 8 :
Figure 8: Comparison of the experimental R 20 (A) distribution and the MCNPX 2.7a calculation.
The total number of neutrons n generated by one primary ion ( 1 H, 2 H,4He, or 12 C) in a very thick lead target (Φ = 20 cm and L = 60 cm) and calculated with the model MCNPX2.7a.The last column gives the ratio of the neutron yields at energy E kin /A = 3.7 GeV and at E kin /A = 1.0 GeV.
Table 2 :
[6]erimental neutron ambient dose equivalent from the irradiation of a Pb target in "Gamma-2" and "Energy plus Transmutation" (E + T) with 10 13 protons and a comparison with model calculations[6].
Protection, Recommendation No. 66) recommends that for a position outside the concrete shielding of accelerators in "controlled areas," a dose should not exceed 10 μSv/h.Therefore, the Health Physics Department of JINR requested the scientists to stay at least 50 m away from the experimental area during irradiation.For irradiations of the (E + T) target, similar rules applied, but the radiation level outside the experimental area was considerably smaller due to the additional neutron shield provisions around the Pb/U target.
Table 5 :
[8,10,19] neutron ambient doses (for 2.5 * 10 8 ions/s) in irradiations with 44 GeV 12 C, as based on recent experiments with 1 GeV protons onto a "Gamma-2" Pb target.The calculated neutron production rate in neutrons per ion is taken as the basis for calculation of the last column. 140)-value estimated from the comparison of experiments on "Gamma -2" with Pb target and "Gamma-2" with Pb/U target as described in[8,10,19].
[20]1Co,62Zn, and64Cu in order to compare with Cumming et al.'s[20]data where production rates for these isotopes in thin Cu targets irradiated with 80 GeV 40 Ar were determined. (i) Additionally, one should carry out neutron counting experiments using thick Cu targets having thicknesses 2 cm ≤ T ≤ 20 cm irradiated with 72 GeV 40 Ar ions.
Figure9: Comparison of the experimental R i ( 24 Na) yields in the 20 cm Cu stack with theoretical values from the computer code MCNPX 2.7a.R i ( 24 Na) is the activity ratio of 24 Na in Cu disk number "i" as compared with the first disk.Figure 10: Comparison of the experimental R i ( 57 Ni) yields in the 20 cm Cu stack with theoretical values from the computer code MCNPX 2.7a.R i ( 57 Ni) is the activity ratio of 57 Ni in Cu disk number "i" as compared with the first disk. | 6,812.4 | 2011-10-03T00:00:00.000 | [
"Physics"
] |
A Deep Belief Network and Dempster-Shafer-Based Multiclassifier for the Pathology Stage of Prostate Cancer
Object Pathologic prediction of prostate cancer can be made by predicting the patient's prostate metastasis prior to surgery based on biopsy information. Because biopsy variables associated with pathology have uncertainty regarding individual patient differences, a method for classification according to these variables is needed. Method We propose a deep belief network and Dempster-Shafer- (DBN-DS-) based multiclassifier for the pathologic prediction of prostate cancer. The DBN-DS learns prostate-specific antigen (PSA), Gleason score, and clinical T stage variable information using three DBNs. Uncertainty regarding the predicted output was removed from the DBN and combined with information from DS to make a correct decision. Result The new method was validated on pathology data from 6342 patients with prostate cancer. The pathology stages consisted of organ-confined disease (OCD; 3892 patients) and non-organ-confined disease (NOCD; 2453 patients). The results showed that the accuracy of the proposed DBN-DS was 81.27%, which is higher than the 64.14% of the Partin table. Conclusion The proposed DBN-DS is more effective than other methods in predicting pathology stage. The performance is high because of the linear combination using the results of pathology-related features. The proposed method may be effective in decision support for prostate cancer treatment.
Introduction
Prostate cancer is the most common cancer in men, with around 1.1 million cases diagnosed and approximately 309,000 deaths in men worldwide in 2012 [1]. It is estimated that 40-50% of men may also have potentially extraprostatic disease [2].
Carcinectomy and radiotherapy are the typical treatments for prostate cancer [3]. The choice of treatment for prostate cancer requires extensive experience and analysis of treatment cases. Pathological staging is the process of predicting the likelihood of prostate cancer disease spreading in a patient prior to treatment. The clinical stage evaluation is based on data gathered from clinical tests that are available prior to treatment or the surgical removal of the tumor. Cancer staging evaluation occurs both before and after the tumor is removed: the clinical and pathological stages, respectively [4]. Pathologic staging is determined after the removal of the tumor tissue and after surgery. This is more likely to be more accurate than clinical staging because it evaluates the direct nature of the disease. Therefore, the prediction of pathological stages using clinical data analysis is an important factor in the treatment of prostate cancer [5].
Pathologic staging prediction is very important because it provides physicians with optimal treatment and management strategies. For example, radical prostatectomy (RP), the surgical removal of the prostate gland, provides the best opportunity for cure when prostate cancer is localized and accurate prediction of the pathology stage can provide the most beneficial treatment approach [6][7][8]. Currently, Partin tables are used to predict the prognostic clinical outcome for prostate cancer, which are based on statistical methods such as logistic regression [9,10]. The Partin tables use clinical test data including prostate-specific antigen (PSA) level, Gleason score, and clinical T stage to predict the pathology stage. While the Partin tables have been verified from 2001 to 2011, there are questions about their applicability to current patients following environmental changes [11]. Thus, a new classification method using machine learning is needed to provide an accurate prediction of the pathology stage [12].
Deep belief networks (DBN) are a deep learning technique and is an effective method for classification prediction [13,14]. As DBN supports both unsupervised and supervised learning, it is possible to effectively learn about uncertain data relationships [15,16]. Because PSA level, Gleason score, and clinical T stage for stage prediction have uncertainties in each patient, a combination of evidence for each variable is needed. The Dempster-Shafer theory (DS) is a technique used to fuse information based on trust values [17,18]. The DS allows the combination of evidence from different sources to arrive at a degree of belief (represented by a mathematical object called a "belief function") that considered all available evidence [19,20]. This technique is a method for fusing information using a stochastic calculation method for belief values [21]. This allows fusion of the classification results of each variable to the pathology stage.
In this paper, we propose a DBN-DS-based multiclassifier for pathologic stage prediction of prostate cancer. The proposed DBN-DS uses patient PSA level, Gleason score, and clinical T stage and three DBNs to predict the pathology stage by combining the predicted information from the classifier. The classifiers are created by learning data according to features. When output values are generated using each learned DBN classifier, the final predicted result is provided by stochastically calculating the predicted output from each DBN classifier using DS. This paper is organized as follows: Section 2 presents the proposed technique and its process. Section 3 explains the experiments and presents their outcomes. Finally, Section 4 presents the conclusions.
Materials and Methods
2.1. Data Set. The study data comprised 6345 male patients extracted from the Korean Prostate Cancer Registry (KPCR) which is extended from Smart Prostate Cancer Data Base (SPCDB) at six tertiary medical centers in Korea [22]. The three input variables consist of initial PSA, Gleason score, TRUS volume, and clinical T stage. Two output variables consisting of pathologic T stage (pT2a, pT2b, pT3a, pT3b, and pT3c) and N stage (pN1) were used. The output variables are transformed using the guidelines of the American Joint Committee on Cancer (AJCC), which were used to identify the pathologic stage as organ-confined disease (OCD; pT2+) or non-organ-confined disease (NOCD; pT3+ or N+) [23]. For the experiments, the data from the KPCR were divided into a training set 70% (4039 patients) and a validation set 30% (2306 patients).
Deep Belief Network.
A deep belief network (DBN) is a generative graphical model or a type of deep neural network composed of multiple layers of latent variables, with connections between the layers but not between the units within each layer. The DBN is composed of restricted Boltzmann machine (RBM) layers. The learning method in the DBN is done by configuring the visible layer and hidden layer 1 into a single RBM. The DBN is composed of multiple layers of RBMs [24]. The RBMs consist of visible and hidden unit layers. Once learning is complete, hidden layers 1 and 2 are trained via the RBM by giving a new input as a value of the hidden layer 1. As such, learning is performed up to the last layer sequentially [25]. One classification technique using the DBN is back propagation, which is configured in the uppermost layer in the DBN [26]. This technique shows better results than an artificial neural network (ANN), which uses a connection intensity that is arbitrarily selected.
In this study, we constructed a classifier for three input and two output variables to construct a multiclassifier, as shown in Figure 1. We created one classifier for each variable. Our idea was to use multiclassifiers for each variable [27]. The purpose of this study was to make a linear combination of the predictions of the classifiers using DS [28]. Therefore, one variable must be converted into several input values. As PSA levels are continuous data, they were converted into binary numbers and configured as an input node. Because Gleason score and the clinical T stage are categorical data, they constitute an input node by constructing data in flag form.
Dempster-Shafer-Based Information Fusion. Dempster-Shafer (DS) is a mathematical theory that deals with the uncertainty and inaccuracy problems presented by Arthur
Dempster and Glenn Shafer [29]. The DS provides an effective method for establishing evidence intervals using belief and likelihood values for the data set. The DS can support the combination of information. As a result, it is possible to use a combination rule to set various information as an evidence value and to calculate the result of all the evidence [30].
The DS expresses the degree of certainty as a section and sets mutually exclusive hypotheses such as probability. The set of objects is called the environment and is denoted by θ. The θ can have several elements such as θ = θ1, θ2, θ3, … , θk , and the number of subsets is 2 k . When θ has only one element, it is called an identification frame. A set of 2 k subsets is called a power set and is denoted by θ. The degree to which θ is supported by any evidence is called the basic probability assignment function m (1). The m is mapped to a probability value of 0 for an empty set, and the sum of m is 1 for all subsets of θ (2).
Belief H , which is the belief value for any hypothesis H (hypnosis; belief in a hypothesis is constituted by the sum of the masses of all sets enclosed by subjective probabilities) by given evidence, as shown in The degree of trust depends on the reliability of the given evidence and on the overall environmental impact; the ratio of the degree is expressed by e.
where r is a value between 0 and 1 and is true if r = 0 and false if r = 1. The DS calculates the value of a new belief through the process of fusion between different evidence. Thus, the convergence between the evidence can be expressed as (5); if X ∩ Y = ∅, then the convergence value of the two evidence is zero.
The DS expresses the confidence measure for H as Bel H , Pls H and the term as the interval. This interval is called the "evidential interval." Plausibility Pls means the extent to which the hypothesis is not negated based on evidence (empty period except for true and false intervals), which means the maximum likelihood of being trusted. Bel has a range from 0 to 1 (true and false), Pls can be defined as in (7) and has a value of [0,1]. Likewise, the likelihood values can express the process of fusion from multiple evidence as well as the fusion of belief values.
Pls U = Pls 1 ⨁Pls 2 ⨁Pls 3 ⨁⋯ ⨁Pls n 8 In this study, three output data predicted from a multiclassifier were fused and calculated. The calculation process using DS shown in the figure as DBN#1 (initial PSA) was set to m 1 , DBN#2 (Gleason score) was set to m 2 , and DBN#3 (clinical T stage) was set to m 3 . For the output data, the empty set of each of m 1 , m 2 , and m 3 is given by As described above, m 1 , m 2 , and m 3 were obtained, and then m 4 is combined. The combination of m 4 is shown in Next, the interval of the pass and fail of the evidential interval are summarized as As described above, the evidential interval section is constructed for OCD and NOCD, and the higher probability value of OCD and NOCD was set as the final output value.
Uncertainty data processing is a critical issue in the data fusion process. The DS and the Bayesian methods were compared to deal with this uncertainty. Unlike Bayesian inference, DS can contribute different levels of information to each source. In addition, a popular approach to data fusion has been established; unlike the Bayesian method, reliability can be assigned to all subsets of a hypothetical group, making it possible to form distributions for all subsets [31].
Result
3.1. Dataset Description. The characteristics of the initial PSA variable in the OCD and NOCD groups are shown in Table 1. Among the 6345 men, the average PSA levels in the OCD and NOCD groups in the training set were 9.535 and 18.606 ng/mL, respectively. In general, the level in the OCD group was higher, and the validation set also shows a difference of 9.377 and 17.899 ng/mL in the OCD and NOCD groups, respectively. The difference in values between the training and validation sets was not large. Although a high number of patients were observed at maximum, this is not a problem for analysis because they were only a fraction of the outlier compared to the mean.
The Gleason scores in the OCD and NOCD groups are shown in Table 2. Patients with OCD had a high Gleason score of 6. The NOCD group had scores of 6 or more. The difference between the OCD and NOCD groups was significant. In the scores below 5, OCD is more distributed than NOCD, and even more than 9 patients showed more NOCD patients.
The clinical T stages in the OCD and NOCD groups are shown in Table 3. Most patients were T2+. T1a occurred only in patients with OCD. In addition, many patients that are distributed in OCD until T1+ and patients with T3+ belong to NOCD. Although all variables are bounded by OCD and NOCD, there are many patients who belong to the same distributions. Figure 2. The training set was first changed to binary form. The initial PSA values were expressed as nine binary numbers based on the highest value (440 ng/mL). The Gleason score was composed of nine flags ranging from 3 to 10. The clinical T stage consisted of eight flags from T1a to T3b. The binary data of each of these variables was learned by the DBN classifier; that is, the first DBN consisted of nine input nodes because it was the input data of the initial PSA binary data. The output nodes of all classifiers were composed of two so that OCD and NOCD could be calculated with probability. The DBN consisted of three RBM layers, with the number of nodes of each RBM the same as the number of input nodes. Unsupervised learning was performed 100 times in total, while supervised learning using back propagation was performed 1000 times. Finally, we calculated the probability of the output variables as DS and determined the final number of m 4 (OCD) and m 4 (NOCD) as the final outputs.
Experiments.
To evaluate the DBN-DS-based multiclassifier, the entire data set was divided into a 70% training set and a 30% testing set. The control groups included Decision Tree C4.5, naive Bayesian (NB), logistic regression (LR), back propagation (BP), support vector machine (SVM), random forest (RF), deep belief network, and Partin tables. The experiments compared the sensitivity, specificity, accuracy, and area under the curve (AUC) using confusion matrix [31] and receiver operating characteristics (ROC) curve analysis [32]. The experimental results of confusion matrix are shown in Table 4. In general, the results from a training set are better than those of a validation set because of differences in dataset volumes. Sensitivity was defined as the probability of correctly matching NOCD. Because NOCD has less data than OCD, it is difficult to match. The proposed method has a 61.77% improved performance compared to those of the other models. In other words, the probability of matching NOCD is very important because it is a prediction of the risk of the pathology stage. Specificity was defined as the probability of correctly matching OCD. NB had the highest specificity, with 93.78%, but its sensitivity was low. The proposed method showed 93.56% higher performance than those of the other models. The accuracy was defined as the probability of predicting both NOCD and OCD. The proposed model had the highest accuracy, at 81.27%. The AUCs are shown in Figure 3 and Table 5.
The ROC curve has the highest DBN-DS of 0.777. The error of all models was about 0.01, and the p values were all 0.000, so the experimental results of the ROC curves were usable. The DBN-DS predicted each of the three classifiers constructed for each variable separately and combined them into one. In this paper, we propose a new classification method for the classifier. The proposed method is based on the classification of two classifiers. In addition, as the DS computes probability, if one classifier predicts NOCD at a ... ... high number and the two classifiers predict a low number for OCD, then the NOCD is finally predicted based on the belief value of the DS algorithm.
Next, the DBN-DS was evaluated. The result of the confusion matrix for DBN-DS is shown in Table 6. In addition, the results of the ROC curve analysis are shown in Figure 4 and Table 7. DBN#1 learned the initial PSA. DBN#2 learned the Gleason score, while DBN#3 learned the clinical T stage.
Among the three variables, the initial PSA level had the highest prediction rate. The PSA level is closely related to pathologic stage and is the most important parameter in prostate cancer. Variables combined with PSA showed a high prediction rate. In other words, the reason for the high prediction rate was that the Gleason score and clinical T stage also affect the pathology. However, the combination of Gleason score and clinical T stage had a lower accuracy than that predicted by the initial PSA level alone. The two variables are uncertain because they are diagnosed according to the doctor's experience. However, when combined with PSA level, the performance was much higher. In this study, we found that initial PSA was the most important predictor, and that the Gleason score and clinical T stage were also important predictors.
Discussion and Conclusion
Prediction models for pathology staging of prostate cancer are based on clinical tests and can be used to predict the spread of cancer. It is possible to diagnose cancer more precisely at the postoperative, pathological stage and to determine the degree of metastasis of prostate cancer.
We proposed a DBN-DS-based multiclassifier approach to predict the pathologic stage of prostate cancer. The proposed method provides a predictive model to improve accuracy through deep learning and information fusion based on the relationship between data measured using clinical tests. The inputs include initial PSA level, Gleason scores, and clinical T stage variables. The output can be OCD or NOCD in pathological staging (pT). This approach was evaluated using an existing validated patient dataset that included 6345 patient records from the KPCR database, which collected data from six tertiary medical institutions.
The performance of the proposed DBN-DS was compared with that of the NB, LR, BPN, SVM, RF, DBN, and Partin tables. The results showed that the proposed DBN-DS had better sensitivity and accuracy than all other methods.
In a recent pathological staging methodology study, Cosma et al. [4] use a neuro-fuzzy model, with an approach similar to ours. The results also indicated that the neural network-fuzzy-based computational intelligence learning approach is suitable for prostate cancer staging and exceeds the performance of the Partin tables. The neuro-fuzzy model and our proposed method aim to predict whether a patient has OCD (pT2) or NOCD (pT3+). All methods use the initial PSA level, Gleason scores, and clinical T stage to predict the pathologic stage of prostate cancer, but the [4], although different data sets were used for each study; however, they show a high consistency with the results of the present study. Currently, the proposed DBN-DS method is implemented as a research tool. Once the clinical evaluation is completed, the proposed tool will be developed as an easyto-use clinical decision support system that can be accessed by clinicians.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean Government (NRF-2016R1A2B4015922). | 4,501 | 2018-03-19T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
An Iterative Method for Time-Fractional Swift-Hohenberg Equation
We study a type of iterative method and apply it to time-fractional Swift-Hohenberg equation with initial value. Using this iterative method, we obtain the approximate analytic solutions with numerical figures to initial value problems, which indicates that such iterative method is effective and simple in constructing approximate solutions to Cauchy problems of time-fractional differential equations.
Introduction
In 1695, L'Hopital wrote a letter to Leibniz, and he proposed a problem: "What is the result of / if = 1/2?"Lcebniz answered to L'Hopital " 1/2 will be equal to √ : .This is an apparent paradox, from which, one day useful consequences will be drawn."[1,2] Later, as the development of mathematics, especially, the theory of operator, researchers started to have a new recognition of the fractional derivative.They found the fractional derivative has a wide applications in many fields, such as physics, chemistry, and many other sciences [3,4].It should be emphasised that the fractional derivative is defined by integral and it is a nonlocal operator with a singular kernel; hence it can provide an excellent instrument for description of memory and hereditary properties of various physical processes.For example, half-order derivatives and integrals proved to be more useful for the formulation of certain electrochemical problems than the classical models [5].However, the definition of fractional derivative has not been unified; there are many kinds of fractional integral and fractional derivative, such as in the sense of Riemann-CLiouville, Caputo, Riesz, and Weyl [6][7][8][9].The Riemann-Liouville fractional integral and the Caputo fractional derivative are the most commonly used.For a fractional system with respect to some initial or boundary conditions, one of the fundamental problems is naturally that what is the exact solution or approximate solution to such system.Solving fractional system is usually more difficult than the classical system, for its operator is defined by integral.Luckily, there are some different effective methods which have been developed to construct approximate solutions of fractional systems and even obtain the exact solutions such as the homotopy analysis method [10][11][12], the residual power series method [13][14][15][16], the differential transform method [17,18], the Laplace transform method [19], the perturbation method [20,21].In addition, using polynomials to approximate the fractional system is an effective method as well, such as Jacobi polynomials [22], Bernstein polynomials [23], and Chebyshev and Legendre polynomials [24].In this paper, we introduce a type of iterative method, based on decomposing the nonlinearity term, for solving a class of functional equations [25][26][27][28][29][30].
Outline of Paper.In Section 2, we introduce some necessary concepts and lemmas on fractional differential equations.In Section 3, a type of iterative method for solving a class of functional equation is presented.Also, we obtain the convergence analysis of this iterative method.In Sections 4 and 5, we take the linear time-fractional S-H equations, including the term of dispersion, with respect to differential initial conditions as examples to illustrate the strong power of such iterative method, respectively.
Notations of Fractional Calculus
In subsection, we introduce some concepts and lemmas we need in this paper, such as the Gamma function, the Mittag-Leffler function, the Riemann-Liouville fractional integral, and the Caputo fractional derivative.It should be emphasised that there are many kinds of fractional integral and fractional derivative, such as in the sense of Riemann-Liouville, Caputo, Riesz, and Weyl [6][7][8][9].The Riemann-Liouville fractional integral and Caputo fractional derivative are the most commonly used version.
A Type of Iterative Method
In this section, we introduce a generalized iterative method for solving a class of functional equation (7) (see below).Some more specific details about this type of iterative method could be found in [25][26][27][28][29][30] and the references therein.Now we state this iterative method as the following lemma together convergence analysis.
Proof.Obviously, the nonlinear operator N can be decomposed Similarly, linear operator L can also be decomposed Set M ( (, )) fl L ( (, )) + N ( (, )) . then Define the following recurrence equations: Since operators L and N are contractive, then M is also contractive; i.e., there exists a constant 0 < < 1, such that where ‖ ⋅ ‖ denotes the usual norm on Banach space .What is more, for +1 , one can obtain Since 0 < < 1, the series ∑ ∞ =1 +1 ‖ 0 ‖ converges absolutely as well as uniformly.
According to the Weierstrass M-test, one can obtain that the series ∑ ∞ =0 converges absolutely as well as uniformly.
Linear Swift-Hohenberg Equation
The Swift-Hohenberg (for short S-H) equation is a model pattern-forming equation which was derived from the equations for thermal convection by Jack Swift and Pierre Hohenberg [33].Here = (, ) is a scalar function defined on the line or the plane, is a real bifurcation parameter, and () is some smooth nonlinearity.The S-H equation plays an important role in pattern formation theory.In [34], Braaksma et al. proved the existence of quasipatterns for the S-H equation.Also, wave process described by the S-H equation is important as well.For example, it describes the patterns inside thin vibrated granular layers, the mechanism of the amplitude of optical electric field inside the cavity, and so on [35].
Fractional S-H equation was firstly introduced in [36 and obtained the approximate analytic solution.Here is the dispersive parameter.Lately, in [39], Merdan applied the fractional variational iteration method to obtain the approximate solution to time-fractional S-H equation with respect to initial condition (, 0) = 1/10 sin(/).Also, homotopy analysis method is valid for S-H equation as well [40].
In this subsection, we apply the iterative method introduced in Section 3 to linear time-fractional S-H equation with different initial values, such as , sin , and cos .
Linear Time-Fractional S-H Equation with Initial Value
(, 0) = .Consider the following linear time-fractional S-H equation: with initial condition Clearly, applying I to both sides of ( 21), then initial value problem ( 21)-( 22) is equivalent to the following integral equation: where The solution to system (21)-( 22) what we are looking for has the form According to iterative scheme ( 9)- (10), one can obtain
Linear Time-Fractional S-H Equation with Initial Value
(, 0) = sin .Consider the following linear time-fractional S-H equation: with initial condition Applying I to both sides of (29), then initial value problem (29)-( 30) is equivalent to the following integral equation: where The solution of system ( 29)-( 30) what we are looking for has the form We shall distinguish the following two cases.
Linear Time-Fractional S-H Equation with Initial Value
(, 0) = cos .Consider the following linear timefractional S-H equation: with initial condition Applying I to both sides of (41), then initial value problem (41)-( 42) is equivalent to the following integral equation: where (45) We shall distinguish the following two cases.
Linear S-H Equation with Dispersion and Initial Value
(, 0) = .Consider the linear S-H equation with dispersion [41] as follows: with initial value Here ∈ R is a parameter.Clearly, applying I on both sides of (53), then initial value problem (53)-( 54) is equivalent to the following integral equation: where The solution to system (53)-( 54) what we are looking for has the form According to iterative scheme ( 9)-( 10), one can obtain
Nonlinear Time-Fractional Swift-Hohenberg Equation
In this subsection, we apply the iterative method introduced in Section 3 to nonlinear time-fractional S-H equation with such as .
Nonlinear S-H Equation with
Clearly, applying I on both sides of (61), then initial value problem (61)-( 62) is equivalent to the following integral equation: where According to iterative scheme ( 9)- (10), one can obtain Advances in Mathematical Physics Hence the -th approximate solution to (61)-( 62) is and the exact solution to (61)-( 62) is Numerical Simulation.See Figure 5.
Nonlinear Time-Fractional S-H Equation with Dispersion and Initial
Value (, 0) = .Consider the following nonlinear S-H equation with dispersion: with initial value Clearly, applying I to both sides of (69), then initial value problem (69)-( 70) is equivalent to the following integral equation: where Numerical Simulation.See Figure 6.
Concluding Remark
This paper introduce an iterative method which has considerable power in constructing approximated solutions and even exact solutions to time-fractional differential equation.We take the linear and nonlinear Swift-Hohenberg equations with different initial conditions to illustrate the effectiveness of such method.
Figure 2 The
Figure 2 | 1,915.2 | 2018-09-02T00:00:00.000 | [
"Mathematics"
] |
The Influence of Investor Emotion on the Stock Market: Evidence from an Infectious Disease Model
InMarch 2020, four consecutive circuit breakers in the US stockmarket underscored the impact of investor sentiment on the stock market. With the development of technology, public opinion and other information now spread easily through social media and other channels, indirectly affecting investor sentiment. *is makes it important to understand the underlying dynamics of such situations to help manage the market impact of such events going forward. To that end, we analyze investor sentiment, investor structures, and the capital market fuse mechanism using infectious disease dynamics. We use an extension of the SIR (susceptible, infectious, and recovered) model, called the dynamic SIRS model (where individuals return to a susceptible state), to simulate the impact of investor sentiment on the stock market. Accordingly, we study the circuit breakers in the US stock market and the simulation results of the model to analyze the fuse mechanism process in China that triggers a pause in the market based on volatile trading. *e results of our study show that when the influence rate of investor mutual communication increases or when the emotional calm rate decreases, investor emotions will start to diffuse, leading to an increase in the probability of either a serious stampede or zealous overbuying in the stockmarket. At the same time, the trading frequency of investors and the ratio of investors in both buying and selling directions will have a certain formal impact on the direction of the stock market, with the final impact determined by the ratio of normal investors to emotional investors. When emotional investors dominate the market, their emotions are diffused throughout. Our study provides the reference for relevant agencies to monitor and improve the stock market fuse mechanism in the future.
Introduction
Following the stock market crash of 1987, US regulators put the first circuit breakers in place to prevent the repetition of a swift plunge in the Dow Jones Industrial Average. A circuit breaker is a safeguard that pauses trading for 15 minutes in the hope that the market will calm itself. Since then, a circuit breaker has been triggered only once in 1997, until March 2020 when four consecutive circuit breakers were used in the US stock market-on March 9, March 12, March 16, and March 18-underscoring the strength of the effects of investor sentiment on the stock market. e circuit breaker or fuse mechanism, also known as the automatic suspension mechanism, is meant to control risk when the stock index reaches a specified fusing point. is fuse mechanism, which stops trading to give the market room to recover and calm down, gives regulatory authorities time to take relevant risk control measures before continuing with trading. e fuse mechanism has been put in place for foreign exchanges and the three major exchanges in China, where it was triggered during the trading of China's A-shares between January 1 and January 8,2016. e use of the four circuit breakers in the US stocks in March 2020 and the two circuit breakers in China's A-shares in 2016 reinforce the importance of controlling risks in the capital market. However, the factors that cause the circuit breaker environment remain under debate. e objective of this study is to analyze and clarify the process and the phenomena that trigger the circuit breakers. In the fusing process, the most obvious phenomenon is the escalation of public pessimism, leading to an imbalance between trading volume and the trading ratio, and to people stepping on people, reinforcing the importance of investor sentiment.
is study investigates investor sentiment, investor structure, and the capital market fusing system through the infectious disease dynamics. We use an extension of the SIR (susceptible, infectious, and recovered) model (a basic epidemiological model designed to describe the transmission of infectious diseases), called the dynamic SIRS model (where individuals return to a susceptible state), to simulate the impact of investor sentiment on the stock market. We combine the simulation results of the model and study the US stock circuit breakers and the China A-shares fusing process to provide a reference for the relevant agencies to monitor and improve the fuse mechanism going forward.
Literature Review
Many scholars have studied the linkage between fluctuations in investor sentiment and stock market returns. Among these, Cui and Zhang [1] indicate that investor sentiment has a significant impact on the risk of collapse. Tang et al. [2] show how the number and tone of media reports lead to a change in investor mood and how the links between media and investors cause a fluctuation in attention and the herd effect. Xiao et al. [3] find that investor enthusiasm can promote information exchange and that investor emotions influence each other. e herd effect is more pronounced when the stock market falls and the impact is relatively less when it rises. Zheng [4] believes that investor emotional intensity affects the herd effect in the stock market: the stronger the emotions, the greater the herd effect. Compared with pessimistic leanings, the herd effect driven by optimism is correspondingly weaker. Zhang [5] finds that the trading frequency of some investors affects the overall market equilibrium in his research on behavioral economics. Lu et al. [6] find a correlation between the Baidu index and stock market returns. Wang and Jia [7] find that investor sentiment can adjust the fluctuation in commodity market prices and changes in the inflation rate, while changes in the commodity price level can reflect the impact of market sentiment and domestic inflation levels as well. Li and Li [8] reveal that the stock market is manipulated by investor sentiment. During the manipulation, investor sentiment is high and a stock price bubble forms; after the manipulation, investor sentiment drops, triggering the risk of stock prices collapsing. ese studies prove that public opinion is a factor influencing investor sentiment and that investor sentiment can have either a strong or weak influence. Investor sentiment also spreads through the market network and other communication channels across investors, one of the causes of the herd effect.
According to the research on investor sentiment in the stock market, we can see a byproduct, namely, the similarity of investor sentiments and the spread of emotion, with the media representing public opinion. Some scholars have investigated how these public opinions spread from person to person. e SIR and SIRS models have been used in related research. Ballinari and Behrendt [9] find that investor sentiment has five contagion effects, which are more obvious in a pessimistic environment. Nguyen et al. [10] show how customer sentiment expressed through social media affects investment decision-making and the corporate value of institutional investors, highlighting the importance of social media. Ji et al. [11] explore public opinion transmission based on a microblog using the SIR model. ey consider public opinion transmission similar to that of infectious diseases, with a threshold and a balance point. Ally and Zhang's [12] study concludes that a rewiring model based on linear function generates the fastest spread across networks.
us, linear function may play a pivotal role in speeding up the spreading process in stochastic modeling. Fan [13] states that the memory effect in a two-layer SIR information propagation model will enable nodes to receive messages gradually and expand the scope of information transmission, which is at the expense of time. Yin [14] finds that a public opinion crisis in a limited group will spread like an infectious disease. In the stock market, similarly, investor emotions spread through the opinions of a limited group in a fixed environment. Zhang et al. [15] use the SIR model to prove that independent communicators are the source of information, leading to a higher transmission probability and a wider transmission range, with investors spreading related emotions and public opinion through the Internet and other channels in the current stock market. In the simulation of these models, a threshold and equilibrium point in the spread of infectious diseases are found. From Fisher's [16] article, within the SIR model, the agent is able to react to a combination of expectations, bullish or bearish, and comparative advantages through the information they received when going into the market for CDS. In Rui et al.'s [17] opinion, the SIR model has high accuracy during the information diffusion process in social media. Wang and Li [18] set up a creative model between different nodes based on the SIR propagation model and proved that the improved model could quickly suppress the rumor propagation in networks. Zhou et al. [19] find that positive energy information would have a great contribution to society when information bombing takes place. Sahafizadeh and Tork Ladani [20] extend the SIR information propagation model and conclude that rumor-spreading behavior in these networks do not make a significant difference if there is rumor propagation in groups. Fibich [21] believes that a smallworlds structure has a negligible effect on the diffusion based on Bass-SIR model. Qian et al. [22] find that independent spreaders can start the information diffusion in remote regions without relying on the bridges between communities. Zhao et al.'s [23] study modifies a flow chart of the rumor-spreading process with the SIR model and finds that rumors are capable of disseminating extensively in a short time and causing social instability. Wang et al. [24] indicate that users with more followers ensure that information diffuses faster and wider. Liu et al. [25] established the SIR model and the MLM organizer to obtain MLM participants' income evolution rules; if the threshold or equilibrium point is exceeded, a series of problems occur, such as overpessimism or overoptimism in the overall public opinion environment. is is similar to the stock market fuse trigger. e application of the two models is shown in Table 1.
In studying investor sentiment and stock market leverage, Chen et al. [26] find that investor sentiment has a greater impact on the market in terms of trading ratio and trading 2 Discrete Dynamics in Nature and Society volume than leveraged trading. In studying the trading situation of the CSI 300 stock index (equity index that reflects China's A-share market performance), in addition to noticing abnormal investor sentiment regarding certain data, Yang and Zhang [27] find that the trading volume, active before and after the market triggered fuse, increased abnormally during the fuse period with trading overly volatile. Li [28] believes that stronger trading volume will have a negative feedback effect on the fuse system. In their empirical study, Yang and Jin [29] find that market panic spreads further when stocks drop; however, whether or not stocks fall, selling orders swarm out, and the imbalance of the order flow deteriorates significantly. Tang [30] believes that, in volatile markets, fusing is easier to trigger and will cause vicious panic selling in the market. Hu et al. [31] believe that investor sentiment and stock market returns are negatively correlated in the bear market state and positively correlated in the bull market state. Kim and Kim's [32] study shows that investor sentiment is positively affected by prior stock price performance and the Internet postings have predictive power for volatility and trading volume. Chen et al. [33] find that when investor sentiment rockets, it indicates that the market will experience a shock in the following month. He et al. [34] show that retail sentiment is more likely to infect each other in China's stock market. Janková [35] concludes that the sensitivity of stock indices shows a negative relationship to the volatility of the VIX stock market, especially out of phase and in crisis periods.Živkov et al. [36] examine the interrelationship between national equities and 10Y bonds in six emerging markets. Dash and Maitra [37] examine the relationship between investor sentiments and conclude that whether investors are short-or long-term, their investment activities cannot be stripped of sentiment. Based on the extant research, although scholars have studied the influence of public and online opinions on the stock market, with some evidence that the fuse and leverage bull markets are affected by extreme emotions, there is no study that looks at how these public opinions affect investor emotions. A number of studies have found that investor sentiment is influenced by public opinion, but have not explained this in terms of reasonable system dynamics. To address this gap, this study examines obvious abnormal data during fusing.
The Fuse Mechanism: Analysis of A-Shares and US Stocks
To analyze the A-share data, the number of traded shares, trading time, number of shares traded in a day, trading space ratio, the Baidu index (Baidu is the major search engine in China and the index is the corresponding keyword retrieval amount), and the trading volume of the Google index and US stocks 17 days before and after the two fuses in 2016 are examined. As shown in Figures 1 and 2, on January 4, 2016, due to the pessimistic A-share market environment, stock market trading stopped after only 2.3 hours. e total number of shares traded had reached 27.488 billion, but the number of shares traded per hour had touched 1.19513 billion. e trading frequency of the day also had peaked at 11.7183 million shares/hour, reaching the highest point ever.
e trading ratio of the day was 0.6972. When the fuse mechanism was triggered again in the A-share market on January 7, 2016, trading had lasted 0.25 hours before triggering the fuse and the number of shares traded was the lowest at 10.496 billion shares, while the transactions per hour reached 4.198 billion shares, four times more than that on January 4, 2016. e average number of transactions per hour was stable at 650-800 million shares.
In Figure 3, we analyze market emotions in the United States. In the two weeks around the circuit breakers, the trading volume of US stocks rose overall, similar to China's situation at the time of the fuse in 2015. After the first break, the speed and ratio of the second break increased.
In March, the yield of US stocks after the first circuit breaker was − 7.79%; after the second, it reached − 9.99%, and after the third it dropped to − 12.93%. Subsequently, due to the Federal Reserve's water release and the federal government's bailout behavior, overall market sentiment recovered after the fourth breaker, with the yield increasing to − 6.3%. e trading volume of the market as a whole reached 750.43 million shares after the first breaker and 911.77 million shares after the second. Moreover, after the second, investor sentiment began to change, from the initial aggressive pessimism to a more stable pessimistic market, with the trading volume dropping to 775.91 million shares and 874.09 million shares, respectively, although remaining higher than the previous average trading volume of 300-400 million shares. e search of Google's index for the circuit breaker shows the highest level of 100% at the first circuit breaker; then the search index gradually declines, achieving a small peak for every circuit breaker. ese search degrees include the communication query degree in Twitter, which reflects the frequent communication of investors during the transaction process. ese data provide some fundamental ideas for building the SIRS model for investor sentiment. [13] e memory effect Zhang et al. [15] Independent communicators as the source of information Fisher [16] Market for CDS Rui [17] e information diffusion process in social media Wang and Li [18] Rumor propagation in networks Fibich [21] Negligible effect Qian and Zhang [22] Independent spreaders and information Liu et al. [25] MLM and information Discrete Dynamics in Nature and Society
eoretical Analysis.
With the development of Internet technologies, information exchange among investors on the market and its frequency have increased. Due to varied communication modes and self-cognition of investors, changes in emotion transmission in the market are not limited by time and space. e object and path of an emotional transmission in the market can be divided into S or normal investors (susceptible population), I or emotional influence investors (infected population), and R or emotional stability investors (infected immune population). e main reason investors become calm and stable after emotional influence (hereafter referred to as calm and stable) is the emotional recovery rate; the main factors that affect the emotional recovery rate are factors such as media information and government policies. When emotional investors turn into calm and stable investors, there is a certain probability that they will be less immune to the spread of emotions in the market and turn into normal investors again. For example, due to the impact of performance, investors may be firmly optimistic about a stock, but with the influence of rumors or related emotional news, they may turn bearish on the stock and, after a period, turn optimistic again. e analysis shows that susceptible population, infected population, infected immune population, nonimmune population, and external media or government are involved in this process.
After determining the boundary of system research, through the analysis of the relationship between the elements in the boundary, we can create the system causality diagram. e information dissemination system is no longer a single linear relationship, but a complex nonlinear system, which is dynamic under the joint action of multiple factors.
We can see from Figure 4 that the causal loop diagram provides several feedback loops in the graph, as follows: "Normal investors" ⟶ "Contact impact rate" ⟶ "Investors affected by emotions" ⟶ "Calm rate" ⟶ "Calm and stable" ⟶ "Investors without immunity of emotional information" ⟶ "New normal investors." As the number of normal investors increases, the number of contact impact rate also increases, resulting in a rising number of investors affected by emotions. When the number of investors affected by emotions is large, the relevant information will be fed back to the government or the media and eventually affect the value of the calm rate. Finally, the number of calm and stable will change. After the increase of calm and stable, a large number of investors will lose immunity and become new normal investors, leading to the increase of normal investors. "Normal investors" ⟶ "Contact impact rate" ⟶ "Investors affected by emotions" ⟶ "Calm and stable" ⟶ "Normal investors." "Normal investors" ⟶ "Contact impact rate" ⟶ "Investors affected by emotions" ⟶ "Calm rate" ⟶ "Calm and stable" ⟶ "Normal investors." With the increase of normal investors, contact impact rate also increases, resulting in an increase in the number of investors affected by emotions. e increase in the number of investors affected by emotions changes the behavior of the government and influences policies, leads to calm and stable changes, and finally affects normal investors.
eoretical Model and Hypotheses.
According to the above theoretical analysis and based on the accumulation and infection of emotions, overall market sentiment is considered similar to an infectious disease. Although the SIRS model is considered a stable and accurate dynamics model, in using it with investors, the premise is that investors are irrational and the spread of investor sentiment can be compared with that of an infectious disease. In this model, we do not consider that new investors may be joining or withdrawing; instead, we consider only the change in time. We assume the market is a balanced market, meaning buying and selling are balanced and consistent, and no other factors are influencing the overall market. Supposition 1.
e overall market transactions are balanced and the number of investors in the market and the number of institutional investors, N, remain unchanged. e number of retail investors or individual investors dominates the Chinese market, but the opposite is true in the US, and the overall number is regarded as approximately the same large number. Investors are divided into normal investors, investors affected by emotions, and calm and stable. At time t, these three kinds of investors are represented as s (t), i (t), and r (t). us, where S, I, and R represent the total number of each type of investor and N all investors in the market. Supposition 2. Emotion-affected investors can communicate with other normal investors who are not affected by emotions through online and social behaviors. e daily contact impact rate on other investors is λ, while the emotional calm rate of the affected investors is μ. Discrete Dynamics in Nature and Society Hypothesis 1. All those who are calm and stable will be transformed into normal investors and investors affected by emotions. ose who remain calm and stable will still be affected by investors affected by emotions. ose who are calm and stable will be converted into investors who are affected by emotions by rate c in a future unit of time.
Hypothesis 2.
e trading frequency of normal investors is φ1: daily trading times/annual trading times. Normal investors are two-way traders (both buying and selling stocks) and the ratio is ω and 1 − ω.
e trading frequency of emotion-affected investors is φ 2 , and emotion-affected investors are one-way traders, with a trading ratio of ξ.
ree equations about s (t), i (t), and r (t) can be obtained from building simultaneous equations for Supposition 2 and Hypothesis 1: (2) At this point, the ratio of normal investors, investors with emotional influence, and those with emotional calm and stability after emotional influence can be recorded as s 0 , i 0 , and r 0 when (s 0 , i 0 , r 0 > 0). e model of infectious diseases after this change can be transformed into the SIRS model with mutual infection of investors' emotions, which can be written as As s (t) + i (t) + r (t) � N remains constant, the number of calm investors at time t can be obtained by Since S and I must be greater than or equal to 0 and the sum of the two must be less than or equal to 1, we can define the threshold θ, as the ratio of the affected investor emotional communication influence rate to the investor calm rate. When θ ≤ 1, the investor emotional calm rate is greater than the investor emotional influence rate, and the overall market situation tends to be stable. e equilibrium point (n, 0) can be obtained through the above formula. If the affected investors inflict a weak emotional contagion and the overall investors remain rational in dealing with external emotions, the overall market will eventually be classified as balanced, and the investors in the market will become normal investors. When θ > 1, the rate of investor calm is less than that of the affected investors' emotional influence rate, so the overall market sentiment will be affected by the affected investor mood and the overall market atmosphere will become unbalanced. At this point, there is an emotional balance point (μ/λ, c(N − (μ/λ))/μ + c), which means that the overall market will be influenced by emotions where affected investors have a greater influence on other investors' emotions or related public opinion. e final equilibrium state will be dominated by affected investor sentiments, which may also become the state of zealous leverage, rising stock prices, and enlarged trading volume in a mad cow market, or short selling and bearish market sentiment in a bear market. e equation for the transaction ratio ξ in two directions under the overall market transaction mode as stated in Hypothesis 2 can be obtained as follows: at is, Discrete Dynamics in Nature and Society
Model Simulation and Results
e model simulation is conducted using MATLAB 2014a. In the form of MATLAB programming, the relevant parameters, preset proportion, and data are programmed into the process sequence, and the relevant simulation results are finally run. According to noise theory and equilibrium theory, under the equilibrium trading state, the two-way trading ratio of normal investors is approximately equal; thus, ω � 1 − ω. According to the herd effect and the agglomeration effect, the trading frequency of investors affected by emotions will be slightly higher than that of normal investors, soφ_2 > φ_1. ree initial values are set at this time and the proportions of investors in these simulations affected by emotions are 20%, 50%, and 70% of the total, respectively. When θ ≤ 1, assuming the total number of transactions is 1000, the emotional influence rate of investors is λ � 0.01% and the calming rate is μ � 0.1. Applying dynamical system theory, the phase space curves at a conversion rate c � 0.05 change with time as shown in Figures 4 and 5, while other phase space curves with a constant conversion rate c � 0.5 change with time as shown in Figures 6 and 7.
We see from the phase space graphs in Figures 5 and 7 and the time-varying graphs of investor ratios in Figures 6 and 8 that initially, when c is small, the change rate of emotional investors with different initial percentages will be more stable over time, while when c is high, the change rate of emotional investors will be more dramatic. In different proportions, the final trend and emotion affect the state of the investors being removed. erefore, irrespective of the proportion of investors affected by emotions, overall investors eventually move to the equilibrium point (N, 0), that is, (1000,0), and the investors affected by emotions are less and less part of the transaction process, finally resolving to 0. According to the investor proportion curve, in the case of a small c, a large number of emotional investors recovered at the initial stage of transformation, and the speed of transforming to emotional investors was slow, which caused a decline in the number of normal investors and emotional investors at the beginning. Similarly, in the case of large c, a large number of investors with emotional recovery will be directly transformed into investors with emotional influence, so the overall change will be dramatic and obvious in the figure. With time, because the exchange rate of emotional people is far less than their recovery rate, all investors with emotional influence will become normal investors. It also shows that when θ ≤ 1, the ratio of emotion-affected investors, normal investors, and emotion-calmed investors will change dynamically. Based on the large calming rate and a low conversion rate, the overall market will shift to normal investors, and a large number of calmed investors will also convert to normal investors, leaving fewer and fewer emotion-affected investors in the market. In this way, the capital market returns to equilibrium, while the ratio of buying and selling is also determined by the investors. When the transactions approach equilibrium, the difference is the change in time and in the ratio. However, due to the different conversion rates for emotionally calm investors, the spatial phase changes are slightly different, with a larger c meaning slower conversion. e changes in the investor ratios will also change the buying and selling ratio. e simulations of the ratio of buying and selling via changes in two-way transactions are shown in Figures 9 and 10.
We can see in the figures that although the trading frequency of all parties is relatively high at the beginning, the market equilibrium is unbalanced, especially in a market where the proportion of traders affected by emotion is relatively high. As shown in Figure 9, the proportion of investors in the initial mood is 80%. In this case, the proportion of buy/sell transactions will be significantly higher than the equilibrium state of 1, even up to 2.2. is shows that the strength of one side of the multishort side is much greater than that of the other, and the situation is similar in other initial ratios. However, since the recovery rate is much Discrete Dynamics in Nature and Society higher than the rate of investor emotional exchange, the emotional impact on investors will eventually be transformed into normal investors, the trading ratio of buying and selling will tend toward 1 after a period of adjustment, and the overall market buying and selling ratio will become balanced. If we compare Figures 9 and 10, we see that, in the case of a small c, the shorter the change time of buy/sell ratio, the larger the change range. In the case of a large c, the smaller the change range, the longer the overall continuous change cycle. is shows that when the selling and buying ratio covers most normal investors, the market tends to buy and sell in balance. e difference in c changes the rate of change in the buying and selling ratio. Comparing the two results, we find that the larger the c, the slower the rate of change. erefore, similar to the normal market, when the investor's recovery rate is greater than the exchange rate, the emotion is more balanced. Even if there are so-called noisy traders or emotional investors, the market will eventually reach an equilibrium due to self-correction. When θ > 1, the overall market situation changes as shown in Figures 10-14.
Here, we assume that the total number of transactions is 1000 and the emotional influence rate is λ � 0.1%, the calming rate is μ � 0.1, and the conversion rate is c � 0.05. erefore, as calculated, θ � 10 is far greater than 1, while other parameters, such as the calming rate and conversion rate, do not change, and the phase space curve is in proportion to the investors.
Regardless of the proportion of investors affected by emotions, when c � 0.05, investors will eventually tip toward the equilibrium point of emotions (μ/λ, c(N − (μ/λ))/ μ + c), that is, (100, 300), while this equilibrium point will reach (100, 750) when c � 0.5. According to Figures 11 and 13, in the case of a small c, the number of investors affected by sentiment will continue to decline due to the influence of recovery rate, and they will turn into those with stable sentiment. After the transformation, calm and stable investors are more likely to return to being emotional investors because of the larger θ.
us, the number of emotion-affected investors will remain stable in the transaction process, as shown in the simulation. While the number of investors affected by emotion is stable at 300, there will be fewer normal investors than emotion-affected ones. e number of investors in the capital market will return to equilibrium and transactions will tend to be one-sided. When c is greater than a certain value, the influence of emotion on the investor ratio will be far greater than that of normal investors and in a stable range, as shown in the simulation; the final number of emotional investors will be stable at 750, while the sum of normal investors and calm and stable ones will be steady at 250. erefore, if θ is greater than 1, regardless of the conversion rate, the attitude of the overall market will change from neutral to positive or negative. e changes in the ratio of two-way buying and selling transactions are shown in Figures 15 and 16. e proportion of the two-way buying and selling changes from 1 :1 before in the stable market to stabilize at a value greater than 1 in the emotional market, which shows that unilateral buying and selling in the market will be higher than before. When θ is greater than 1 and c is small, different emotions affect the initial proportion of investors, and the final buying and selling proportion will be different. e higher the initial proportion, the higher the final buying and selling proportion. When c is large, the final buy/sell ratio is similar to that when c is small, but when c is large, the buy/sell ratio will have more concentration effect, and the overall market will be more extreme. However, the change in the θ, that is, the change in λ, shows that if investor sentiment changes through some means such as the Internet, the media, policy interpretation, or other public opinions, it will cause a change in market sentiment. Nowadays, there are many tools to spread investor
Discussion
Using the SIRS model, this paper analyzes the fuse mechanism ignited by the stampede event in China's A-share market in 2016. e results explain the triggering of the circuit breakers in US stocks and in China A-shares, through the process shown in Figure 17.
During normal market operations, normal investors are affected by public opinion and certain policies (such as increasing bearish sentiment in the market and media sentiment), and some sentiment-affected investors become bearish and pessimistic investors, while other calm investors are transformed into sentiment-affected investors, further affected by public opinion and the sentiment-affected investors around them. As more and more normal investors become emotional investors, the A-share market changes from normal to emotionally influenced. Pessimistic emotions affect overall investor feelings, the trading volume starts to rise, and a large number of pending sell orders appear, further stimulating investor sentiment and pushing the market quickly to the fuse state. In the fuse mechanism process, investors follow the herd effect, negative emotions spread, and the degree of infection strengthens, causing irreversible transformation.
As the situation analysis shows, the market transaction volume and the number of transactions per hour quickly grow larger and, in the second fuse situation, these numbers are greater than in the first one. is also shows that the contagion of pessimism increases and the calming rate decreases once the fusing mechanism process begins. is is why the second fuse occurs after only 0.25 hours of trading. e recent circuit breaker phenomenon in the US stock market is similar to what happened in the China A-share market.
e inertia of trading caused multiple circuit breakers. In 2008 and 2015, a leveraged mad cow market and the stock market crash, along with some internal mechanisms, caused a change in investor sentiments during the fuse mechanism process. Only after this mechanism morphed into a mad cow market, the extraordinary rate of return and media and public opinion further affected the structure of internal investors. e relationship between investor sentiment and the number of investors in the stock market under a normal healthy trading environment is shown in Figure 18. Because the overall market environment is relatively balanced, even if there are emotional investors, they will eventually shift and become normal investors, and investor mood will rebalance. Simulation results of trading market θ = 10 100 200 300 400 t 500 600 700 800 900 80% exotional investors 50% exotional investors 30% exotional investors
Conclusion and Suggestions
In this study, we use the SIRS model to simulate the change in investor sentiment and the trading ratios in an equilibrium stock market. First, we build a related path graph examining investor sentiment as the direction point. Second, through simulations, the influence of investor sentiment and an infection situation in the capital market is reconstructed. e simulation confirms that investor sentiment can be transmitted through public opinion and other communication mediums in a transmission process similar to that of infectious diseases. Finally, the influence of the change in the related parameters on the whole simulation is studied. e results are consistent with reality. Under the influence of an extreme market environment, investor sentiment will fluctuate significantly; the process will be infected to a certain extent, as in the case of a disease, and the intensity of the infection may change with changes in the market environment. In addition, the process will be influenced by different information and media, which will change the infection's intensity. In the end, there are a large number of extremely infected investors who will affect the normal, rational, and balanced state of the overall market. e degree of buying and selling is also affected by the investor mood, which creates further imbalance and changes in the market.
In the process of the SIRS simulation, the results show that, when the influence rate λ of all investor mutual communication increases or when the emotional calm rate μ decreases (that is, θ increases), investor emotions will start to spread, leading to an increase in the probability of a serious stampede or zealous overbuying in the stock market. At the same time, the trading frequency of investors, ω, and the ratio of investors in both buying and selling directions will have a certain formal impact on the direction of the stock market, with the final impact determined by the ratio of normal investors to emotional investors. When emotional investors dominate the overall market, emotions spread.
According to the SIRS model simulation results and analysis of investor sentiment changes in the equilibrium market, measures to prevent and control the occurrence and spread of specific emotions in the capital market should be considered. e balance in the overall market environment can be improved by increasing positive guidance around investor emotions-increasing the calming rate μ or reducing the emotional influence rate λ of overzealous emotional investors. At the same time, the spread of bad news and emotions must be prevented, such as stirring up the market environment though the media, Internet, or public opinion. Some administrative measures could be considered, such as cracking down on illegal stock market news groups on the Internet, controlling and eradicating corrupt self-media or marketing numbers, regulating individuals and groups who earn profits from spreading false news on the Internet, strengthening the controls on online public opinion and media public opinion, and establishing a public opinion index to ensure the healthy operation of the market.
is study has some limitations in that it still has to address how to calculate more scientifically the influence rate λ of all investors' mutual communication and how to incorporate other public opinion indicators into investor sentiment indicators. ese problems are expected to be addressed in future research.
Data Availability
e data used to support the findings of this study are available from the corresponding author upon reasonable request (e-mail: 1179188@mail.dhu.edu.cn). Discrete Dynamics in Nature and Society 11 | 8,794.8 | 2021-06-17T00:00:00.000 | [
"Economics",
"Business"
] |
On the Importance of Individualized, Non-Coplanar Beam Configurations in Mediastinal Lymphoma Radiotherapy, Optimized With Automated Planning
Background and Purpose Literature is non-conclusive regarding selection of beam configurations in radiotherapy for mediastinal lymphoma (ML) radiotherapy, and published studies are based on manual planning with its inherent limitations. In this study, coplanar and non-coplanar beam configurations were systematically compared, using a large number of automatically generated plans. Material and Methods An autoplanning workflow, including beam configuration optimization, was configured for young female ML patients. For each of 25 patients, 24 plans with different beam configurations were generated with autoplanning: 11 coplanar CP_x plans and 11 non-coplanar NCP_x plans with x = 5 to 15 IMRT beams with computer-optimized, patient-specific configurations, and the coplanar VMAT and non-coplanar Butterfly VMAT (B-VMAT) beam angle class solutions (600 plans in total). Results Autoplans compared favorably with manually generated, clinically delivered plans, ensuring that beam configuration comparisons were performed with high quality plans. There was no beam configuration approach that was best for all patients and all plan parameters. Overall there was a clear tendency towards higher plan quality with non-coplanar configurations (NCP_x≥12 and B-VMAT). NCP_x≥12 produced highly conformal plans with on average reduced high doses in lungs and patient and also a reduced heart Dmean, while B-VMAT resulted in reduced low-dose spread in lungs and left breast. Conclusions Non-coplanar beam configurations were favorable for young female mediastinal lymphoma patients, with patient-specific and plan-parameter-dependent dosimetric advantages of NCP_x≥12 and B-VMAT. Individualization of beam configuration approach, considering also the faster delivery of B-VMAT vs. NCP_x≥12, can importantly improve the treatments.
INTRODUCTION
Patients treated with a combination of multi-agent chemotherapy and radiation for Hodgkin or non-Hodgkin lymphoma are mostly young at diagnosis. About 80% of these patients achieve long-term remission. Given the age at diagnosis and the favorable long-term prognosis, therapy-related late effects including secondary malignancies (1-7) and cardiovascular disease (8)(9)(10)(11)(12) have become increasingly important. In recent years, radiotherapy (RT) for lymphoma has evolved by considerably decreasing target volumes (from extended field to involved field to involved site or involved node) and radiation doses (from 40 to 30 Gy or even 20 Gy in selected cases). These factors contribute to a decrease in the risk of late toxicity (1,6,(13)(14)(15)(16).
Applied radiotherapy techniques have also evolved, with intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) emerging as alternatives to 3D conformal RT (3D-CRT). In this context, the typical low-dose bath of VMAT plans has been pointed at as a cause of concern, as it could increase the risk of secondary cancers relative to 3D-CRT (17). The low-dose bath in the lungs has also been associated with increased risk of radiation pneumonitis (18). Choice of beam arrangement may impact plan quality. This has been investigated in detail for 'butterfly' beam arrangements that can contain non-coplanar beams. In particular, the (noncoplanar) B-VMAT approach described by Fiandra et al. (19) has shown to reduce breast Dmean and V4Gy compared to VMAT, leading to similar calculated lower risks of secondary breast cancer as 3D-CRT (but risk of lung cancer relatively higher), as well as a lower risk of cardiac toxicity, in a group of patients with largely non-bulky disease, without axillary involvement (20). Voong et al. (21) observed a reduction in heart dose (but not in breast dose) by using five to seven IMRT beams (butterfly) with eventually one non-coplanar beam, relative to 3D-CRT in patients without bilateral axillary involvement. Proton therapy has also been proposed for further reductions of late toxicity in selected lymphoma patients (17,(22)(23)(24)(25).
Current literature is non-conclusive regarding the optimal choice of RT treatment technique. The International Lymphoma Radiation Oncology Group (26) has benchmarked the best practice of 10 centers in 2013, showing that (i) the applied (photon) RT technique varied largely between institutions leading to large differences in the low-dose volumes, and (ii) in practice, difficult cases were often not planned according to the standard. The authors could not provide universal/consensus recommendations. Moreover, different authors pointed at the necessity for individualized selection of planning technique (19,21,27). This was in part attributed to the high heterogeneity in tumor location, shape, and size, as well as patient characteristics.
It is well known that manually generated treatment plans may suffer from inter-and intra-planner quality variations (28,29). Moreover, finding optimal beam configurations with trial-anderror planning is extremely complex and time-consuming. On the other hand, the large anatomical variability in lymphoma patients (target size/shape and position) is a real challenge for development of a system for automated planning, where the aim is to generate a unique workflow that works well for all patients without further interactive fine-tuning of plans by a user. The issues with manual beam angle selection put heavy constraints on the number of beam configurations that were compared in published ML planning studies, and on the total number of included plans. To the best of our knowledge, in all published studies comparing beam configurations for treatment of lymphoma patients, beam angle class solutions (e.g., B-VMAT) were investigated, or beam angles were selected by planners, i.e. there was no patient-specific computer optimization of angles. So far, only the study by Clemente et al. (30) reported on autoplanning for lymphoma patients, but this did not include optimization of beam directions. Moreover, their workflow worked for OAR sparing, while there were limitations for PTV doses.
In this work, we used a large number of automatically generated plans for comparison of radiotherapy beam configurations for young females with ML. To this purpose, an automatic workflow for IMRT/VMAT plan generation, including integrated coplanar or non-coplanar beam angle and beam profile optimization for IMRT, was implemented and validated. The system was used to systematically compare plan quality differences between 24 coplanar and non-coplanar beam configuration approaches for 25 study patients.
Patients and Clinical Protocol
The study was based on a database with contoured planning CTscans and manually generated, clinically delivered plans (CLIN) of 26 previously treated female ML patients (21 Hodgkin lymphoma and 4 B cell non-Hodgkin lymphoma). As explained in detail below, one patient (patient 0) was excluded from populationbased analyses, leaving 25 evaluable patients for such analyses (patients . Visual inspection of planning CT-scans ensured a heterogeneous selection of anatomical presentations in the patient cohort (superior/inferior mediastinum, with/without involvement of supraclavicular or axillar nodes, bulky disease, complex anatomy; see Figure B1 in Electronic Supplement B). The median patient age was 27 (range, 19-50). The PTV volumes varied from 97 to 1654 cc (median 605 cc). The prescription dose was 30 Gy in 15 fractions, excluding the sequential boost applied for some patients (3 × 2 Gy), which was not considered in this study.
Automated Plan Generation
An automated planning workflow for young ML patients was developed following the clinical planning aims described above. The core of the system was Erasmus-iCycle, an in-house developed multi-criteria optimizer featuring integrated beam angle and profile optimization (33), coupled to a Monte Carlo dose calculation engine (34). Pareto-optimal plans with clinically favorable trade-offs between all treatment requirements were realized with the optimization protocol ['wish-list' (33),] reported and explained in Electronic Supplement B. All plans for all patients were automatically generated with the same wishlist without any manual fine-tuning.
For coplanar beam angle optimization (BAO), the candidate beam set consisted of 36 equiangular beams (0°, 10°, …, 350°). For non-coplanar BAO, beam candidates, defined by all combinations of beams with 10 degree separation from each other in all directions, were verified at the linac to exclude beams with (potential) collisions between the patient/couch and the gantry, ending up with a set of 194 candidate beam directions (including the 36 coplanar beams). The applied beam energy was 6 MV.
Plan Evaluations and Comparisons
Plans were mainly evaluated and compared using PTV and OAR planning goals applied in clinical planning (above). On top of that we also reported on breast(s) V4Gy (19), PTV V107%, conformity index (CI, defined as patient V95%/PTV volume), and patient V5Gy (cc) and V20Gy (cc), where the patient is defined by the external skin structure. PTV V110%, mentioned in the clinical planning protocol, was always far below the requested 1%, and was therefore not reported. Two-sided Wilcoxon signed-rank tests were used for statistical analyses, with p-values lower than 0.05 indicating statistical significance in plan parameter differences.
Quality of Autoplans
Prior to the comparisons of beam angle configurations, several analyses were performed to ensure that the autoplans used for these comparisons were clinically acceptable and of high quality. Data is partly presented below, and partly in Electronic Supplement A.
From the 624 autoplans defined in the M&M section (24 plans for all 26 patients), 617 (98.9%) satisfied the clinical PTV coverage requirement, i.e. V95% ≥ 95%. The seven autoplans with insufficient PTV coverage were from the same patient (patient 0 in Figure B1 in Electronic Supplement B), all with relatively low numbers of coplanar beams (CP_5-11). In the IMRT plan used for treatment of this patient, sufficient PTV coverage was obtained at the cost of exceptionally high breast and heart doses: breast Dmean = 11.9/6.3 Gy left/right, and heart Dmean = 23.2 Gy (by far the highest in the group), all strongly exceeding clinical thresholds. The wish-list for autoplanning ( Table B1 in Electronic Supplement B) was developed to balance OAR vs. PTV dose, which could result in too low PTV coverage to protect OARs. For 25/26 patients, all autoplans had sufficient coverage while also avoiding constraint violations. As indicated above, for patient 0, 17/24 plans had adequate coverage, the remaining seven had not. To avoid patient group analyses with unacceptable plans, patient 0 was not in such analyses in the remainder of the paper and the Electronic supplements, leaving 600 evaluable plans. The relevance of the proposed autoplanning workflow for patient 0 is further discussed in the Discussion section.
Automatically generated plans had overall favorable plan parameters compared to clinically delivered plans, generated with manual planning ( Table 1, further analyses in Electronic Supplement A, section A1). Table 1 compares mean autoplan parameters with the corresponding mean parameters for the CLIN plans. All averaged PTV dose parameters of the autoplans were favorable compared to those of the CLIN plans. The values for mean/minimum PTV coverage went up from 98.1%/95.0% to 99.5%/97.1%. A remarkable reduction in PTV V<90% was observed, with mean/maximum values decreasing from 2.9 cc/ 19.0 cc to 0.5 cc/6.6 cc. Autoplans were also superior to CLIN in all mean OAR plan parameters. For lungs and patient, observed maximum values in the autoplans were slightly higher than those in the CLIN plans. This could be related to the improved PTV dose, but statistics might also contribute here: the more plans generated, the higher the chance on outliers (25 CLIN plans vs. 600 autoplans).
As discussed in Electronic appendix A, section A2, involved clinicians rated positively the automatically generated plans.
Comparisons of Beam Configurations
All analyzed 600 autoplans for patients 1 to 25 showed highly comparable PTV doses (standard deviations for V95%, V<90%, and V107% were 0.2%, 0.4 cc, and 0.3%). Therefore, only OAR doses are reported in this section. Figure 2 shows population average plan parameters for VMAT, B-VMAT and CP_x and NCP_x (x = 5-15) (p-values for all mutual comparisons are reported in Figure B2 in Electronic Supplement B). Below, the main observations are summarized: • Beam number x in NCP_x and CP_x: Both for CP_x and NCP_x plan quality increased with increasing x. For some parameters there was some leveling off for x≥11 beams, but not for all. Improvements obtained by adding a beam were highly statistically significant for high-dose plan parameters, i.e. lungs and patient V20Gy, heart Dmean and lungs Dmean. For medium dose parameters (lung V5Gy and breast Dmean) differences were almost always statistically significant.
Improvements in left breast V4Gy were not statistically significant. • NCP_x vs. CP_x: For equal beam numbers, x, NCP was always better than CP. Figures 4A, B show that plan improvements with NCP_15 compared to CP_15 were observed for all Figure B2 in Electronic Supplement B. patients, although the gain was clearly patient and plan parameter dependent. Differences in mean values were often considered clinically significant. • NCP_x vs. VMAT: NCP_x≥10 was better than or equal to VMAT for all OAR plan parameters. For many parameters, equality was achieved for much less beams. • NCP_x vs. B-VMAT: NCP_x was overall superior for lungs and patient V20Gy and for conformality (CI) (higher doses), and, for larger x, also for heart Dmean and lungs Dmean. Figures 4C, D show that differences are strongly patient-and parameter dependent. Possibly patients that may benefit most from NCP over B-VMAT in terms of heart or lungs doses are those with targets extending to the lower mediastinum (e.g., pt. 4, Figure B1) and/or the supraclavicular region bilaterally (pts. 3,5,11), or with asymmetrical target relative to the midline (e.g., unilateral axilla, pt. 16). Overall, B-VMAT had lower left breast Dmean and V4Gy, lungs V5Gy and patient V5Gy (lower dose parameters). However, some patients did benefit from the individualized beam choice in terms of breast dose, such as patients with axillar involvement (e.g., pts. 8 and 24) and with asymmetrical targets relative to the midline (e.g., pt. 12). • VMAT vs. B-VMAT: Lungs V20Gy, patient V20Gy and CI (higher dose parameters) were on average lowest with VMAT. B-VMAT was on average superior for all other plan parameters. This is consistent with the findings by Fiandra et al. (19). Figures 4E, F show strong patient-and plan parameter dependences of differences between VMAT and B-VMAT. • VMAT vs. CP_x: For small x, VMAT was clearly superior. For larger x, differences were dependent on plan parameter. • Breast: Non-coplanar approaches scored best. B-VMAT was overall the clear winner, followed by NCP with 12 beams or more (NCP_x≥12). Superiority of B-VMAT could be related to geometrical constraints as defined by the butterfly geometry, limiting the dose delivered to the breasts.
• Heart: Non-coplanar approaches were best. NCP_x≥10 plans had on average a lower heart Dmean than B-VMAT. The superior heart sparing with NCP_15 and B-VMAT is illustrated for patient 3 in Figure 3. • Lung: NCP_x≥13 was overall best for Dmean and V20Gy. B-VMAT was overall best for V5Gy but resulted in high V20Gy. • Low vs high dose in lungs and patient (V5Gy vs V20Gy): Compared to B-VMAT, NCP improved lung and patient V20Gy (mostly p < 0.001), at the cost of lungs and patient V5Gy (mostly p < 0.001) and breast V4Gy (only significant for right breast). This can also be observed in the dose distributions in Figure 5, where B-VMAT was less conformal around the tumor (red and yellow isodose lines), but showed less spread of low doses (light green and azure isodose lines in sagittal view), compared to CP_15 and NCP_15. • Dose conformality: On average (Figure 2), conformality was best for VMAT (lowest CI), closely followed by NCP_15 and CP_15. B-VMAT was clearly the worst. • Overall observations: In Figure 4, patients are sorted according to decreasing heart Dmean in NCP_15 plans. A clear reduction in differences among techniques is visible for patients with decreasing heart Dmean, showing a dependence on patient anatomy ( Figure B1 in Electronic Supplement B) when selecting the optimal technique. E.g. patient 25 showed smaller differences between techniques, making the less complex CP or VMAT the favorable choice.
Patient-Specific Beam Orientations
For NCP_15 and CP_15, patient group analyses were performed on selected beam directions. The population distributions of selected beam directions are shown in Figure 5. The rectangles in the left panel of Figure 5 show the coplanar and non-coplanar beam directions used for B-VMAT. Non-coplanar beams resulting from a couch angle of 90°and gantry angles between FIGURE 3 | Dose distributions for patient 3. CP_6 was added as, on average, six beams were used clinically. CP_15 was similar to VMAT and was therefore not added. The isodose lines are percentages relative to the prescribe dose, i.e., 100% = 30 Gy, with color legend as light blue, 16.7% (5 Gy as OAR constraints); azure, 20%; light green, 40%; dark green, 60%; yellow, 80%; red, 95%. 10°and 30°, entering the patient from anterior-inferior directions, were frequently present in NCP_15 plans. These entrance angles have a heart sparing/avoidance effect (see also sagittal views in Figure 3). The (couch, gantry) directions around (−70°, −30°) and around (−45°, −15°) were also often present in the NCP_15 plans. A clear prevalence of anterior beams was found in both NCP_15 and CP_15 with gantry angles between ±90°. For all patients, at least one anterior beam was present in the range −10°to 10°for CP_15 plans. Many beams in NCP_15 coincide with the anterior beam directions of B-VMAT. On the other hand, the posterior angles of B-VMAT were hardly selected in NCP_15. Apart from the clustered areas, Figure 5 shows broad distributions of selected beam directions for NCP_15 and CP_15. This is in agreement with the large inter-patient variations in selected directions, shown in electronic appendix C.
DISCUSSION
To the best of our knowledge, in all published studies comparing beam configurations for treatment of mediastinal lymphoma patients, treatment plans were generated with manual trial-anderror planning, including selection of beam angles. It is wellknown that manually generated plans may suffer from inter-and intra-planner quality variations, aggravated by the complex selection of optimal beam configurations. In this paper we present the first study using autoplanning with integrated beam angle optimization to systematically explore advantages and disadvantages of various coplanar and non-coplanar beam configuration approaches for young female mediastinal lymphoma patients. Due to this automation, plan generation became fully independent of planners, and the analyses could be based on a large number of high-quality plans. From the 624 generated autoplans (26 patients with 24 autoplans), 617 (98.9%) satisfied the clinical PTV coverage requirement. The seven autoplans with insufficient PTV coverage were from the same patient (patient 0). Because of these plans with too low coverage, patient 0 was not included in patient population analyses comparing beam configuration approaches (see also Results section). Of the remaining 600 autoplans (25 patients with 24 autoplans), the dosimetric parameters compared favorably with those of corresponding clinically delivered plans, generated with manual plan generation. This observation was in agreement with the evaluations of 100 autoplans by the two physicians involved in this study who considered these plans of high quality (Electronic Supplement A).
There was not an overall superior beam configuration approach for the patient population, i.e. being on average best for all plan parameters. Performances of the various approaches were dependent on the considered OAR and the endpoint. There were also large inter-patient variations in the gain of one technique compared to another. However, overall there was a clear tendency towards improved plans with non-coplanar configurations (B-VMAT and NCP_x≥12). NCP_x≥12 was on average better in producing highly conformal plans with reduced high doses in the lungs and patient and also a reduced heart Dmean, while B-VMAT had reduced low-dose spread, related to the confinement of beam angles to the butterfly geometry. Levis et al. (36) have recently reported on a new-generation butterfly VMAT, where the coplanar part consists of a standard full-arc VMAT (FaB-VMAT). While this approach may solve some of the issues pointed out here for the B-VMAT approach (lack of conformity in the high doses), it might not be superior to NCP_15 for selected patients. In fact, the authors report a loss in breast dosimetry with FaB-VMAT for bulky tumors, compared to B-VMAT.
A distinct disadvantage of non-coplanar treatments can be an increase in delivery time. There is also enhanced risk of collisions due to human errors in delivery. Whether the dosimetrical benefit justifies increases in delivery time and complexity remains a clinical choice that may be highly dependent on the patient at hand with her specific plan quality improvements and required number of non-coplanar beams. In most radiotherapy departments, the number of ML patients is limited, which may render non-coplanar treatment (for a selected group) more feasible. Risks of collisions can be mitigated with adequate delivery protocols, and instruction and training of RTTs.
The observed large inter-patient variations in dosimetric differences between various beam set-ups are an incentive for prospective clinical use of automated planning to generate multiple plans for each new patient, and then select the best plan, considering quality and delivery time. This could further personalize radiotherapy for ML patients. We believe that for a clinical application, not all 24 autoplans discussed in this study need to be generated for each new patient. Coplanar plan generation could be limited to VMAT and for non-coplanar treatment, B-VMAT and, e.g., NCP_9 and NCP_15 could be generated. Based on a comparison of these plans, a final plan could be selected, or NCP_x plans with other beam numbers could be generated to refine the choice.
The seven autoplans of patient 0 with insufficient PTV coverage to avoid excessive OAR dose delivery were all coplanar with relatively low numbers of beams (5)(6)(7)(8)(9)(10)(11). For the remaining five coplanar plans with 12 to 15 beams and for all 12 non-coplanar plans, adequate coverage was obtained. Many of these plans also had superior OAR dose delivery compared to the clinical plan. The automated workflow presented above, based on automated generation of a small set of treatment plans for each patient, would naturally have avoided generation of the low beam number coplanar plans with unacceptably low PTV coverage.
This study and the proposed clinical workflow, based on generation of a small set of plans for each patient, are incentives for manufactures of treatment planning systems to extend their systems with advanced options for patient-specific beam angle optimization.
The automated planning applied in this study was developed to generate plans that balance all treatment aims in line with the clinical protocol. However, effectively, the various investigated beam angle approaches did in the end result in different overall balances between the objectives, resulting from the respective opportunities and limitations in beam angle choice (above). This could be extremely useful in case of co-morbidities or specific toxicity risks. E.g. for most patients with a heart comorbidity, NCP_x≥12 plans would be favorable, while B-VMAT would often be the modality of choice if low dose in the contralateral breast is of high relevance. Variations in plan quality could be further enhanced by also generating plans with wish-lists that focus on sparing of particular OARs. In a future work we will investigate pre-defined deviations from the clinical planning protocol, each focusing maximally on a specific endpoint/OAR.
In clinical planning, beam energies of 6, 10, and 18 MV were used, often also in combinations. For autoplanning in this study only 6 MV was used to avoid prolonged optimization times due to inclusion of beam energy optimization. Nevertheless, the obtained plan quality was high.
As mentioned in the M&M section, Erasmus-iCycle was used to optimize intensity profiles, i.e. time-consuming segmentation of the 600 plans was avoided. This does not impact the main conclusions of the paper; in many previous studies, we have demonstrated the ability to segment these plans for VMAT (35,37,38). Moreover, for the technique comparisons, only differences in plans were evaluated, and interesting differences were generally large.
We used heart Dmean for restricting the risk on radiationinduced cardiac toxicity. This is in line with the study by Darby et al. on radiation-induced cardiac toxicity in breast cancer patients (39). On the other hand, there are indications that selective sparing of heart substructures could be important (31,40). To the best of our knowledge, there are no systematic studies for ML comparing planning with and without the use of heart substructures, including an evaluation of the impact on target dose and doses in the other OARs. Unfortunately, in our study these substructures were not delineated for clinical treatments, and were therefore not available for detailed analyses.
In research environments, different solutions for beam angle optimization have been proposed (33,(41)(42)(43). In our study, the solution developed by Breedveld et al. (33) has been used as shown to produce high quality results (37,44). Comparisons of different algorithms are lacking.
In conclusion, using autoplanning including computerized coplanar and non-coplanar beam configuration optimization, 24 beam configuration approaches were compared for 25 young female mediastinal lymphoma patients. The quality of the applied autoplans was superior to that of manually generated, clinically delivered plans. Non-coplanar beam configurations were overall favorable, but significant patient-specific and planparameter-dependent dosimetric advantages and disadvantages of different beam configurations were observed, suggesting a need for prospective generation of multiple plans per patient to optimally personalize radiotherapy treatment. A workflow was proposed for automated generation of a small set of plans for each patient, followed by a selection.
DATA AVAILABILITY STATEMENT
All relevant data are within the paper and its supplementary files. Access to raw-data underlying the findings in this paper will be made possible on request to corresponding author.
ETHICS STATEMENT
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
AUTHOR CONTRIBUTIONS
LR and PC: conducting main research, brainstorming, and writing manuscript. JL: data, collection, brainstorming about research, and partial data analysis. JP and BH: supervision of research, brainstorming, and writing manuscript. SB: developing code and writing manuscript. MV and CJ: data collection, data revision, and writing manuscript. All authors contributed to the article and approved the submitted version.
FUNDING
This work was in part funded by Elekta AB (Stockholm, Sweden). Erasmus MC Cancer Institute also has research collaborations with Accuray, Inc (Sunnyvale, USA) and Varian Medical Systems, Inc (Palo Alto, USA). The funder bodies were not involved in the study design, collection, analysis, interpretation of data, the writing of this article or the decision to submit it for publication. | 5,986.4 | 2021-04-15T00:00:00.000 | [
"Medicine",
"Physics"
] |
Do Conventional Comparative Cost Efficiency Analyses Adequately Value Nitrogen Loss Reduction Best Management Practices?
Increasing public awareness of nitrogen (N) loading to surface waters has resulted in increasing pressure for the adoption of Nitrogen Loss Reduction Best Management Practices (NLR BMPs). These practices are commonly evaluated by a comparative cost efficiency (CCE) that determines the ratio of implementation costs to N load reduction. This conventional methodology is essential for comparison of practices from a policy perspective. However, the CCE method does not consider potential short-term, on-farm benefits of in-field NLR BMPs. Therefore, it is the opinion of the authors that there is a need to advance the economic analysis of current NLR BMPs to better relate to the economics of adopting producers. In this paper we will discuss CCEs for several common NLR BMPs, argue that inclusion of cost-benefit analyses in CCE estimations of cover crops (CC) may alleviate producer financial concerns and increase practice adoption, and assert that there is a need for further research examining the economics of combinations of best management practices on field, watershed, and regional scales.
Introduction
Growing awareness of nutrient contamination from row crop agriculture and its link to surface water contamination, such as the Gulf of Mexico Hypoxic Zone, has increased public pressure on agriculture to reduce our environmental footprint [1][2][3]. In agriculturally dominated watersheds, producers are implementing best management practices (BMPs) to prevent excessive nutrient losses, which contribute to public safety and environmental issues, through voluntary adoption or governmental cost share initiatives. The authors recognize that many nutrients can potentially pose an environmental concern. However, this paper will focus specifically on nitrogen (N) contamination of surface waters and agriculture BMPs designed to reduce N loading. Cover cropping (CC), one of the most widely recommend N Loss Reduction BMPs (NLR BMPs), has been shown to reduce N leaching and loss from agriculture fields [4][5][6][7]. Additionally, CC potentially provides multiple soil health benefits, beyond reducing surface water N contamination, that are not included in traditional comparative cost efficiency (CCE) calculations, the ratio of implementation costs to nitrogen load reduction. However, these demonstrated environmental and soil health benefits have not translated into producer adoption, thus only 2% of cropland acreage in the U.S. has adopted CC and only 4% of U.S. farmers have used cover crops [8]. This is evidence that there is a disconnect between cover crop research and producer adoption; which is exacerbated by the dearth of information that allows producers to understand the short-term value of CC. It has been established that BMPs are effective at reducing N loads; however, to drive voluntary adoption by producers, we must advance our understanding of how to determine the CCE of individual and systems of NLR BMPs. With this in mind, we intend to discuss CCEs for several common NLR BMPs, argue that inclusion of cost-benefit analyses in CCE estimations of CC may help producers value the sshortterm benefits of cc, and assert that there is a need for further research examining the economics of combinations of best management practices on field, watershed, and regional scales.
Fundamental Differences in Edge of Field and In-Field NLR BMPs
NLR BMPs can be categorized into two groups Edge-of-Field (EOF) or In-Field (IF) practices. Constructed wetlands, denitrifying bioreactors, two-stage ditches, and controlled
Agricultural Research & Technology: Open Access Journal
drainage are examples of EOF practices. In general, these practices reduce N loading by slowing water flow, creating anaerobic conditions, and facilitating denitrification. In contrast, IF BMPs are adaptive management strategies that reduce the actual losses of N from agriculture fields before it reaches the tile drainage system. Some examples of IF BMPs are cover cropping (CC) and 4R N management (right rate, right time, right source, and right place). In the literature, economic assessments of NLR BMPs have been conducted using CCE methodologies; estimates for several common BMPs are reported here (Table 1). In policy discussions, CCE values allow for the ranking of N load reduction efficiencies across multiple BMPs. Examples of this can be found in the Nutrient Loss Reduction Strategies of multiple Midwestern states [9,10]. [18] These CCE analyses only examine implementation costs to producers but do not consider any potential benefits. This is appropriate for most EOF practices that do not provide on-farm benefits beyond N loss reduction. However, an essential feature of IF practices are their potential to provide short-term benefits to producers such as erosion control, improved nutrient cycling, and potentially increased N utilization by cash crops. Therefore, it is the authors' opinion that the conventional CCE calculations underestimate the efficacy of CC and do not relate well to the economics of a producer.
Inclusion of Cost-Benefit Analysis in Comparison Cost Efficiencies
As indicated above, CC CCE analysis focuses only on the ratio of implementation costs (establishment, termination, and yield effects) to the N load reduction, regardless of potential on-farm benefits. In the literature, CCE values of US$2.22kgN -1 year -1 [11], US$6.24/kgN/year [13], US$7.08 kgN -1 year -1 [9], US$7.95 kgN -1 year -1 [12], and [18] have been reported that represent an average CCE of US$6.64 kgN -1 year -1 . The lack of methodology to include short-term, on-farm benefits in conventional CCE analyses result in less relevancy to producers because they do not reflect the impact of adoption on producers' profitability.
To address this concern, the literature has suggested a method for CCE that uses corrected implementation costs by performing cost-benefit analyses that include short-term, onfarm benefits (erosion control and improved N cycling); termed the Conservation Economic Efficiency Cost (CEEC) [13]. In comparison to the CCE values, they reported an average CC CEEC of US$1.09 kgN -1 ha -1 [13]. This model allows for a more accurate estimate of the cost of CC as an NLR BMP; which may alleviate producer financial concerns that are a barrier to BMP adoption. While this model is an initial step in improving our analysis of CC cost efficiency, more research is needed to provide valuation methods for other short-term benefits of CC across different regions and management systems.
Systematic Best Management Practice Approach to N Reduction Goals on Field, Watershed, and Regional Scales
In addition to improving our economic analysis of individual NLR BMPs, there is a need to understand how to use CEECs to asses a system of conservation practices (multiple NLR BMPs over a field, watershed, or regional scale). This has not been investigated in the literature. However, we do know from the literature that there is potential for combined BMPs to provide increased environmental benefits. For example, individual N load reductions of 11.8% and 35.6% have been observed for cover crops and controlled drainage, respectively. In comparison, the combination of these two practices resulted in an N load reduction of 47.5%, which was greater than either of the individual treatments alone [14].
The systematic coupling of cover cropping and 4R management (N application timing) has been investigated across a two-year corn-soybean rotation. Researchers reported an average N load increase of 17% by adjusting N application timing from fall to spring; a 3.7% decrease and 41.7% increase in the corn and soybean years, respectively [7]. Spring N application resulting in increased N load in the soybean year has been reported elsewhere in the literature [15][16][17][18][19][20]. However, when N application timing and CC are coupled, a 15.1% decrease in the corn year and 32.8% reduction in the soybean year was observed Agricultural Research & Technology: Open Access Journal in drainage system N loading [7]. A similar trend was observed where cover crops resulted in an average reduction in N load of 53.3% across both phases of a corn/soybean rotation [20]. These studies demonstrate that the combination of multiple BMPs has the potential to increase N load reductions across field, watershed, and regional scales.
However, there is a dearth of knowledge regarding how to value these conservation systems in a way that economically producers. Therefore, there is further need for research that examines the economics of conservation systems that include multiple BMPs. | 1,859.2 | 2017-12-13T00:00:00.000 | [
"Engineering"
] |
Crystal structure of 1,1′-{(1E,1′E)-[4,4′-(9H-fluorene-9,9-diyl)bis(4,1-phenylene)]bis(azanylylidene)bis(methanylylidene)}bis(naphthalen-2-ol) dichlorobenzene monosolvate
A novel bis(anil) compound was synthesized and structurally characterized. Theoretical calculations suggested that the new bis(hydroxyimine) will exhibit histone deacetylase SIRT2, histone deacetylase class III and histone deacetylase SIRT1 activities, and will act as inhibitor to aspulvinone dimethylallyltransferase, dehydro-l-gulonate decarboxylase and glutathione thiolesterase.
The bis(anil) molecule of the title compound, C 47 H 32 N 2 O 2 ÁC 6 H 4 Cl 2 , contains two anil fragments in the enol-enol form, exhibiting intramolecular O-HÁ Á ÁN hydrogen bonds. The two hydroxynaphthalene ring systems are approximately parallel to each other with a dihedral angle of 4.67 (8) between them, and each ring system makes a large dihedral angle [55.11 (11) and 48.50 (10) ] with the adjacent benzene ring. In the crystal, the bis(anil) molecules form an inversion dimer by a pair of weak C-HÁ Á ÁO interactions. The dimers arrange in a onedimensional column along the b axis via another C-HÁ Á ÁO interaction and a stacking interaction between the hydroxynaphthalene ring system with a centroid-centroid distance of 3.6562 (16) Å . The solvent 1,2-dichlorobenzene molecules are located between the dimers and bind neighbouring columns by weak C-HÁ Á ÁCl interactions. Theoretical prediction of potential biological activities was performed, which suggested that the title anil compound can exhibit histone deacetylase SIRT2, histone deacetylase class III and histone deacetylase SIRT1 activities, and will act as inhibitor to aspulvinone dimethylallyltransferase, dehydro-l-gulonate decarboxylase and glutathione thiolesterase.
Structural commentary
In the title bis(anil) molecule, two hydroxynaphthalene ring systems are approximately parallel to each other with a dihedral angle of 4.67 (8) between them (Fig. 1). The 9Hfluorene ring system (C1-C13) forms large dihedral angles of 78.80 (10) and 61.41 (9) , respectively, with the benzene C14-C19 and C31-C36 rings. Each hydroxynaphthalene ring system also forms a large dihedral angle with the adjacent benzene ring [55.11 (11) between the C21-C30 ring system and the C14-C19 ring, and 48.50 (10) between the C38-C47 ring system and the C31-C36 ring]. Both fragments of the hydroxynaphthalene Schiff bases are in the enol form, forming intramolecular O-HÁ Á ÁN hydrogen bonds (Table 1).
Supramolecular features
In the crystal, the bis(anil) molecules form an inversion dimer via a pair of weak C-HÁ Á ÁO interactions (C3-H3Á Á ÁO1 i ; symmetry code given in Table 1). The dimers form a 1D column along the b axis through a C-HÁ Á ÁO (C35-H35Á Á ÁO1 ii ; Table 1) and astacking interaction between the hydroxyl naphthalene ring systems with a centroidcentroid distance of 3.6562 (16) Å (Cg1Á Á ÁCg2 ii ; Cg1 and Cg2 are the centroids of C21-C30 and C38-C47 ring systems, respectively). Dichlorobenzene molecules are located between the dimers and bind the neighboring columns by weak C-HÁ Á ÁCl interactions (Table 1 and Fig. 2).
Refinement
Crystal data, details of data collection, and results of structure refinement are summarized in Table 2. All C-bound H atoms were placed in calculated positions (C-H = 0.95 Å ) and refined using a riding model [U iso (H) = 1.2U eq (C)], while the H atoms of the OH groups were localized in a difference-Fourier map and refined with U iso (H) = 1.5U eq (O).
Computing details
Data collection: APEX3 (Bruker, 2018); cell refinement: SAINT (Bruker, 2016); data reduction: SAINT (Bruker, 2016); program(s) used to solve structure: SHELXT (Sheldrick, 2015a); program(s) used to refine structure: SHELXL2018/3 (Sheldrick, 2015b); molecular graphics: OLEX2 (Dolomanov et al., 2009); software used to prepare material for publication: OLEX2 (Dolomanov et al., 2009). Special details Geometry. All esds (except the esd in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell esds are taken into account individually in the estimation of esds in distances, angles and torsion angles; correlations between esds in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell esds is used for estimating esds involving l.s. planes. | 919.8 | 2020-09-04T00:00:00.000 | [
"Chemistry"
] |
Fuzzy clustering-based feature extraction method for mental task classification
A brain computer interface (BCI) is a communication system by which a person can send messages or requests for basic necessities without using peripheral nerves and muscles. Response to mental task-based BCI is one of the privileged areas of investigation. Electroencephalography (EEG) signals are used to represent the brain activities in the BCI domain. For any mental task classification model, the performance of the learning model depends on the extraction of features from EEG signal. In literature, wavelet transform and empirical mode decomposition are two popular feature extraction methods used to analyze a signal having non-linear and non-stationary property. By adopting the virtue of both techniques, a theoretical adaptive filter-based method to decompose non-linear and non-stationary signal has been proposed known as empirical wavelet transform (EWT) in recent past. EWT does not work well for the signals having overlapped in frequency and time domain and failed to provide good features for further classification. In this work, Fuzzy c-means algorithm is utilized along with EWT to handle this problem. It has been observed from the experimental results that EWT along with fuzzy clustering outperforms in comparison to EWT for the EEG-based response to mental task problem. Further, in case of mental task classification, the ratio of samples to features is very small. To handle the problem of small ratio of samples to features, in this paper, we have also utilized three well-known multivariate feature selection methods viz. Bhattacharyya distance (BD), ratio of scatter matrices (SR), and linear regression (LR). The results of experiment demonstrate that the performance of mental task classification has improved considerably by aforesaid methods. Ranking method and Friedman’s statistical test are also performed to rank and compare different combinations of feature extraction methods and feature selection methods which endorse the efficacy of the proposed approach.
Introduction
Brain computer interface (BCI) is a communication system by which a person can send messages or request for basic necessities via his or her brain signals without using peripheral nerves and muscles [1]. It is one of the areas which has contributed to the development of neuron-based techniques to provide solutions for disease prediction, communication, and control [2][3][4]. Three acquisition modalities have been discussed in the literature [5,6], viz, invasive (microelectrode array), semi-invasive [electrocorticography (ECoG)], and non-invasive (EEG) for capturing signals corresponding to brain activities. EEG is a widely preferred technique to capture brain activity for BCI system [7,4] as its ability to record brain signals in a nonsurgical manner leading to low cost. Response to mental tasks is one of the BCI systems [8], which is found to be more pragmatic for locomotive patients. This system is based on the assumption that different mental activities lead to typical, distinguishable and task-specific patterns of EEG signal. The success of this BCI system depends on the classification accuracy of brain signals. Extraction of relevant and distinct features from EEG signal associated with different mental tasks is necessary to develop an efficient classification model.
In the literature, a number of analytic approaches have been employed by the BCI community for better representation of EEG signal such as band power [9], amplitude values of EEG signals [10], power spectral density (PSD) [11][12][13], autoregressive (AR), and adaptive autoregressive (AAR) parameters [14]. However, the primary issue with AR modeling is that the accuracy of the spectral estimate is highly dependent on the selected model order. An insufficient model order tends to blur the spectrum, whereas an overly large order may create artificial peaks in the spectrum. In fact, the frequency spectrum of the EEG signal is observed to vary over time, indicating that the EEG signal is a non-stationary signal. As a consequence, such a feature extraction method should be chosen which can model the non-stationary effect in the signal for better representation.
The wavelet transform (WT) [15,16] is an effective technique that can be used to analyze both time and frequency contents of the signal. However, WT uses some fixed basis mother wavelets, independent of the processed signal, which makes it non-adaptive. Another successful method for feature extraction, empirical mode decomposition (EMD) [17], represents the non-linear and non-stationary signal in terms of modes that correspond to the underlying signal. EMD is a data-driven approach that does not use a fixed set of basis functions, but is self-adaptive according to the processed signal. It decomposes a signal into finite, well-defined, low-frequency and high-frequency components known as intrinsic mode functions (IMFs) or modes.
Due to multi-channel nature of EEG data, the dimensionality of extracted features is very large but the available number of samples per class is usually small in such application. Hence, it suffers from curse-of-dimensionality problem [18], which also leads peaking phenomena in the phase of designing classifier [19]. To overcome this problem, dimensionality reduction using feature selection is suggested in the literature [20].
In this paper, a two-phase approach has been used to determine a reduced set of relevant and non-redundant features to solve the above-mentioned issues. In the first phase, features in terms of eight different parameters are extracted from the decomposed EEG signal using empirical wavelet transform (EWT) or the proposed FEWT. In the second phase, the multivariate filter feature selection approach is employed to select a set of relevant and nonredundant features. To investigate the performance of different combinations of the two feature extraction and multivariate feature selection methods, experiments are performed on a publicly available EEG data [4].
The rest of the paper is organized as follows: The EWT have been discussed briefly in Sect. 2. The proposed feature extraction technique for mental task classification and Fuzzy c-means (FCM) algorithm have been discussed in Sect. 3. Multivariate feature selection methods are included in Sect. 4. Description of experimental setup and results are discussed in Sect. 5. Finally, Sect. 6 includes conclusions and future work.
Empirical wavelet transform
The nature of the EEG is non-linear and non-stationary [21]. To deal this nature of the EEG signal, in recent past, a fixed basis function based on the WT [22,23] and an adaptive filter-based EMD methods have been applied [24,25]. The major concern of EMD method is the lack of mathematical theory [26]. Combining properties of these two methods, recently Gilles [26] has proposed a new adaptive basis transform called EWT to extract the mode of amplitude-modulated-frequency-modulated (AM-FM) signal. The method to build a family of adaptive (empirical) wavelets of the signal to be processed is the same as the formation of a set of bandpass filters in Fourier spectrum. The idea to achieve the adaptability is the dependency of filter's supports on the location of the information in the spectrum of the signal [26].
Let x denote the frequency, which belongs to a segmented of N continuous segment, Fourier support, o; p ½ . Further x n denotes the limit between each segment (x 0 ¼ 0 and x N ¼ pÞ and K n ¼ x nÀ1 ; x n ½ denotes a segment such that S N n¼1 K n ¼ 0; p ½ . It is assumed that the each segment having a transition phase, which is centered around x n , of width 2s n in research work of Gilles [26].
The empirical wavelet can be define as a bandpass filter for each K n by utilizing idea of both Littlewood-Paley amd Meyer's wavelets [15]. The empirical scaling function can be defined aŝ and the empirical wavelets can be given as follows: The EWT of signal f(t), W e f ðn; tÞ, is defined the same as classic WT [26]. The detail coefficient is defined as where hi denotes inner product. Similarly, the approximation coefficient is defined as The reconstruction of the signal f(t) can be obtained as 3 Proposed feature extraction approach Although EWT has been proposed by Gilles [26] for building adaptive wavelet to represent the signal to be processed, the author, however, has mentioned that the proposed method might fail to decompose properly when the input signal, like EEG signal (due to nature of multiple channels), compose of more than one chirp which overlaps in both time and frequency domain. As the performance of the classification model is highly dependent on the extracted features, features obtained using EWT from EEG signals are not suitable to produce an efficient classification model due to the problem mentioned above. Keeping this point into consideration, a very familiar fuzzy clustering method has been employed in this paper. The proposed method is able to deal with the problem of EWT by reassigning the extracted features from EWT to the more similar type of segment using FCM algorithm. And this final processed signal will be able to produce good classification model. The brief description of FCM is given in the next subsection.
Fuzzy C-means
Fuzzy C-means algorithm [27] is a clustering technique based on fuzzy set theory. Basically, fuzzy set theory is developed by Zadeh [28] and is viewed in different prospects by some researchers such as Nguyen [29] and Tiwari and Srivastava [30]. The core idea of FCM is that one object can belong in more than one cluster on the basis of fuzzy membership value ( 0; 1 ½ ) rather than on the ground of crisp value (f0; 1g) as in k-means algorithm. The nonlinear optimization problem for FCM can be given as where X ¼ ðx 1 ; x 2 ; :::; x p Þ are p objects, c ð1\c\pÞ is number of the clusters, and m ð1\m\1Þ is fuzzifier constant. u ij is the degree value of membership of jth object to belong in ith cluster. U ¼ ðu ij Þ c  p and V are fuzzy partition and centroid matrix, respectively. Further, d 2 ðx j ; v i Þ denotes the Euclidean distance between jth object and ith centroid. The updation of the fuzzy membership value of the given object after k iteration is given as Similarly, the centroid point can be updated as
Feature coding
The proposed approach of extracting features from EEG signal is carried out in three steps. In the first step, the decomposition of the signal into desire number of support (segment) through the EWT is made. FCM clustering algorithm is employed in the second step of the proposed approach to avoid overlapping segments obtained from thê Fuzzy clustering-based feature extraction method 137 first step. To represent each segment more compactly, eight statistical or uncertainty parameters (root mean square, Lempel-Ziv complexity measure [31], shannon entropy, central frequency, maximum frequency, variance, skewness, and kurtosis) have been calculated in the third or final step of the proposed technique as every signal or data have the distinguishable property in terms of a set of statistical parameters associated with the signal or data. It may be possible that the two signals have same value associated with one or more statistical parameter. In this work, these eight parameters are selected empirically.
Feature selection
The feature vector from each channel obtained encloses all the features constructed with the above statistical parameters. The final feature vector obtained after concatenation of features from six channels is large, i.e., each feature vector contains 144 parameters (3 EWT segments  8 parameters  6 channels). Hence, feature selection is carried out to exclude noisy, irrelevant, and redundant features. Two major categories of feature selection methods are the filter method and the wrapper method. In filter method, the relevance of features is determined on the basis of inherent properties such as distance, consistency, and correlation without involving any classifier. Hence, it may not choose the most relevant feature set for the learning algorithm. Alternatively, the wrapper method [32] has a tendency to find relevant features subset, better suited to a given learning algorithm. However, wrapper method is computationally more costly since the classifier needs to be learned for each feature subset separately. On the other hand, filter feature selection method is computationally less intensive and bias free. Filter methods have a simple structure with straightforward search strategy like forward selection, backward selection, or the combination of both.
Filter approach is further classified into two categories [20] as univariate (ranking) and multivariate (feature subset). A scoring function is used by feature ranking method for measuring the relevance of each feature individually. These methods are simple to compute. The research works have used univariate filter method in the BCI field [33][34][35][36]. It is noted that the reduced relevant features obtained from using univariate methods significantly improves the classification accuracy. But it ignores the correlation among the features. Hence, the selected feature subset may have high redundancy among features and may not provide high discriminatory capacity.
In the wrapper approach [37,38], the seminal work of Keirn and Aunon [4] has used a combination of forward sequential feature selection and an exhaustive search to obtain a subset of relevant and non-redundant features for the mental task classification. However, wrapper approach is not suitable for high-dimensional data as it is computationally expensive.
On the other hand, efficient time multivariate filter method finds features which are relevant to the class and non-redundant among themselves. Thus, it overcomes the limitations of both univariate and wrapper approaches. Thus, we have preferred most widely used multivariate filter feature selection methods namely Bhattacharya distance measure [39], ratio of scatter matrices [40], and LR [41] for selecting relevant and non-redundant features. Brief discussion of these techniques is given below.
Bhattacharyya distance
In the literature, BD is used as a dissimilarity measure between two probability distributions. It is a special case of Chernoff distance which measures the overlap between samples of two different probability distributions. For multivariate normal probability distribution, Chernoff distance measure is given as [42] where l i and R i are mean vector and covariance matrix for class C i ; respectively(i=1, 2). When b = 1 2 then this distance is known as BD [39], which is given as However, it suffers from the problem of singularity when the determinant of covariance for a given class takes zero value.
Ratio of scatter matrices
In the literature, a simple measure based on the scatteredness of features in high-dimensional space is recommended, which is a ratio of the trace of the SR. The measure selects those relevant features which are well clustered around their class mean and the means of two different classes of data are well separated. The SR, withinclass scatter matrices, S w , and between class SR, S b , are defined as where l i , P i ; and l 0 are mean vector of ith class data, prior probability of i th class data, and global mean of data samples, respectively. From the definitions of SR, the criterion value, which is to be maximized, is given as J SR takes high value when the inter-cluster distance is large and intra-cluster distance is small. The main advantage of this criterion is that it is independent of external parameters and assumptions of any probability density function. The measure J SR also has the advantage of being invariant under linear transformation.
Linear regression
Regression analysis is another well-established statistical method suggested in the literature that investigates the causal effect of independent variable upon dependent variable. The class label is used as the dependent variable (target), and the features that affect this objective are sought. The LR method attempts to find the linear relationship between a response variable and two or more explanatory variables by substituting a linear equation to the observed data. Since many features can affect the class, therefore multiple regression model is more appropriate. A multiple regression model with k independent variables f 1 ; f 2 ; . . .; f k and a target variable y is given by Park et al. [41]: where b 0 ; b 1 ; . . .; b k are constants estimated by class label y and observed values of X. The sum of squared error (SSE) which is sum of the squared residuals is given by where y i and y p i are target and predicated values, respectively. The smaller value of SSE shows better regression model. The total sum of squares (SSTO) is given by where y is the average value of y i ; i ¼ 1; 2; . . .; n. The criterion value J LR is given as The value of J LR lies between 0 and 1. It considers a linear relationship between data and class labels. In a linear regression analysis, the feature for which the value of J LR is higher is selected.
Dataset
For our experiment, we have used publicly available data for mental task classification (Keirn and Aunon, 1990). The original EEG dataset consists of recordings from seven subjects, but we utilized data from all subjects except subject-4 due to some missing information. Each subject performed five different mental tasks: the baseline task (B)(no task); the mental letter composing task (L); the nontrivial mathematical task (M); the visualizing counting of numbers written on a blackboard task (C); and the geometric figure rotation task (R). Each of the recording session consists of five trials of each of the five mental tasks. EEG recording was taken from six electrodes placed on the scalp at C3, C4, P3, P4, O1, and O2 referencing to two electrodes placed at electrically linked mastoid, A1, and A2, as shown in Fig. 1. Each trial is of 10 s duration recorded with a sampling frequency of 250 Hz, which resulted into 2500 samples points per trial. More detail about the data can be found in the work of Keirn and Aunon [4]. 1
Construction of feature vector and classification
For feature construction, the data are decomposed into halfsecond segments as some researchers have done [13], yielding 20 segments per trial for each subject. Features are extracted from each signal using three steps: in the first step signal is decomposed from three number of supports using EWT, in the second step, FCM clustering algorithm (with fuzzifier constant m = 2) is employed to form non-
Results
The proposed FEWT methods is compared with the EWT method through the experimental setup as described above for binary mental task classification. Figures 3, 4, 5, 6, 7 and 8 show the classification accuracy taken overall average for the 10 binary combination of the five mental tasks for subject-1, subject-2, subject-3, subject-5, subject-6, and subject-7, respectively. From these figures, the following observations can be noted: • The performance of classification model has significantly improved after incorporating the fuzzy clustering method along with the EWT compare to EWT alone irrespective of with or without feature selection method for all the binary combination mental tasks for all mentioned subjects. • The classification accuracy of a given classifier has drastically increased with the application of feature selection methods (BD, LR, and SR) as compared to without feature selection (WFS) irrespective of feature extraction methods.
Ranking of various combinations of feature selection methods with proposed FEWT method
We have applied a robust ranking approach utilized by Gupta et al. [43], to study the relative performances of various combinations of feature selection methods with the proposed feature extraction method, i.e., FEWT with respect to EWT. To rank various combinations, the basis of percentage gain in classification accuracy with respect to maximum classification accuracy obtained using EWT feature extraction method with combination of various feature selection methods has been chosen. A mathematical description of this ranking procedure is as follows: If i = 0, then no feature selection is used; otherwise ith feature selection is used. a i FEWT t denotes classification accuracy of ith feature selection method in combination with FEWT feature extraction method for tth task combination.
. . .; a n EWT t g ð21Þ Then the average (over all task combination) percentage gain in accuracy for sth technique is given by Finally, the rank r s of each ith combination is assigned in such a way that r a r b if p a ! p b : ð24Þ Figure 9 shows four combinations of the feature selection and FEWT extraction methods compared against each other on the basis of percentage gain in accuracy. From Fig. 9, we can see the combination LR with FEWT acquires highest percentage classification accuracy gain with respect to the best combination of EWT with or without feature selection.
Friedman statistical test
In order to determine the significant difference in various combinations of feature selection and EWT or FEWT statistically, we have applied a two-way [44] and nonparametric statistical test known as Friedman test [45]. Our null hypothesis H 0 was that there is no difference in performance among all combinations of feature extraction and feature selection. The alternative hypothesis H 1 was that there are differences among combinations. The H 0 was rejected at significant level p = 0.05. From Table 1, it can be noted that the combination of FEWT feature extraction and LR feature selection is the winner among all combinations of feature extraction and feature selection. 6 Conclusion and future work A theoretical adaptive transform, EWT, has been proposed in recent past to analyze signal on its content basis. EWT would fail to handle the signal which is overlapped in time and frequency domain as the case with the EEG signals from multiple channels. This work has suggested employment of FCM followed by EWT for better representation of EEG signal for further classification of mental task. It can be concluded from experimental results that the proposed approach outperforms as compared with the original EWT technique. It is also noted that the features from multiple channels generate a large size of the feature vector, but the available number of samples is small. Under such a situation, the performance of the learning model degrades in terms of classification accuracy and learning time. To overcome this limitation, this paper has investigated and compared three well-known multivariate filter methods to determine a minimal subset of relevant and non-redundant features. Experimental findings endorse that the employment of feature selection enhances the performance of learning model. Ranking mechanism and Friedman statistical test have also been performed for the strengthening the experimental findings.
As the employment of FCM enhances the performance of EWT technique for the mental task classification, it would be better to explore some other fuzzy-based clustering which has been explored in image segmentation [46]. It will also be interesting to explore whether the FEWT would work in other type of BCI such as motor imagery and multi-mental task classification. | 5,238.8 | 2016-09-03T00:00:00.000 | [
"Computer Science"
] |
Kauffman Knot Invariant from SO(N) or Sp(N) Chern-Simons theory and the Potts Model
The expectation value of Wilson loop operators in three-dimensional SO(N) Chern-Simons gauge theory gives a known knot invariant: the Kauffman polynomial. Here this result is derived, at the first order, via a simple variational method. With the same procedure the skein relation for Sp(N) are also obtained. Jones polynomial arises as special cases: Sp(2), SO(-2) and SL(2,R). These results are confirmed and extended up to the second order, by means of perturbation theory, which moreover let us establish a duality relation between SO(+/-N) and Sp(-/+N) invariants. A correspondence between the firsts orders in perturbation theory of SO(-2), Sp(2) or SU(2) Chern-Simons quantum holonomies and the partition function of the Q=4 Potts Model is built.
Introduction
In a milestone work [1] Witten realised that the expectation value of a Wilson loop, computed with a three-dimensional Chern-Simons action measure, was a knot invariant. This is due to the fact that the Wilson loops are observables for Chern-Simons theories, having therefore diffeomorphism invariant expectation values. More in general this feature stems from the property that such a quantum field theory manifests general covariance, which in turn is a consequence of the metric independent structure: any physical quantity computed in this framework is a topological invariant. In practise, for SU(N) Chern-Simons field theory, the resulting knot invariant is the HOM-FLY polynomial, which in particular specialises into the Jones polynomial in the case of SU (2). These outcomes were derived through both conformal field theory (as in [1]) or perturbative quantum field theory (see for instance [2]). But a simpler heuristic derivation was proposed in [3] and [4] (for reviews see also [5] and [6]), at least up to the first order in the inverse coupling constant of the theory. It is based on a variational approach: it studies the behaviour in the expectation value of the Wilson loop when one performs small geometric deformation. In the conformal field theory scheme similar results have been found in [9], [10] and [11] for several other groups: SO(N), Sp(N), SU(n|m) and OSp(m|2n). It would be interesting to test whether the variational procedure, which is expressly realised to reproduce the HOMFLY polynomial from SU(N) gauge theory, may apply also in different contexts. In section 3 are studied the SO(N), SL(N,R) and Sp(N) cases. The results obtained are moreover analysed in section 4 by means of the more rigorous standard perturbation theory and extended up to the subsequent order, the second. Finally in section 5 we try to interpret these results from the statistical mechanic point of view, trying to connect the holonomies's first order expansion to one of the more famous lattice statistical system: the Q-Potts Model 1 ; which at the moment remains unsolved apart for its easiest personification when Q=2, the Ising model. We start (section 2) introducing the notation and summarising the fundamental properties of Chern-Simons theory and Kauffman polynomial that are useful in derivation of skein relations.
Chern-Simons theory and Kauffman polynomial
Let's consider a Chern-Simons theory for a gauge field connection one-form A = A a µ (x)T a dx µ valued in a generic semi-simple Lie algebra g, with action: where M 3 is a compact three-dimensional manifold whose coordinates are labelled by Greek letters (µ, ν, ρ, ...); while the internal group indices will be denoted by Latin letters (a, b, c, ...). The Lie algebra is spanned by generators T a , T b , . . . , obeying the commutation relations [T a , T b ] = if abc T c and normalised as follows: Tr (T a T b ) = 1 2 δ ab . This action got several notable properties: (i) it changes by 2πkn g under a gauge trans- is a complete gauge invariant quantity that will play the rôle of the path integral measure. (ii) The curvature of the gauge field at the point x ∈ M 3 is given by: δA a λ (x) We will interested in computing expectation values W (γ) for Wilson loops W γ [A] along closed paths γ, that in fact may be thought as a knot on M 3 , defined as follows: In this notation γ represents both common knots γ(t) : I → M 3 and n-component knots, also called knot-links, γ(t 1 , t 2 , . . . , t n ) = (γ 1 (t 1 ), γ 2 (t 2 ), . . . , γ n (t n )) : In the latter case W (γ) = W (γ 1 )W (γ 2 ) . . . W (γ n ) . Without losing generality one may think the compact interval I i = [0, 1] and γ(0) = γ(1) in order to have closed paths. The fact that Chern-Simons action is independent of the particular choice of a metric on the three-manifold suggests that the Wilson loop expectation values may capture some invariant or topological characteristic of the system's geometry: either that of the knots or of the manifold itself.
Now we introduce the Kauffman polynomial which is a regular isotopy invariant of knots and, if suitably normalised, becomes an ambient isotopy invariant. Actually we will deal with its equivalent Dubrovnik version. To each knot-link there is associated a finite Laurent polynomial D K = D K (a, z) of two variables with integer coefficients, such that if K 1 ∼ K 2 , then D K 1 = D K 2 (while the reverse is not necessary true). The polynomial can be constructed, as in [7] or [14], by the following rules 2 (see figure 1 for notation, stands for the unknotted circle): In i) and ii) the small diagrams {L k } k=±,0,∞ stand for larger link diagrams that differ only as indicated by the smaller ones. Starting from any knot-links K and using recursively Reidemeister moves and the skein relations (2.1) at each diagram's crossing, one can obtain uniquely its regular isotopy invariant D K (a, z). It is possible to normalise D K by a factor that take into account also eventual contributions of twists. For this purpose is used the writhe w(K) = p ǫ(p), where p runs over all crossing in K and ǫ(L ± ) = ±1 is the sign of the type of crossing. So finally we are able to define a genuine ambient isotopy invariant: the normalised Kauffman-Dubrovnik polynomial 3 : 2 Sometimes, as in [5], can be found a different normalisation for D K : iii) ′ D( ) = 1 + a−a −1 z ; in our notation 1 + a−a −1 z will result the 's normalisation. 3 While D k is defined for unoriented knots, to calculate the writhe in Y K one needs to define an orientation. At the end the orientation does not affect the result for knots but it affects the invariant polynomial in case of proper links. Thus Y K is said to be defined for semi-oriented knot-links.
Variational derivation of the skein relation
It's well known (see [5] for details) that the Wilson loops satisfy the following differential equations: where δ γx is the variation corresponding to an infinitesimal deformation of the loop γ in the neighbourhoods of a point x. It's then possible to compute this variation for an expectation value of a Wilson line along a knotted path γ and to use it to obtain a formula for the switching identity W (L + ) − W (L − ) as 4 follows: Note that studying the formal properties of this integral three assumptions are always used: i) the limits of differentiation and integration commute: ii) integrating by parts it's possible to discard the boundary term; iii) the existence of an appropriate functional measure on this moduli space. From the previous equation one is able to write the switching identity W (L + ) − W (L − ) . The quantity ǫ µνλ dx µ dx ν dy λ is dimensionless and, whether properly normalised, can be thought -1,0 or 1. Then (3.1) has a standard interpretation (we follow [5]) if one calls the operator, which in some sense enclose the loop's small deformation, C = a T a T a : Graphically C W (γ) is represented in the l.h.s of figure's 2 equation. Note that the sign is a convention which may be reversed exchangingL + ↔L − . Till this point the whole model has been valid for a generic gauge group G. In particular was successfully used in the literature to reproduce the Witten's result for HOMFLY polynomials from the SU(N) group. Instead in this paper we specialise our study to two particular algebras which have simple Fierz identities: the ones associated to the orthogonal group SO(N) and the symplectic group Sp(N), for a generic N.
SO(N) and Kauffman polynomial
Here the features of the algebra under consideration begin to play an important rôle. In fact to evaluate the operator C one needs to use the Fierz identity; in particular we have for SO(N) in the fundamental representation (in [8] Fierz identities are presented for almost all semi-simple Lie groups): This expression in the Baxter's abstract tensor notation (see [5]) reads as the diagrammatic relation drawn in figure 2. Hence, substituting in (3.2) the Fierz identity we have: To get in touch with the known results, one has to take the limit of k >> 1, namely the analogous of the first order perturbation expansion, thus the previous expression reduces to: These are exactly the skein relations that are found by means of the original Witten's method based on conformal field theory arguments (see [9] and [10]), once q := exp(− πi 2k ) is defined 5 . So is not difficult to see that D K = W (K) / W ( ) fulfils the definition of Dubrovnik polynomial (normalised as in [7] and [14] 6 ), with z = (q − q −1 ). The only thing that remains to fix is the value of a such that W (L + ) = a W (L 0 ) . This can be done considering the closure of the path in the skein relation (3.3), as shown in the figure below: uses a different killing metric normalisation for the Lie algebra generators; in order to compare with it one has to define a slightly different q := exp(− πi k ). [9] uses an inverse definition of the writhe and of the crossing diagrams, so what they call α = a −1 and their q is our q −1 . 6 Clearly if write-normalised by a factor a −w(K) (where w(L ± ) = ±1) D K (a, z) became an ambient isotopy invariant. Solutions for (3.4) are a = q N −1 or a = −q 1−N , which however gives rise at an equivalent D K polynomials 7 . The factor N comes from the diagrammatic tensor interpretation of the unknot circle, that is δ i i = N. It's worth to observe that these Dubrovnik-Kauffman polynomials D K (a = −q 1−N , z = q − q −1 ) do not run out all the original ones, but constitute a smaller subset depending on the fact that a assumes only discrete values depending on N (which generally is thought in N). The consistency check up to the 1/k order proposed in [3] is intrinsically satisfied using the quadratic Casimir operator of so(N) : ½(N −1)/4. Moreover the variational first order approach, can be generalised to subsequents orders with the same arguments presented in [12] and [13] for SU(N) groups. But we will prefer explore the subsequent order of the expansion (see section 4) through a different method based on the standard quantum field theory of perturbations. Finally note that the original Jones polynomial a −w(K) D K (ā = −q 3 ,z = q − q −1 ) is not included in this sub-class of Kauffman polynomial, unless choosing unconventionally N = −2 (once the polynomial is analytic continued for all integers values of N). Negative dimensions group theory is a powerful technique, first introduced by Penrose, to calculate algebraic invariants (see [15], [16] and [17]). In that case it relates the Casimirs and Young tableau of SO(-2) to the ones of Sp (2). Some speculation about this possibility are done in the next subsection, while a more rigorous treatment is done on section 4. One may be puzzled not to come across Jones polynomial for the SO(3) group which is locally isomorphic to SU (2) where this relation holds. The reason for this mismatch is based on the fact that in this context, more than groups similarities, the Lie algebras invariants play a key rôle. Actually, as also for SL(2,R) generators the same SU(2) Fierz identity for the C operator holds, Jones polynomial can be recovered with the same procedure of [3]. It is not surprising because sl(2, R) is the real split form of the A 1 algebra (known also as the sl(2, C) algebra by an abuse of notation), while su(2) is the real compact one.
Sp(N) skein relations and Jones Polynomial for Sp(2)
In this section we consider the Symplectic group Sp(N), for even N; apart from the relation with SO(-N) it is an interesting case for itself. Its Fierz identity (see again [8]) for the generators in the fundamental representation is: As the fundamental representation of this group is pseudoreal, unlike SO(N), the orientation should not be neglected as it is shown in figure 4. 8 Plugging this Fierz identity for Sp(N) into eq. (3.2) one fits the same skein relation of [10] which is obtained by a totally different approach. 9 There is a particular case where those computation are easily 10 carried on till get its knot invariant: N=2, just the one suspected to be related to the Jones polynomial, as we saw in section 3.1. In fact for Sp (2) the antisymmetric matrix f ij may be straight interpreted, without losing generality, as the Levi-Civita tensor ǫ ij and its inverse f ij = −ǫ ij . Hence the algebraic (eq. (3.5)) and diagrammatic ( fig. 5) representations of the C operator appear respectively as follows: In [10] another approach (which has the advantage that leaves the Wilson lines unoriented) is also presented, but not preferred as requires the specific choice of a "time" direction, which breaks the topological invariance because it is no longer possible to freely rotate the Wilson lines. 9 We refer to the one drawn in figure 17 of [10] 10 Even without the oriented diagram notation which is unnecessary heavy for Sp (2). One might work, in a complete compatible way, with the arrowed diagrams but paying the price of redefining appropriate oriented Reidemeister moves and oriented Kauffman state bracket as described in cap 6 0 of [5] and [10]. Now substituting the Fierz identity for Sp(2) into (3.2) we have: Where q is the same of section 3.1, while it is definedz : . Again we are considering at this stage k >> 1, i.e these equalities hold up to first order in the inverse coupling constant of the theory 11 . Closing the path in the previous skein relation as done for SO(N) we will be able to get a constraint that reduces one variable dependence: As before the second root aq = −x −2 leads exactly to the same results. So at large values of k for a normalised (to be a) expectation value P (K) = a −w(K) W (K) / W ( ) the original one variable Jones polynomial follows directly: So actually the estimation suggested by negative dimension group theory seems to work reliably. As it's here proved the Sp(2) Chern-Simons expectation values of a Wilson knot-link gives the Jones polynomial invariant for the same link.
Perturbative Quantum Field approach
It's worth analysing the heuristic previous section's results in a more carefully way. We opt for the standard quantum field theory of perturbation as developed for the SU(N) group in [2], which maybe got the disadvantage of being less qualitative from a geometrical point of view but got the benefit of being more analytically quantitative. The fact of being, in principle, a different approach also adds some guaranties on the consistency of the check.
Not least this method let us push the expansion, in the inverse coupling constant k, to one order further. Note that for this procedure a framing of the knot is needed; in this paper is always used the vertical frame defined as the one that got linking number equal to the writhe of the knot ϕ f (K) = w(K). Framed knots can be thought as bands, so in this picture a writhe for a knot represents a band twist. As Kauffman polynomial are regular isotopy invariant, twisted bands are the most suitable objects to be described with. The expectation value for the Wilson loop computed along a vertical framed, m-component (C 1 , C 2 , ..., C m ) knotlink K in a Chern-Simons theory for a generic semisimple group G is given at second order by: where T stands for the fundamental representation, χ(C k , C ℓ ) is the Gauss linking number between the two curves C k and C ℓ , c 2 (T ) j i = a (T a ) k i (T a ) j k is the quadratic Casimir in the fundamental representation, c v the quadratic Casimir in the adjoint representation, ρ(C) is an ambient isotopy invariant characteristic of the knot under consideration. ρ(C) represents the second coefficient of the Alexander-Conway polynomial and is related with Arf-and Casson-invariants; in practise it is not easy to compute apart from small knots. Our aim is now, with the help of (4.1), to find the value of a appearing in (2.1-ii) in terms of its expansion in (2π/k). The effect of changing the frame of a link component C i by ∆ϕ f (C i ) = ∆w(C i ) = ±1 (or adding a twist in the band picture) reflects in the entire Wilson loop expectantion value by: So we find a ±1 = α (±) , taking into account D K = W (K) / W ( ) as previously defined on section 3.1. While (2.1-iii) is trivially satisfied, is possible to extract the value of z from (2.1-i), for instance applying it to the Hopf-link HL. That is closing the skein relation (2.1)-i as shown above one gets the following expression: written in term of relatively easy objects that can be computed directly from (4.1), using as in [2], ρ( ) = −1/12: An alternative way to find z is imposing the equality between Kauffman D K (a, z) polynomials obtained from the skein relations (2.1) with the expansion of W (K) / W ( ) coming from (4.1). But this could be done just for the few simple knots where ρ(K) can be calculated, so may be here regarded as a self-consistency check.
That's the point where the algebraic properties of the gauge groups come out; for the groups we are interested in, they are summarised in the following table: hence, from (4.2), we get respectively for SO(N) and Sp(N) the following values for a while for both orthogonal and symplectic groups the value found for z is: These results are consistent with the ones found in the previous section by means of the variational method both for SO(N) and Sp (2). Moreover (4.4) and (4.5) extend the series expansion in 2π/k up the second order. The fact that z has not the quadratic contribution could be guessed from the very beginning because of the peculiar property of the Chern-Simons Lagrangian: the inversion symmetry. This implies that a change in the sign of the coupling constant k is compensated by the inversion of the orientating of the manifold. When a knot K is embedded in M 3 the change of orientation of the manifold corresponds to a π rotation or its mirror imageK, so W (K) k = W (K) −k . On the other hand from skein relations (2.1) is easy to see that D K (a, z) = DK(a −1 , −z); combining it with the inversion symmetry one gets some restriction on the k-functional dependence of the variables a and z: Furthermore observe that in the groups table there is a value of N for whom two lines match: for N = 2 all the values for Sp (2) and SU(2) coincide. So the expectation value of a Wilson loop along a generic knot K agrees in both cases. This special point is the one where the HOMFLY and Kauffman polynomials overlap to give the Jones polynomial. This is exactly the same result we have found with the variational approach in section 3.2, but now extended to the second order. Another interesting feature that can be read from the table is the analogy between the quantities of SO(-N) and Sp(N), in particular one can note in (4.1) as Wilson loop expectation values of a SO(-N)-Chern-Simons theory for a knot K correspond to the ones of its mirror imageK for a Sp(N)-CS theory: For odd-multicomponent knots-links the correspondence hold up to a global sign, where m is the number of components. The mirror imageK is needed in order to have opposite the chirality in framing that compensate a sign in the odd terms expansion. In terms of Dubrovnik polynomial (4.7) became D K | SO(−N ) = DK| Sp(N ) , at least for proper knots. So again what suggested by the variational approach can be coherently recovered and extended by the perturbative one. The ambient isotopic Dubrovnik-Kauffman polynomial is obtained, as usual, from the regular one thanks to a writhe normalisation: a −w(K) D K . Another remarkable feature of the variational and perturbative approaches is that allow us to generalise at once the present treatment also to the non-compact groups such as SO(m,n), which are the more interesting ones for describe general relativity in 2+1 dimensions by the Chern-Simons theory. Although from a classical point of view locally isomorphic groups represent the same gauge theory, we have seen as at the quantum level expectation values even of simple knots differ. Thus in case one wants to take advance of the Chern-Simons formalism to study quantum properties of gravity he will have to consider the issue of which is the "good" group election. Actually the values of the fundamental quantities as the Casimirs c 2 , c v , the group's dimension dimG and the fundamental representation dimension dim(T ) of SO(m,n) are not different from the SO(N) ones, whenever m + n = N. Hence the topological quantity W (K) (4.1) is not affected by the signature change of the Cartan-Killing metric 12 . Up the author knowledge invariant knot polynomials for SO(m,n) groups are not found by means of any other methods; could be interesting to verify it with the help of more rigorous mathematical tools such as quantum groups. Moreover the SO(m,n) Chern-Simons theory got a richer structure than the SU(N) one. In fact others non-equivalent Chern-Simons Lagrangian can be built from their Chern's characteristic classes apart from the Pontryagin; for instance is possible to use also the Euler or Nieh-Yan topological invariants (see [23] for a review). The expectation values of knotted Wilson loops weighted by this Chern-Simons density remains a topological invariant, but possibly of different kind.
Correspondence with the Potts Model
In this section we try to build a bridge between the previous results about first order expectation values of quantum holonomies along a knotted path and some statistical system such as the Potts Model. Of course it is clear that an exact equality can not hold since the Chern-Simons observables are knot invariants while the Potts partition functions are not. Nevertheless something can be said, but at the price of renouncing to the knot topological invariance. First let us remind some fundamental facts about the Potts model that we will be used afterwords. It is found in [19] that the partition function of the Q-Potts Model of a statistical lattice represented by a graph G is the Potts state bracket {K(G)} of the knot-link K dual to the graph G. That's because this state bracket expansion coincides exactly with the dichromatic polynomial, or the Tutte polynomial, of the graph G. We remember the definition of the Potts state bracket: To be more precise for any alternating knot or link K it is possible to construct a graph lattice G(K) checkerboard shading its planar diagram and assigning to each shadow a vertex and for each crossing a bound, as shown in figure 7. Vice-versa for any two dimensional graph G one can associate its dual knot K(G). Note that this is a one-toone 13 mapping between planar graphs and alternate knots and note that any knot got its alternate representative, that is can be drawn as an alternate planar diagram. Thus the Q-Potts partition function for a certain statistical lattice P G (Q, t) is given by the dichromatic polynomial Z G (Q, v) of its graph G (whenever v = e J/kt − 1) or by the Potts state bracket of its associated knot {K} as follows: where V is the number of vertex of the graph (i.e. the number of the lattice's sites or rather the number of shaded region of the knot), t is the temperature, k B the Boltzmann's constant, σ n is one of the Q possible states of the nth vertex and J = ±1 according to the ferromagnetic or anti-ferromagnetic case.
SO(-2) & Sp(2) Holonomies and Q=4 Potts Model
First we consider a special case, that is when the Kauffman polynomial reduces to the Kauffman state bracket [K](q) (or to the Jones Polynomial whether writhe normalised), which occurs for the SO(-2), Sp(2) 14 or SU(2) Chern-Simons theory, as we have seen in section 3.2 and 4: . 13 When the white region is left outside. 14 Correlated by (4.7) Then we perform a shift in the q-variable: [K] q c(K) [K], where c(K) is the number of crossing in the knot K diagram. This shift is the point where regular isotopical invariance of the Kauffman polynomial is broken. So focusing just on the first order approximation, one gets the following bracket q c(K) The analogy with the Potts state bracket (5.1) is now evident: Let now concentrate on the SO(-2) case, such that once the q-shift is reabsorbed one recovers knot invariance, so Q = N 2 = 4. Using (5.2) and (5.4) it is easy to see that −2 V ≪ K ≫ represents the Q=4 Potts partition function for the lattice graph associated to the knot K. In terms of the first order Wilson loops expansion it reads: An example may get things clearer: consider a 2x2 lattice graph G of figure 7 and its dual knot-link K(G) (with V = 4). From skein relations (5.1) (or equally from the deletioncontraction rule that define the dichromatic polynomial Z G (4, v)) one gets the Q=4 Potts partition function for the graph G(K): while from the skein relations (5.3) one get the expectation value of the holonomy along the knot K(G), up to O(1/k 2 ): It's easy to see that (5.5) is fullfilled imposing v = −2 + i2π/k in (5.6). So the first order expectation value of the Wilson loop along a knotted path K for a SO(-2)/Sp(2) Chern Simons theory can be extracted from the partition function of a Q=4 Potts model of a lattice graph G(K) dual to the knot K, and vice-versa. This correspondence works well for any two dimensional lattice graph, not just for regular ones like the sample presented in figure 7. Even thought W (K) 1 st −order and P G (K) are not exactly the same they share some features, for instance their zeroes. So W (K) 1 st 's zeros can be interpreted as the Fisher zeros of the statistical lattice associated to K, which encode many important physical properties of the system. Also the critical temperature t c (when the statistical system acquires conformal invariance) of the Potts model can be easily read: In the knot formalism it occurs where W (K) = W (K) , that is when 1 − iπ/k = 1, so in the limit k → ∞, It's worth remark at this point that the SO(-2)/Sp(2) group (or even SU(2)) gives rise to the Jones polynomial too. This polynomial (at the non-perturbative level) is known to describe the partition function of a particular kind of Potts model with two Boltzmann factor, which is of different kind respect to the standard Potts model considered here (see [14] and [20]). The correspondence holds also at the following orders of the perturbative expansion, basically in the same way it works at the first order. For instance one can obtain W (K) 2 sd −order from the Q=4 Potts partition function identifying v and Q as follows: The simple relation between Q and N is now spoiled and moreover this fact makes the analogy between the two models purely formal because choosing a particular Q imply fixing at the same time the temperature to a constant value.
Sp(N) holonomies and Q-Potts Model
We would like to do something similar to previous subsection, but for generic N. Now that procedure is less direct because the Kauffman polynomial can not be cast in a simple form such as the state bracket [K]. To connect the two theories, in particular to give the Q-Potts partition function a similar structure to the Dubrovnik polynomial one, we can introduce a new bracket polynomial K (Q, v) defined by the following skein relations: The Q-Potts partition function, in character of the dichromatic polynomial Z G(K) (Q, v), has the following form in term of K : Even in this form K is not a isotopical invariant of the knots, as W (K) because the two coefficients in (5.7-ii) are not reciprocal and (5.7-iv) does not satisfy the second Reidemeister move. However there is a point where both (5.7-ii, iv) becomes invariant, that is for v = (−Q ± Q 2 − 4Q)/2. This value of the temperature is exactly the one that relates the Potts model to the Khovanov homology [21]. Comparing the K (Q, v) bracket with the first order expectation value of the holonomy W (K) 1 st−ord one has to impose Q = N 2 and v = N(1 − iπ/k). So the K (N, k) invariance occurs, in terms of the Chern-Simons coupling constant k and the fundamental representation dimension N, just for N = −2, i.e the previous case we analysed in section 5.1. Therefore for a generic Q = N 2 = 4 is not possible to pass from the Potts partition function to the first order Wilson loop expectation value as we did for the SO(N)/Sp (2) case. What can be done at most is define a generic bracket polynomial which include both P G and W (K) and specialises to one or the another for some values of its variables. This is done in appendix A.
Comments and Conclusions
In this paper is analysed the relation between expectation values of Wilson loop in threedimensional SO(N) Chern-Simons field theory and an isotopic invariant of knots, the Kauffman polynomial. This equivalence is achieved in a simple intuitive knot variational approach borrowed by [3]'s and [5]'s scheme which was elaborated for obtaining the Witten result: HOMFLY polynomial from the SU(N) gauge group. The key point of this construction is based on the existence of a Fierz identity for the infinitesimal generators of the group in certain representations. With precisely the same interpretation of the expectation value's path variations and no other extra assumptions respect to the original work, here we exactly get the conformal field theory known result for SO(N): Kauffman polynomial. It suggests that the easy variational knot approach, expressly built for SU(N), works well also for different gauge group theories as SO(N). So its heuristic geometrical assumptions are endorsed. Convinced of all that and encouraged by negative dimension group theory suggestion we explored also the Sp(N) group getting the exact skein relation. In particular in the simple Sp(2) case we are able to find its isotopic invariant: the original Jones Polynomial. Furthermore to enforce and extend those results, an independent procedure has been performed, the quantum field theory method can not only full recover the variational approach but also: improve its outcomes precision of an order of magnitude, extend to groups with semi-definite Cartan-Killing metric as well Sp(N) with N = 2 and most of all prove, up to O(1/k 3 ), the correspondence between isotopy invariant polynomials from SO(N) and Sp(-N) Chern-Simons theories.
To sum up, these procedures give for SU(N), SO(N)/Sp(N) and Sp(2) the famous HOM-FLY, Kauffman and Jones polynomials respectively. Hence they may be used for other groups or representations to find new link invariants, both based on skein relations or not. This could give new insights into knots theory, which is still looking for a link invariant able to distinguish conclusively knots isotopic equivalence. From a physical point of view it's interesting to note that not only the Jones polynomial, at non perturbative level, correspond to the partition function of the Potts model with two Boltzmann weight factors, but also its first order perturbation expansion, in the realm of the Chern-Simons theory, gives the standard Q=4 Potts partition function (and viceversa). Moreover the connection between the quantum holonomies of Sp(2) Chern-Simons theory and the Q = 4 Potts partition function opens the possibility to relate apparently disconnected physical systems. This is actually the main motivation of the author. In fact, since [22], it is well known that Sp (2)×Sp (2) Chern-Simons theory describes 2+1 gravity with negative cosmological constant. Furthermore the first terms in the Kauffman bracket expansion give states of 3+1 quantum gravity in the loop representation [6]. This feature of knot theory may represent the tip of an iceberg that links discrete statistical models with the expectation value of holonomies of gravitational theories. Work in this direction is in progress. | 8,002.4 | 2010-05-21T00:00:00.000 | [
"Physics",
"Mathematics"
] |
The Modified Quasi-geostrophic Barotropic Models Based on Unsteady Topography
water equations based on barotropic fluids. In the paper, to discuss the irregular topography with different magnitudes, especially considering the condition of the vast terrain, some modified quasi-geostrophic barotropic models were obtained. The unsteady terrain is more suitable to describe the motion of the fluid state of the earth because of the change of global climate and environment, so the modified models are more rational potential vorticity equations. If we do not consider the influence of topography and other factors, the models degenerate to the general quasi-geostrophic barotropic equations in the previous studies.
Introduction
In recent decades, many scholars have conducted extensive research on large-scale atmospheric and oceanic dynamics. Among them, the topographic effect has great influences on the dynamic mechanism of potential vorticity equations (Pedlosky, 1974;Collings, 1980;Pedlosky, 1980;Treguier, 1989). In the atmosphere, the topographic height plays an importance role in atmospheric cyclone and anticyclone changes. Luo (1990) pointed out the topographic effect was also an important factor of forming atmospheric blocking, and the global atmospheric circulation would even get affected by the topographic effect. Lu (1987) discussed the effects of topographic height and shaped on Rossby wave activation and the influences of topographic south-north and east-west slopes on waveform and energy propagation. Chen (1998) and Jiang (2000) derived the quasi-geostrophic potential vorticity equation with large-scale topography, friction, and heating under the barotropic model, and the large-scale effects of Qinghai-Tibet Plateau on atmosphere were discussed. In addition, the oceanic topography is very complex, such as The North West Shelf of Australia (Holloway, 1997), Portugal Shelf Sea Area (Sherwin, 2002), etc. The relationship between topography and ocean circulation was pointed out in the literature (Roslee et al., 2017b;Kamsani, 2017;La, 1990;Marshall, 1995;Alvarez, 1994;Sou, 1996). Cessi (1986) discussed the important role of topography in ocean circulation. Holloway (1992) introduced the interaction of eddies with seafloor topography and argued that ocean circulations would be significant interaction between turbulent vortices and topography rather than gravity wave drag. Then, general expressions for the eddy-topographic force, eddy viscosity, and stochastic backscatter, as well as a residual Jacobian term, are derived for barotropic flow over mean topography by Frederiksen (1999). All the above researches, Actually the topography height also changes with time in the earth fluid. Changes in topography can lead to tsunamis, floods and natural disasters (Abdullah, 2017;Elfithri, 2017;Rahim, 2017). Yang (2011Yang ( ,2012 and Song (2012Song ( , 2013 discussed topography changes over time discussed topography changes over time t, the influence of the nonlinear long wave amplitude and waveform. Da (2013) discussed the shallow water equation forms when underlying surface slowly changes with time and obtained the vorticity equation with an underlying surface. This model considered the actual circumstances that the topography changes with time-space (Erfen et al., 2017;Roslee, 2017a) . In this paper, we discuss different magnitude topography under spatial-temporal variable and obtain some modified models, which have important effects on the future discussion about the waveform changes of nonlinear long waves. This paper is organized as follows: In Section 2, starting from the rotating shallow water equation set, we give the bottom topography which is not smooth boundary and simplify the equation set with unsteady topography. Section 3 is given scale analysis and perturbation methods; then we obtain new equation set by topographic conditions with different magnitudes. Later we derive the equations with different orders and get some modified models in Section 4. Finally, make the relevant conclusions in Section 5.
Basic equations
Assuming the static equilibrium condition, the fluid can be regarded as barotropic, incompressible, frictionless state. The upper boundary height is ( , , ) h x y t , and , free surface pressure intensity is constant. The basic equation set can be written as (Pedlosky, 1987) Eqs. (5) and (6) indicate that the horizontal pressure gradient under the barotropic model can be expressed by the gradient of the free surface gravitational potential (Taharin & Roslee, 2017).
We assume that the preliminary horizontal velocity is independent of the z , Eqs. (1a) and (1b) assuming that the free surface height H is constant when the fluid is static (Pedlosky, 1987).
Scale analysis, perturbation methods
Making dimensionless analysis on the Eqs.
Derive the barotropic models
Making classified discussion on the magnitude of λ.
λ ~ 1 magnitude
The scale of the ϕ B is consistent with the small amplitude function ϕ, most of the topography parameters following with this situation in the real world. λ R is the Rossby radius of deformation, Eq. (25) is a modified model.
λ ~ 10 magnitude
Large-scale atmospheric motion of large topography is suitable for such conditions (L ~ 10 6 m, U ~ 10m/s, f 0 ~ 10 -4 s -1 ). For example, a case study of Tibetan Plateau topography, the height is 3 4 ×10 3 m approximately which is suit for the situation. (Liu, 1991) Eq. (40) is a classics dynamics model used by the large-scale atmospheric and oceanic motions.
Conclusion
(a). Under the unsteady topography, some new modified models (25), (35) are derived. These models meet the general rule that the topography changes with time in reality. When the topography has nothing to do with the time, Eq. (35) degenerates into a dynamics model (36), Eq. (25) degenerates into Eq. (38) which is a quasi-geostrophic barotropic model under the spatial topography.
(b). The modified models under the topographic effect with different magnitudes are presented, we can see the pattern under the condition of large terrain, which is the improvement of the model. After the above-modified models are given, we will also derive the mathematical model for Rossby wave in the further study, and the further exploration of the large-scale factual influences of topography on atmosphere and ocean are required. | 1,341.2 | 2017-01-01T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Nonbilayer Phospholipid Arrangements Are Toll-Like Receptor-2/6 and TLR-4 Agonists and Trigger Inflammation in a Mouse Model Resembling Human Lupus
Systemic lupus erythematosus is characterized by dysregulated activation of T and B cells and autoantibodies to nuclear antigens and, in some cases, lipid antigens. Liposomes with nonbilayer phospholipid arrangements induce a disease resembling human lupus in mice, including IgM and IgG antibodies against nonbilayer phospholipid arrangements. As the effect of these liposomes on the innate immune response is unknown and innate immune system activation is necessary for efficient antibody formation, we evaluated the effect of these liposomes on Toll-like receptor (TLR) signaling, cytokine production, proinflammatory gene expression, and T, NKT, dendritic, and B cells. Liposomes induce TLR-4- and, to a lesser extent, TLR-2/TLR-6-dependent signaling in TLR-expressing human embryonic kidney (HEK) cells and bone marrow-derived macrophages. Mice with the lupus-like disease had increased serum concentrations of proinflammatory cytokines, C3a and C5a; they also had more TLR-4-expressing splenocytes, a higher expression of genes associated with TRIF-dependent TLR-4-signaling and complement activation, and a lower expression of apoptosis-related genes, compared to healthy mice. The percentage of NKT and the percentage and activation of dendritic and B2 cells were also increased. Thus, TLR-4 and TLR-2/TLR-6 activation by nonbilayer phospholipid arrangements triggers an inflammatory response that could contribute to autoantibody production and the generation of a lupus-like disease in mice.
Introduction
Systemic lupus erythematosus (SLE) is a systemic autoimmune disease characterized by a loss of tolerance to nuclear antigens and by dysregulated activation of T and B cells. Polyclonal activation of B cells leads to the production of large quantities of autoreactive antibodies and the formation of immune complexes, which causes tissue damage. In some SLE patients, it has been shown that bone marrow mesenchymal stem cells exhibit impaired capacities for proliferation, differentiation, migration [1], and immune modulation [2]. 2 Journal of Immunology Research Animal models of SLE include lupus-prone mice, which spontaneously develop lupus, and normal mice that develop lupus after injection of lymphocytes from lupus-prone mice, immunization with prototypical lupus antigens (DNA-and RNA-protein complexes), or injection of pristane (2,6,10,14tetramethylpentadecane) [3,7]. The most commonly used lupus-prone mice are the F 1 hybrids of New Zealand black (NZB) and NZ white (NZB/NZW F 1 ) mice, the Murphy-Roths large/lymphoproliferative locus (MLR/lpr) mice, and the recombinant C57BL/6 female and SB/Le male strain/Ylinked autoimmune accelerator (BXSB/Yaa) mice [3,8,9]. Our group has also developed a mouse model of autoimmune disease resembling human lupus that can be induced in normal mice [10]. In this model, the disease is triggered by liposomes with nonbilayer phospholipid arrangements. Liposomes are model membranes made of cylindrical phospholipids, such as phosphatidylcholine, and H II -preferring (conical shaped) phospholipids, such as phosphatidic acid, phosphatidylserine, or cardiolipin [11]. Conical phospholipids can form molecular associations distinct to lipid bilayers, known as nonbilayer phospholipid arrangements, in the presence of inducers such as Mn 2+ [12,13] or the drugs chlorpromazine and procainamide, which can trigger DILE in humans [10]. Nonbilayer phospholipid arrangements are formed by an inverted micelle (made of conical phospholipids with their polar heads towards the center of the micelle, where the inducer is also located) inserted into and distorting the shape of the phospholipid bilayer (Figure 1(a)). We demonstrated that liposomes with nonbilayer phospholipid arrangements induced by Mn 2+ , chlorpromazine, or procainamide cause an autoimmune disease resembling human lupus in mice. A similar disease is produced by treating mice directly with Mn 2+ , chlorpromazine, or procainamide (which induce nonbilayer phospholipid arrangements on mouse cells) or by injecting the monoclonal antibody H308 (which binds specifically to nonbilayer phospholipid arrangements and stabilizes these arrangements on mouse cells) [10,14].
IgM and IgG antibodies against nonbilayer phospholipid arrangements are found in the sera of mice with the autoimmune disease resembling human lupus, and also in the sera of patients with lupus [10,15]. Usually, the efficient production of IgG antibodies requires an activation of the innate immune response. Therefore we hypothesized that nonbilayer phospholipid arrangements could be Tolllike receptor-(TLR-) 4/MD-2 agonists, as their molecular structure is similar to that of the lipid A from bacterial lipopolysaccharide (LPS). Lipid A is formed by a -1,6-D-glucosamine disaccharide with two (negatively charged) phosphates and six saturated acyl chains in an asymmetric distribution (four chains are bound to the nonreducing and two to the reducing glucosamine). Hexaacylated asymmetric lipid A molecules have a conical molecular shape, because the cross section of the hydrophobic region is larger than that of the hydrophilic region (Figure 1(b)). Hexaacylated symmetric lipid A (with three acyl chains bound to the nonreducing and three to the reducing glucosamine) and penta-and tetraacylated lipid A molecules have a cylindrical molecular shape, and they do not have biological activity [16,17].
The intrinsic conformation of lipid A is not altered when saccharide groups are added, as in LPS. The LPS molecules form multimeric aggregates in water: if the lipid A is cylindrical, they form a smooth bilayer arrangement, but conical lipid A molecules form a nonbilayer or hexagonal (H II ) arrangement [17]. LPS-binding protein (LBP) is a plasma protein that facilitates the transfer of LPS molecules from these hexagonal (H II ) arrangements to CD14, and membrane-bound CD14 delivers LPS to TLR-4/MD2 [18]. Since the conical molecular shape of lipid A is a requirement for TLR-4/MD-2 triggering [16][17][18][19], we hypothesized that liposomes with nonbilayer phospholipid arrangements, but not smooth liposomes (with phospholipids in a bilayer arrangement), could trigger TLR-4/MD-2 signaling.
In this study, we investigated whether liposomes with nonbilayer phospholipid arrangements are TLR-4/MD-2 agonists, because the activation of this innate immune receptor leads to the production of proinflammatory cytokines. We also looked for proinflammatory cytokines in the sera of mice with the autoimmune disease triggered by liposomes with nonbilayer phospholipid arrangements, and we determined the gene expression profile in the spleens of these mice, focusing on the expression of proinflammatory genes. In addition, we determined the relative percentage and activation of T, NKT, dendritic, and B cells in the spleen of mice with the disease. This study contributes to the understanding of the pathological and genetic features of a novel mouse model of human lupus.
Preparation and Characterization of Liposomes.
Egg-yolk L--phosphatidic acid, bovine brain L--phosphatidylserine, egg-yolk L--phosphatidylcholine, chlorpromazine, procainamide, and chloroquine were purchased from Sigma (St. Louis, MO, USA). Liposomes contained the cylindrical shaped phospholipid phosphatidylcholine and a conical phospholipid (phosphatidic acid or phosphatidylserine). The molar ratios (phosphatidylcholine/phosphatidic acid 2 : 1, phosphatidylcholine/phosphatidylserine 4 : 1) were optimized for the induction of nonbilayer phospholipid arrangements [14]. Nine micromoles of phospholipid mixture was dissolved in 1 mL diethyl ether and 330 L of TS buffer (10 mM Tris-HCl, 1 mM NaCl, pH 7), mixed and sonicated three times in a G112SPI sonicator (Laboratory Supplies, Hicksville, NY, USA). The diethyl ether was then removed under a stream of oxygen-free dry nitrogen at reduced pressure, using a rotary evaporator at 37 ∘ C. The liposomes were filtered through 0.45 m MF-Millipore membranes (Billerica, MA, USA) to homogenize their size.
To induce the formation of nonbilayer phospholipid arrangements, liposomes in TS buffer were incubated for 30 min at 37 ∘ C in the presence of 0.5-4 mM MnCl 2 , 0.5-3 mM chlorpromazine, and 4-32 mM procainamide [14]. All of the final preparations of liposomes were negative for LPS contamination, as assessed by the gel clot LAL method (Charles River Endosafe, Charleston, SC, USA). (c) Journal of Immunology Research The detection of nonbilayer phospholipid arrangements by flow cytometry was previously validated by freeze-fracture electron microscopy and 31 P-NMR spectroscopy [10,14,15]. Therefore, in this study we only used flow cytometry to demonstrate the formation of these arrangements on liposomes. Liposomes and liposomes with nonbilayer phospholipid arrangements in TS buffer were analyzed with a FACSCalibur flow cytometer (Becton Dickinson, San Jose, CA, USA) with CellQuest software. Ten thousand events were acquired for each sample.
To assess TLR activation, the cell lines were incubated in the presence of liposomes made of phosphatidylcholine/phosphatidic acid (2 : 1), alone or with nonbilayer phospholipid arrangements induced by Mn 2+ (2-4 mM). As a negative control, the liposomes with nonbilayer arrangements were previously incubated with 0.1 mM chloroquine. For the positive controls, the cell lines were incubated in the presence of their known TLR agonists: 100 ng/mL Escherichia coli 0111:B4 LPS for HEK-TLR-4/MD2/CD14, 1 g/mL FSL-1 (a synthetic lipoprotein derived from Mycoplasma salivarium) for HEK-TLR-2/TLR-6, 1 g/mL Salmonella typhimurium flagellin for HEK-TLR-5, and 2.5 g/mL ssRNA40 (a 20 mer phosphorothioateprotected single-stranded RNA oligonucleotide containing a GU-rich sequence) for HEK-TLR-8. All TLR agonists were sourced from InvivoGen. After 24 h, the cell culture supernatants were harvested and assayed for IL-8 production (BD OptEIA Set Human IL-8, BD Biosciences, San Diego, CA, USA). NF-B activation was assayed in cell culture extracts using the reporter plasmid pNiFty-Luc (Promega Corporation, Madison, WI, USA).
In order to determine if chloroquine affects the viability of HEK293 cells, the LIVE/DEAD Fixable Violet Dead Cell Stain Kit (Invitrogen) was used. HEK293 cells were incubated with 0.05, 0.1, and 0.5 mM of chloroquine for 24 h at 37 ∘ C and 5% CO 2 . The cells were then transferred to a tube and stained with 50 L of LIVE/DEAD diluted 1 : 100 in distilled water and incubated for 15 min at room temperature in the dark. FACS lysis buffer (1 mL; Becton Dickinson) was added for erythrocyte lysis, and the cells were incubated for 10 min at room temperature in the dark. The cells were washed with 2 mL of phosphate-buffered saline (PBS) and resuspended in 300 L of PBS and analyzed by flow cytometry. Forty thousand events were acquired for each sample with a LSR Fortessa cytometer (Becton-Dickinson).
To evaluate whether chloroquine can induce apoptosis of HEK293 cells, the Annexin V-propidium iodide staining method was used. HEK293 cells were incubated with 0.05, 0.1, and 0.5 mM of chloroquine for 24 h at 37 ∘ C and 5% CO 2 . The cells were then transferred to a tube and washed with 1 mL of Annexin V-binding buffer (eBioscience, San Diego, CA, USA). 100 L of 2 g/mL Annexin V-APC (eBioscience) in Annexin V-binding buffer was added, and the cells were incubated for 15 min at room temperature in the dark. The cells were washed with 1 mL of Annexin V-binding buffer, resuspended in 100 L of the same buffer containing 1 g of propidium iodide (BioLegend, San Diego CA, USA) and incubated for 15 min at room temperature in the dark. The cells were washed and resuspended in the Annexin V-binding buffer and analyzed immediately by flow cytometry. Forty thousand events were acquired for each sample in a LSR Fortessa cytometer (Becton-Dickinson).
Mouse Model of Autoimmune Disease Resembling Human
Lupus. Forty female 2-month-old specific-pathogen-free BALB/c mice were divided into four groups. The first and the second groups were injected intrasplenically, on days 1 and 15, with phosphatidylcholine/phosphatidic acid (2 : 1) liposomes that had been incubated with 5 mM MnCl 2 (Mn group) or 3 mM chlorpromazine (CPZ group). Mice received the same amount of liposomes by intraperitoneal injection on day 30 and then every week for 6 months [10]. The negative control groups consisted of 10 mice treated in the same way but using TS buffer alone (Control group I), or liposomes made of phosphatidylcholine/phosphatidic acid (2 : 1) alone (Control group II).
Blood was taken from mice before liposome injection and each month after the first intraperitoneal injection, for a total Journal of Immunology Research 5 of 6 months. Sera were heated at 56 ∘ C for 30 min to inactivate complement and frozen in aliquots at −70 ∘ C. To confirm that these mice developed the disease resembling human lupus, we measured anti-nonbilayer phospholipid arrangements, anti-cardiolipin, anti-histone, and anti-coagulant antibodies in their sera. Anti-nonbilayer phospholipid arrangements antibodies were measured by ELISA where the wells were coated with liposomes with or without nonbilayer phospholipid arrangements [14]. Anti-cardiolipin and anti-histone antibodies were also measured by ELISA. Results are reported as arbitrary units (AU) calculated as (AsP − AsW)/(AsH − AsW), where AsP is the absorbance obtained with the sera of mice injected with the liposomes, AsH is the absorbance obtained with the sera of mice before the injection of liposomes, and AsW is absorbance of controls without sera [10]. A modification of the kaolin-activated thromboplastin time test was used to determine the anti-coagulant antibodies; results are reported as the coagulation time in seconds [21].
Three mice from each of the four groups indicated above were euthanized 4 months after the first injection of nonbilayer phospholipid arrangements, when they had the highest titers of anti-nonbilayer phospholipid arrangements, anti-cardiolipin, anti-histone, and anti-coagulant antibodies, and their spleens were used for gene and protein expression studies. The experimental protocols for animal care and use were reviewed and approved by the Bioethics Committee of our Institution according to the "Guide for the Care and Use of Laboratory Animals," which was published by the US National Institute of Health [22].
Evaluation of Gene and Protein Expression in Mouse
Spleens. Mouse spleens were sectioned and placed in two cryotubes, one with RNAlater (Invitrogen) for RNA expression studies and one with Tissue-Tek (Sakura Finetek, Torrance, CA, USA) for protein analysis. The cryotubes were stored at −70 ∘ C until use. To isolate RNA, the tissue stored in RNAlater was thawed and disaggregated at 15,000 rpm with a TissueRuptor (Qiagen, Valencia, CA, USA), and total RNA was extracted from the tissue homogenates using an RNeasy Mini Kit (Qiagen). The quality and quantity of the RNA samples were assessed in an Agilent BioAnalyzer 2100 (Agilent, Palo Alto, CA, USA) and a NanoDrop 2000 (Thermo Fisher Scientific, Auburn, AL, USA), respectively; only RNA samples with a RNA integrity number (RIN) ≥ 7 were used for the gene expression analysis.
Total RNA (400 ng) was amplified and labeled using the Quick Amp Labeling Kit (Agilent), and the cyanine-3-or cyanine-5-labeled cRNA was purified with an RNeasy Mini Kit (Qiagen). The cRNA were hybridized to 4 × 44 K whole mouse genome microarray chips (Agilent, G4122F); the microarrays were scanned with an Agilent Microarray scanner (G2565BA) and the data were extracted with Agilent Feature Extraction software (v.9.5.3.1). Normalization was performed with GeneSpring GX 11.0 software (Agilent). The cutoff for over-and underexpressed genes was set at a mean fold change log 2 ratio greater than +2 or lower than −2, as assessed by two-way analysis of variance (ANOVA; Partek Pro software, Partek Inc., St. Charles, MO, USA) with < 0.01 [23].
To evaluate protein expression, the spleen samples stored in Tissue-Tek were thawed, rinsed with PBS, and disaggregated at 15,000 rpm with a TissueRuptor (Qiagen). The homogenates were centrifuged at 5,000 ×g for 5 min at 4 ∘ C, and the supernatants were used to measure C3 (ELISA Kit TLR-4 was measured by flow cytometry in cells obtained from fresh spleens, which were disaggregated and passed through a 70 m nylon mesh. The cells were labeled with a fluorescein isothiocyanate-(FITC-) conjugated anti-F4/80 antibody (BioLegend), a PE-conjugated rat anti-mouse TLR-4 antibody (BioLegend), and Fixable Viability Dye 450 (eBiosciences) and acquired in a FACSCalibur flow cytometer. Single viable cells were analyzed, and the percentage of F4/80 + TLR-4 + cells of the total live cells was determined.
Evaluation of T, NKT, Dendritic, and B Cells in Mouse
Spleens. The spleens of three mice from the groups injected intrasplenically with liposomes without nonbilayer phospholipid arrangements or with liposomes bearing nonbilayer phospholipid arrangements were placed in fluorescenceactivated cell sorting (FACS) buffer containing 0.1% BSA and 0.01% sodium azide (Sigma Aldrich). Spleens were disaggregated and passed through a 70 m nylon mesh. Red blood cells were lysed and spleen cells were resuspended in FACS buffer. Before staining, cells were incubated with Universal Blocking Reagent (Block Biogenex, San Ramón, CA, USA) in PBS for 10 min at 4 ∘ C and then washed.
Lipopolysaccharide Increases the Complexity of Liposomes.
We had previously shown that the presence of nonbilayer phospholipid arrangements can be detected by flow cytometry as an increase in side scatter (SSC) value [10,14,15]. Thus, the increase in SSC signal after the addition of Mn 2+ , chlorpromazine, or procainamide to liposomes made of phosphatidylcholine/phosphatidic acid or phosphatidylcholine/phosphatidylserine indicated the presence of nonbilayer phospholipid arrangements (Figures 1(c)-1(d) and 1(f)-1(g)).
As a negative control, we added 5 mM of Mg 2+ to liposomes (Figures 1(c), 1(d), and 1(e)); Mg 2+ does not induce the formation of nonbilayer phospholipid arrangements, as was previously shown for phosphatidylcholine/phosphatidic acid liposomes [15]. Liposomes made of the cylindrical lipid phosphatidylcholine, without any conical lipid, did not increase in complexity in the presence of Mn 2+ or Mg 2+ (Figure 1(e)), chlorpromazine, or procainamide (data not shown). The addition of LPS caused an increase in SSC signal when it was used alone (Figure 1(h)) or in combination with Mn 2+ (Figure 1(i)), which suggests that LPS modifies the lipid bilayer. The addition of 0.1 mM chloroquine, a drug that blocks or reverses the formation of nonbilayer phospholipid arrangements [14], decreased the liposome complexity induced by procainamide or Mn 2+ (Figures 1(g)-1(j)) or chlorpromazine (data not shown).
The production of IL-8 by HEK-TLR-4/MD2/CD14 or HEK-TLR-2/TLR-6 cells in response to liposomes with Mn 2+ -induced nonbilayer phospholipid arrangements was dose-dependent, and the effect was inhibited by chloroquine (Figure 2(b)). Cell viability in the presence of chloroquine was 90% or higher, and chloroquine did not induce apoptosis of these cells at the tested concentrations (see Supplementary Figure 1 in Supplementary Material available online at http://dx.doi.org/10.1155/2015/369462). Thus, the effects observed in the presence of chloroquine can be attributed to a reversion of Mn 2+ -induced nonbilayer phospholipid arrangements by this drug.
Additionally, we found that nonbilayer phospholipid arrangements induce the production of the proinflammatory cytokine TNF-by BMDM from BALB/c mice. The production of TNF-induced by smooth liposomes was significantly lower. Furthermore, anti-TLR-2 and anti-TLR-4 antibodies blocked the production of TNF-by BMDM in response to nonbilayer phospholipid arrangements (Figures 2(c)-2(d)).
Proinflammatory Cytokines Are Found in the Sera of
Mice with a Disease Resembling Human Lupus. Liposomes with nonbilayer phospholipid arrangements induced by Mn 2+ or chlorpromazine were used to produce an autoimmune disease resembling human lupus in mice. Antibodies against nonbilayer phospholipid arrangements were detected 1 month after the first injection of liposomes with nonbilayer phospholipid arrangements, and the titers in mice injected with chlorpromazine-induced nonbilayer phospholipid arrangements were higher than in those injected with Mn 2+ -induced nonbilayer phospholipid arrangements ( < 0.001). These antibodies appeared 1 month before the anticardiolipin, anti-histone, and anti-coagulant antibodies (Figures 3(a), 3(b), 3(c), and 3(d)). The presence of the four autoantibodies confirmed that the disease had been developed in the mice. Control mice injected with TS buffer or with liposomes without nonbilayer phospholipid arrangements did not generate any of the four autoantibodies.
IL-6, IL-10, IL-12p70, IFN-, TNF-, and MCP-1 were found in the sera of mice injected with liposomes with Mn 2+induced nonbilayer phospholipid arrangements; IL-6, IFN-, and TNF-appeared 1 month after treatment, while IL-10, IL-12p70, and MCP-1 were found after 2 months. These cytokines were also found in the sera of mice injected with chlorpromazine-induced nonbilayer phospholipid arrangements; IL-6, TNF-, and MCP-1 appeared 1 month after treatment, while IFN-and IL-12p70 were found 4 months after treatment. None of the tested cytokines were found in the sera of mice treated with smooth liposomes (Figure 4).
C3, C5, TLR-4, and TLR-4-Signaling Molecules and IFN-Are Overexpressed in the Spleens of Mice with an
Autoimmune Disease Resembling Human Lupus. We evaluated gene expression in the spleens of mice from the four treatment groups: group 1, mice injected with TS buffer alone (Control I); group 2, mice injected with smooth liposomes (liposomes without nonbilayer phospholipid arrangements, Control II); group 3, mice that received liposomes with Mn 2+ -induced nonbilayer phospholipid arrangements (Mn group); and group 4, mice that received liposomes with Journal of Immunology Research chlorpromazine-induced nonbilayer phospholipid arrangements (CPZ group). Spleens were collected 4 months after treatment, and the cRNA derived from the spleens of three mice from each group were pooled and hybridized to a whole mouse genome microarray chip. No significant differences were found between Control I and Control II groups; 426 genes were overexpressed and 62 genes were underexpressed in the Mn group, compared with the Control II group; 542 genes were overexpressed and 73 genes were underexpressed in the CPZ group, compared with the Control II group; and 383 genes were overexpressed and 44 genes were underexpressed in the CPZ group, compared with the Mn group. Table 1 shows a list of genes that were overexpressed in both the Mn and CPZ groups, compared with the Control II group. This includes genes for complement components (C3 and C5), molecules involved in the presentation of exogenous antigens, in the production of antibodies, and in TLR-4 and NOD-2 signaling. Table 1 also shows a list of genes that were underexpressed in both the Mn and CPZ groups, compared with the Control II group. These are genes for molecules that are involved in apoptosis and in NK cell recognition. The C3 and C5 complement proteins were increased in the Control I and Control II groups, compared with the Mn and CPZ groups. However, C3a and C5a, two active fragments that are produced by C3 and C5 cleavage, were increased in the Mn and the CPZ groups, compared with the Control I and Control II groups (Figures 5(a)-5(b)). IFNwas also increased in the spleens of mice with the autoimmune disease, compared with healthy mice (Figure 5(c)). The number of cells expressing TLR-4 increased in the Mn and the CPZ groups, compared with the Control I and Control II groups ( Figure 5(d)).
NKT, Dendritic, and B Cells Are Increased in the Spleens of
Mice with an Autoimmune Disease Resembling Human Lupus. Genes that were over-or underexpressed in mice injected with Mn-induced nonbilayer phospholipid arrangements (Mn group) or chlorpromazine-induced nonbilayer phospholipid arrangements (CPZ group), compared with mice injected with liposomes without nonbilayer phospholipid arrangements (Control II). The cutoff for over-and underexpressed genes was set as mean fold change log 2 ratio greater than +2 or lower than −2, as assessed by two-way ANOVA, with p < 0.01. Activated CD4 and CD8 T cells (Figures 6(a), 6(b), 6(c), and 6(d)), NKT cells (Figures 6(e)-6(f)), activated dendritic cells (Figures 6(g)-6(h)), and activated and TLR4 expressing B1 and B2 cells (Figures 6(i), 6(j), and 6(k)) were identified by flow cytometry in the spleens of mice. Fifteen days after the mice were injected intrasplenically with liposomes bearing nonbilayer phospholipids arrangements induced by Mn or chlorpromazine, the percentage and activation of CD4 and CD8 T cells were not increased, compared with the control mice that received TS buffer or liposomes alone (Figures 6(l)-6(m)). In contrast, the percentage of NKT, dendritic, and B2 cells was increased (Figures 6(n), 6(o), and 6(q)), and the activation of dendritic and B2 cells was also increased (Figures 6(o)-6(q)). An increase in TLR4 expression was also observed in B2 cells (Figure 6(q)). B1 cells did not increase in percentage, but the number of activated and TLR4 expressing B1 cells did increase (Figure 6(p)).
Discussion
SLE is a systemic autoimmune disease of unknown etiology characterized by B and T cell hyperactivity, by defects in the clearance of apoptotic cells and immune complexes, and by production of a complex mixture of various cytokines, chemokines, signaling molecules, and pattern-recognition receptors involved in immunity [4,24]. We have previously demonstrated that liposomes with nonbilayer phospholipid arrangements trigger a disease that resembles human lupus in mice and that IgM and IgG specific to nonbilayer phospholipid arrangements are produced in these mice. Now, we demonstrate that nonbilayer phospholipid arrangements are agonists for TLR-4/MD-2. The activation of this innate immune receptor leads to the production of proinflammatory cytokines; a proinflammatory environment is needed for efficient activation of the adaptive immune response and the production of IgG antibodies. These findings were supported by the increase in the percentage of NKT cells and by the increase in the percentage and activation of dendritic and B2 cells. In addition, the activation of TLR-4/MD2/CD14 by liposomes with Mn 2+ -induced nonbilayer phospholipid arrangements supports our hypothesis on the similarity of the structure of conical phospholipids, which form an inverted micelle inside the nonbilayer arrangement, with the conical association of the acyl chains of the lipid A moiety of LPS. The importance of the lipid A moiety of LPS was taken into account in the design of glucopyranosyl lipid A (GLA), a synthetic lipid A with six acyl chains and a single phosphate group. GLA as a stable oil-in-water-emulsion (GLA-SE) is a TLR-4 agonist, which signals through MyD88 and TRIF and drives a polyclonal T H 1 response in vivo, characterized by IFN-, TNF-, and IL-2 producing cells and IgG2c isotype switching [25,26].
We performed our TLR activation assays in HEK cells transfected with various human TLRs. Interestingly, we also found that nonbilayer phospholipid arrangements induce the production of the proinflammatory cytokine TNFby BALB/c mouse BMDM. Furthermore, anti-TLR-2 and anti-TLR-4 antibodies blocked the production of TNF-by these macrophages in response to nonbilayer phospholipid arrangements. These findings confirmed our observations with the HEK cells transfected with human TLRs, which also showed that nonbilayer phospholipid arrangements are agonists for TLR-4/MD-2 and TLR-2/TLR-6.
We observed that liposomes with nonbilayer phospholipid arrangements were agonists for TLR-2/TLR-6, but the activation was 3-fold lower than for TLR-4/MD2/CD14. Bacterial macroamphiphilic molecules, such as lipoproteins (including the synthetic lipoprotein FSL-1), lipoteichoic acids, lipoglycans, glycolipids, and lipoarabinomannans, are anchored on bacterial envelopes through a lipidic structure, which is usually a diacylglyceryl moiety. These amphiphilic molecules are mainly recognized via their lipid anchor through TLR-2, alone or as a heterodimer with TLR-1 or TLR-6 [27,28]. Because the liposomes bearing nonbilayer phospholipid arrangements are made of phosphatidylcholine and phosphatidate, which also have the diacylglyceryl moiety, it is possible that this lipid moiety activated the TLR-2/TLR-6 heterodimer.
TLRs not only recognize pathogen-associated molecular patterns, such as LPS, but also recognize damage-associated molecular patterns, which are released by cells that are either under stress or undergoing apoptosis or necrosis [29]. Examples of damage-associated molecular patterns that are TLR-4 agonists include heat-shock protein 60, fibronectin, fibrinogen, -defensins, and hyaluronan. The molecular structure of these agonists is different from that of LPS, but they all have hydrophobic regions, which are probably recognized by TLR-4 [30]. The modification of the lipid bilayer of cell membranes could be a signal of cell stress: nonbilayer phospholipid arrangements are normally transitory, but if they are stabilized by Mn 2+ or by the drugs chlorpromazine or procainamide, they could activate the innate immune response via TLRs and then induce the production of antibodies, with the subsequent development of an autoimmune disease.
TLR-4 signaling leads to the activation of NF-B and the production of proinflammatory cytokines, including TNF-, IL-12, and IFN-, and chemokines, such as MCP-1. We found these cytokines and chemokines in the sera of mice treated with liposomes with Mn 2+ -or chlorpromazineinduced nonbilayer phospholipid arrangements. The increase in the concentration of the proinflammatory cytokines IL-6 and TNF-correlated with the appearance of anti-nonbilayer phospholipid arrangement antibodies 1 month after the first injection of mice with nonbilayer phospholipid arrangements, and this also corresponds to the period of disease onset. The chemokine MCP-1 and the proinflammatory cytokines INF-and IL-12p70 increased between months 2 and 4 and correlated with the development and establishment of the disease, given by an increase in the titers of antinonbilayer phospholipid arrangement antibodies and the presence of anti-cardiolipin, anti-histone, and anti-coagulant antibodies. IL-10 was only detected in mice that received Mn 2+ -induced nonbilayer phospholipid arrangements. The proinflammatory cytokines IL-1, IL-6, IFN-, and TNF-and the immunomodulatory cytokines IL-10 and tumor growth factor-(TGF-) have been identified as important players Liposomes alone Figure 6: Dendritic, B1, and B2 cells are activated in mice with an autoimmune disease resembling human lupus. To analyze the percentage and activation of immune cells, cell suspensions from the spleens of mice injected with TS buffer, liposomes alone, or liposomes incubated with Mn or CPZ were labeled with antibodies and analyzed by flow cytometry. Gating strategy for the identification of activated CD4 (CD3 + , CD4 + , CD8 − , and CD69 + ) (a-c) and CD8 (CD3 + , CD4 − , CD8 + , and CD69 + ) (a, b, and d) T cells; NKT cells (CD3 + , NK1.1 + ) (e, f); activated dendritic cells (MHCII + , CD11c ++ , CD80 + , and CD86 + ) (g, h); activated B1 (CD19 + , CD5 + , and CD69 + ) and B2 (CD19 + , CD5 − , and CD69 + ) cells; and expression of TLR-4 (i-k). Percentage of total NKT (n); total and activated CD4 (l) and CD8 (m) T cells, dendritic cells (o), and B1 (p) and B2 (q) cells. The expression of TLR-4 was evaluated on B1 (p) and B2 (q) cells. Kruskal-Wallis test with Dunn's post-test was used for statistical analysis; significance was set at < 0.05. Asterisks represent statistically significant differences between the indicated groups ( * < 0.05, * * < 0.01).
in the development of SLE [31,32]. The cytokine pattern that we report indicates another similarity of this mouse model with the human disease.
TLR-4 was increased at the mRNA level and the number of cells that express TLR-4 increased in the spleens of mice that received liposomes with nonbilayer phospholipid arrangements. Other genes associated with TLR-4 signaling, such as Tram, Trif, Tbk1, and Irf3, were also increased at the mRNA level in these mice. These genes are associated with TRIF-dependent, but not MyD88-dependent, TLR-4 signaling [33]. TRIF-dependent TLR-4 signaling leads to the production of IFN-and . IFN-was increased in the spleens of mice that had received liposomes with nonbilayer phospholipid arrangements compared with healthy mice, and increased levels of IFN-and are reported in patients with SLE [34].
The expression of genes associated with the classical pathway of complement activation (C1ra, C1s, C1q, C3, C5, and C7) was increased in mice that had received liposomes with nonbilayer phospholipid arrangements, and this mRNA increase correlated with the detection of C3a and C5a proteins in the spleens of mice with the autoimmune disease.
Complement has an important role in the immune response, but it also has the potential to cause tissue damage, as has been reported in SLE and other autoimmune diseases [35,36]. It will be interesting to evaluate the role of complement in the tissue damage that is observed in this mouse model of autoimmune disease.
In contrast, the expression of genes associated with NK cell activation (Klrb1a, Klrb1c, Klra23, Klra7, Gzmb, and Klra22) was decreased in mice that received liposomes with nonbilayer phospholipid arrangements. This decrease could reflect a reduction in the absolute number of NK cells or a lower activation of the existing NK cells. The expression of genes associated with apoptosis (Casp8, Cycs, Apaf1, and Aaifm1) was also decreased in mice that received liposomes with nonbilayer phospholipid arrangements. This could be relevant for disease development, since deficient apoptosis could favor the survival of autoreactive T cells.
An important additional support for our hypothesis on the effect of nonbilayer phospholipid arrangements on the innate immune response is our finding that mice with the autoimmune disease resembling human lupus have an increase in NKT and dendritic cell percentages, together with increased dendritic cell activation. These cells could recruit and activate B1 and B2 cells, which are the precursors of plasma cells that produce antibodies against nonbilayer phospholipid arrangements.
Conclusions
The findings reported in this paper are consistent with a mouse model in which nonbilayer phospholipid arrangements directly activate TLR-4 and TLR-2/TLR-6 and lead to the production of proinflammatory cytokines. The proinflammatory environment leads to the efficient activation of the adaptive immune response to the production of IgG antibodies specific for nonbilayer phospholipid arrangements. These antibodies bind to the nonbilayer phospholipid arrangements that are transitorily formed on the surface of many cells and cause cell lysis; the exposure of intracellular antigens could then lead to the formation of anti-cardiolipin, anti-histone, and anti-coagulant antibodies. Furthermore, the inflammatory environment can cause complement-mediated tissue damage and IFN-production. Thus, this mouse model of autoimmune disease recapitulates many features of human lupus. | 7,467.6 | 2015-10-19T00:00:00.000 | [
"Biology"
] |
Distribution of Human Papillomavirus Genotypes among the Women of South Andaman Island, India
Background: Human Papillomavirus (HPV) causes various types of cancer in both men and women. Woman with HPV infection has a risk of developing invasive cervical cancer. Globally, HPV 16 and 18 were predominant. This study aims to find the distribution of various HPV types in South Andaman. Methods: A cross-sectional study was conducted among women in South Andaman, where cervical scrapes were collected after collecting written informed consent. Detection of HPV genotypes was carried out by using a PCR assay. Further, sequencing analysis was performed using MEGA11 to identify various genotypes in this territory. Result: Of these 1000 samples, 32 were positive for HR-HPV 16, and four were positive for HR-HPV 18. Fifteen HPV genotypes were detected using molecular evolutionary analysis. Six cases were identified with multiple genotypes. The most prevalent genotype is HPV 16 which belongs to Lineage-A and sub-lineage A2. HPV 18 identified in South Andaman belonged to the lineage A1 to A5. Discussion: Various HPV types were identified among women in South Andaman. Global burden of cervical cancer associated with various HPV sub-lineages. HPV-16 A1 sub-lineage was globally widespread, whereas sub-lineages A1, A2 and D1 prevailed in South Andaman. Conclusions: HR-HPV identified in this study enlightens the importance of HPV vaccination among women in remote places. These findings will help to strengthen public health awareness programs and prevention strategies for women in remote areas.
Background
HPV is responsible for most human reproductive tract viral infections. Most HPV infections are asymptomatic and self-limiting; chronic infections can progress into warts in precancerous, cervical, anogenital or oropharyngeal regions in men and women. Cervical cancer was the most frequent HPV associated disease. Though the majority of HPV precancerous lesions have a tendency to disappear on their own, there remains a risk for every woman with HPV infection to become persistent and pre-cancerous leading to invasive cervical cancer [1].
Cervical cancer is the leading cause of death in women. According to Global Cancer Observatory (GLOBOCAN) 2020, the incidence rate of cervical cancer was 15.6%, and the mortality rate was 8.8% worldwide. The age-specific standardized rate of cervical cancer was 13.3%. In Asia, the incidence rate of cervical cancer was 58.2 per cent. The five-year prevalence of cervical cancer was 59.5% in Asia. In India, the incidence and mortality rates were 16.2% and 9.5%, respectively. The proportion of cervical cancer in India was 7.9 per 100,000 [2]. In India, 22% of women have undergone cervical screening examinations based on aNational Family Health Survey (NFHS) report [3]. According to the World Health Organization (WHO), 99% of cervical cancer cases were associated with high-risk human papillomavirus (HR-HPV) [1]. Among the Indian population, the prevalence of cervical cancer was higher among sex workers in an urban slum in Mumbai and HIV-positive women. HPV 16 and 18 were observed in 56% of cases in the West Indian region [4,5]. HPV belongs to the family Papillomaviridae family, and it is a small, non-enveloped circular double-stranded DNA virus. The DNA molecule is 8000 base pairs in size, and the genome has six early (E) E1-E2, E4-E7 regions and two late (L) L1 and L2 regions [6].
Andaman & Nicobar Islands are situated in the southern regions of the Bay of Bengal in the Indian Ocean, closer to Indonesia and Thailand. According to the Census of India (2011), the territory's population was 380,581, and the female population was 177,710 (46.7%). The literacy rate of the Andaman and Nicobar Islands is 77.3% [11].
Our previous study in Andaman and Nicobar Islands reported the HR-HPV types (HPV 16 and 18). This was the first of its kind study to find HPV types in these islands [12]. HPV variants were not studied in detail among the population in this region so far. It is necessary that the public health system should be aware of the circulating variants of local strain patterns of HPV to frame recommendations for developing appropriate broadspectrumvaccines aiming at HR-HPV variants.To our knowledge, this study was the first cross-sectional survey conducted among a large population across Andaman and Nicobar Islands. Further, the current study aims to know the variants of HPV among married women in the Andaman and Nicobar Islands.
Study Population
A community-based cross-sectional study was conducted among married women of reproductive age (18-59 years) residing in the South Andaman District of the Andaman and Nicobar Islands, India.
Exclusion Criteria
Patients were excluded if there was evidence of pregnancy, severe gynaecological bleeding, hysterectomy or previous history of the disease, including cancer, warts and other cutaneous manifestations.
Ethical Approval
This study has been approved by the Institutional Human Ethics Committee (IHEC) of the Indian Council of Medical Research-Regional Medical Research Centre (ICMR-RMRC), Port Blair [IEC No: 03/RMRC/29/06/2017].
Sampling and Sample Size
The target population was chosen via cluster sampling, and the sampling units were villages or municipal wards. After stratifying the sampling units into rural/urban strata, the required sample size was determined by random selection of the required sample size's units. Based on the Andaman population ratio, the study participants were drawn from rural villages and urban wards in a ratio of 2.5:1, yielding a sample of 700 from the rural and 300 from the urban.
Awareness Programmes
Initially, awareness programmes were conducted in each selected village/ward at the Anganwadi centres/community hall. The health care team (clinician along with trained nurses) were detailed about the health issues like cervical cancer, its symptoms, and genital hygiene. In addition, the need for the study was also explained and requested for written informed consent before enrollment.
Sample Collection and Storage
The enrolled women were called to the field clinics, which were held in the sub-centre, Primary Health Centre (PHC), Community Health Centre (CHC) and District Hospitals catering for the population of the particular village/wards. The cervical scrapes were collected using the standard procedure from the ectocervix or surface of the cervical portion using a cytobrush.Specimens were collected in a tube containing phosphate-buffered saline (pH 8.6) and transported to the laboratory in ICMR-RMRC, Port Blair, by maintaining a cold chain.
Sample Processing
Once the samples arrived at the laboratory, the specimen tubes were vortexed, cytobrushes were discarded, and tubes were centrifuged to pellet the cells, which were suspended in 1 mL of phosphate-buffered saline. Aliquots of each fresh specimen were made and stored for a short duration at −20 • C until further processing.The analysis of HPV DNA was performed in the molecular biology laboratory of ICMR-RMRC, Port Blair.
DNA Extraction
The total DNA was extracted with the QIAamp DNA Minikit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. The DNA was eluted in 45 µL of elution buffer.
PCR Assays
The isolated DNA was amplified with ß-globin (internal control) to ensure the purity of the DNA extractions, as described previously [13].
To confirm the HPV infection, the DNA of all the samples were subjected to PCR amplification targeting the L1 consensus gene by a standard procedure reported previously. The results were recorded as positive if amplicon size specific to the 450 bp DNA band was observed in agarose gel electrophoresis.
Detection of HPV 16 & 18
PCR for the detection of type-specific HR-HPV 16 & 18 in the predominant genotypes was performed [13,14]. In addition, the E6 and E7 genes of HPV 16 and the E6 gene of HPV 18 were also amplified to identify the lineages and sub-lineages of HR-HPV 16 & 18 in the South Andaman Islands [14].
PCR Sequencing
The DNA sequence analysis was carried out to confirm the HPV types distributed in South Andaman. L1 gene PCR amplicons of all the samples negative for HPV 16 and 18.as well as 7 samples of HPV 16 confirmed, were subjected to DNA sequence analysis. In addition, the E6 gene and E7 gene PCR amplicons of HPV 16 and the E6 gene of HPV 18 were also subjected to DNA sequencing.DNA sequencing was carried out by the Sanger sequencing method with corresponding primer sets [15].
Phylogenetic Analysis
The DNA sequences were assembled using the MEGA11 software tool and were analysed together with worldwide diverse HPV sequences using ClustalW multiple alignments and pairwise alignment for phylogenetic analysis and were subsequently analysed using Kimura's two-parameter model as a method of substitution and neighbour-joining to reconstruct the phylogenetic tree. The statistical significance of the relationships obtained was estimated by bootstrap resampling analysis (1000 repetitions). A similar analysis was performed for the E6 and E7 genes of HPV 16 and E6 gene HPV 18.
The phylogenetic trees depicting the evolutionary relationship between taxonomic groups were generated for L1 genes of HPVs, E6 and E7 genes HPV 16, and E6 gene of HPV 18 sequences using molecular evolutionary genomic analyser software MEGA 11 [16]. Genetic distances were calculated by using the Kimura 2 parameter (K2P) model at the nucleotide level, and phylogenetic trees were constructed by using the neighbour-joining method. The reliability of the phylogenetic trees was tested using the bootstrap test with 1000 bootstrap replications.
Results
All of the cervical samples tested positive for the β-globin gene, indicating that there were adequate cells in the samples. Out of 1000 samples screened, 50 specimens tested positive for HPV L1 gene amplification. Subsequently, type-specific PCR for HR-HPV 16 and 18 identified 32 patients positive for HR-HPV 16 and four patients positive for HR-HPV 18. DNA sequencing for the L1 region was successful for 24 samples. The molecular evolutionary genetic analysis of sequences from South Andaman and worldwide was performed, and the pairwise genetic distances between the closely related HPV types from worldwide are specified in Table 1, given below.
The distribution of the multiple genotypes of HPV in South Andaman could be determined by a combined analysis that uses a specific PCR and DNA sequence analysis. Table 2. lists the genotype, risk group, frequency, and percentage of HPV detected in South Andaman Island. The molecular evolutionary genetic analysis could identify the distribution of HPV types 16, 52, 58, 66, 33, 18, 73, 53, 30, 6, 61, 71, 81, 84 and 87. There were high-risk, as well as LR-HPV types prevalent among the women in South Andaman Island. Table 1).
The distribution of the multiple genotypes of HPV in South Andaman could be determined by a combined analysis that uses a specific PCR and DNA sequence analysis. Table 2. lists the genotype, risk group, frequency, and percentage of HPV detected in South Andaman Island. The molecular evolutionary genetic analysis could identify the distribution of HPV types 16, 52, 58, 66, 33, 18, 73, 53, 30, 6, 61, 71, 81, 84 and 87. There were high-risk, as well as LR-HPV types prevalent among the women in South Andaman Island. All the co-infected cases were found to have at least one HR-HPV type association. However, HPV type could not be identified in 2 samples due to exhaustion of specimens for repeated experiments.
Phylogenetic Analysis of E6 Gene
The majority of the HPV types distributed in South Andaman were found to be HR-HPV 16, followed by HR-HPV 18. The distribution of various lineages and sub-lineages in South Andaman Island was revealed by phylogenetic analysis of the HPV 16 partial E6 gene (Figure 2A). The phylogenetic analysis revealed that thirteen of the fourteen sequences were found to be associated with lineage A, and the remaining one was associated with lineage D. The majority (11) of HPV16 sequences were grouped with AF536179 which belongs to the sub-lineage A2 (K2P = 0.002%). The pairwise genetic distance between the two isolates from South Andaman (AN GP-20 and AN PV-59) and the European isolate (K02718) was found that the South Andaman isolates belong to the A1 lineage (0.000%). One isolate (ANMT-19) identified from South Andaman was grouped with the HQ644257, which belongs to the D1 lineage (K2P = 0.00%).
Analysis to identify the HPV18 sub-lineages revealed that the partial E6 gene of HPV18 had close genetic relatedness with reference sequences of HPV18 Lineages A. The E6 gene of HPV18 identified from South Andaman was associated with the lineage A1 to A5 (K2P = 0.00%) ( Figure 2B). However, there were no genetic differences within the lineages of the E6 partial gene region sequenced to identify the sub-lineage.
Phylogenetic Analysis of E7 Gene
The phylogenetic analysis of the E7 gene of HPV16 from the South Andaman district showed maximum genetic relatedness with lineage A (Figure 3). Hence the predominant lineage circulating in South Andaman was identified as lineage A. However, the analysis based on genetic distances between the sequences did not show apparent sub-lineage differentiation in the E7 gene, as seen in the E6 gene of HPV 16. The phylogenetic analysis revealed that thirteen of the fourteen sequences were found to be associated with lineage A, and the remaining one was associated with lineage D. The majority (11) of HPV 16 sequences were grouped with AF536179 which belongs to the sub-lineage A2 (K2P = 0.002%). The pairwise genetic distance between the two isolates from South Andaman (AN GP-20 and AN PV-59) and the European isolate (K02718) was found that the South Andaman isolates belong to the A1 lineage (0.000%). One isolate (ANMT-19) identified from South Andaman was grouped with the HQ644257, which belongs to the D1 lineage (K2P = 0.00%).
Analysis to identify the HPV 18 sub-lineages revealed that the partial E6 gene of HPV 18 had close genetic relatedness with reference sequences of HPV 18 Lineages A. The E6 gene of HPV 18 identified from South Andaman was associated with the lineage A1 to A5 (K2P = 0.00%) ( Figure 2B). However, there were no genetic differences within the lineages of the E6 partial gene region sequenced to identify the sub-lineage.
Phylogenetic Analysis of E7 Gene
The phylogenetic analysis of the E7 gene of HPV 16 from the South Andaman district showed maximum genetic relatedness with lineage A (Figure 3). Hence the predominant lineage circulating in South Andaman was identified as lineage A. However, the analysis based on genetic distances between the sequences did not show apparent sub-lineage differentiation in the E7 gene, as seen in the E6 gene of HPV 16.
Discussion
The current study provided the diversity of HPV among women in South Andaman Island. It is essential to comprehend the spectrum of HPV genotypes because data on the distribution of HPV genotypes are relevant to vaccine development. HPV16 and HPV18 cause more than 70 percent of cervical cancer cases, with the remaining cervical cancers caused by other HR-HPV genotypes [17].
HPV16 genetic variation may have a significant impact on cervical cancer risk. However, the global burden of cervical cancer associated with various sub-lineages is predominantly driven by past HPV16 sub-lineage distribution. HPV-16 A1 sub-lineage
Discussion
The current study provided the diversity of HPV among women in South Andaman Island. It is essential to comprehend the spectrum of HPV genotypes because data on the distribution of HPV genotypes are relevant to vaccine development. HPV 16 and HPV 18 cause more than 70 percent of cervical cancer cases, with the remaining cervical cancers caused by other HR-HPV genotypes [17].
HPV 16 genetic variation may have a significant impact on cervical cancer risk. However, the global burden of cervical cancer associated with various sub-lineages is predominantly driven by past HPV 16 sub-lineage distribution. HPV-16 A1 sub-lineage was globally widespread. However, sub-lineages A3 and A4 were common in Asia. Sub-lineages A3, A4 and lineage D were common in regions like East Asia and North America.
Sub-lineage A4 was associated with more severe disease status than A1-3 sub-lineages in Chinese females and a higher risk of cancer. These lineages were highly cancer-risk associated [23]. In addition, lineage A of HPV 16 was found to be the prevailing strain in Spain. Lineage D of HPV 16 was linked to a higher risk of CIN3+ and otherhigh-grade lesions [24]. A study conducted in Eastern India revealed the existence of A1, D1 and D2 lineages. Of these lineages, A1 sub-lineage was predominant among women with cervical carcinoma [25]. The previous studies in India revealed the HPV 16 A1 (European) sublineage was predominant among cervical carcinoma patients compared to D1 (North American) and D2 (Asian-American-1) [15,25]. The current study found sub-lineages A1, A2, and D1 to be prevailing in South Andaman. The majority of the isolates from the current study belonged to the A2 sub-lineage.
A study revealed that the A1 sub-lineage of HPV 18 was predominant in Central Asia, Northern America and Eastern Asia. Nevertheless, A2 sub-lineages of HPV 18 were predominant in Europe, North America, Northern Africa and South/Central Asia. In addition, B1 and B2 sub-lineages were predominant only in Sub-Saharan Africa. Further, C-lineages were also observed in the African region [26]. HPV 18 sub-lineage distribution in China belonged to A1 to A7 [27,28]. Another study from Iran found that the prevalence of sub-lineage A4 was high compared to other sub-lineages [29]. In Spain, the study revealed Lineage B of HPV 18 was related to the burden for CIN3+ compared with lineage A [24]. In the current study, the South Andaman isolates belonged to lineage A of HPV 18.
The identification of the genetic underpinnings responsible for the distinct carcinogenic properties exhibited by certain lineages of HPV 16 and HPV 18 has the potential to shed light on the intricate interactions between the viral agents and the human host. Such insights hold promise for enhancing our ability to effectively manage HPV infections and mitigate the incidence of cervical cancer.
Diversity in the distribution of HPV types gives rise to a challenge to vaccine strategies. Molecular surveillance of HPV is needed for the detection of new strains or types emerging among symptomatic and asymptomatic populations in these remote islands. This will help policymakers to implement preventive measures against HPV-associated cervical cancer. A study on the specificity of HPV variants will be helpful in developing broad-spectrum vaccines aiming at HR-HPV variants.
Conclusions
This is the first-ever community-based cross-sectional study conducted in the Andaman and Nicobar Islands which interestingly revealed the prevalence of a wide range of the genotype distribution of HPV among women in this small island. HPV 16 was the most predominant high-risk type found in the Andaman Islands. Various high and low-risk types were also revealed in this study. Phylogenetic analysis of the E6 gene found lineages A and D of HPV 16 in the Andaman Islands. Moreover, lineage A of HPV 18 was also identified through the phylogenetic analysis.Sequencing analysis of the HPV 16 E6 gene revealed that A2 sub-lineages of HPV 16 were predominantly reported as compared to other lineages. The findings in the current study provide sufficient data to highlight the importance of screening for cervical cancer and promote vaccination and vaccine awareness in women living in remote geographical locations. These findings also emphasise and help to initiate stronger public health awareness programs and prevention strategies for the women of the Andaman and Nicobar Islands.
Author Contributions:
The study concept and study design were contributed by M.N. and R.P. Data collection and sample collection were contributed by R.P. Data analysis, interpretation and critical evaluation were all contributed by R.P., M.N., P.V. and H.K. Manuscript writing and correction of the manuscript were by all the authors. The final article was approved by all the authors. All authors have read and agreed to the published version of the manuscript. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The datasets of the current study are available from the corresponding author upon reasonable request. | 4,527.6 | 2023-08-25T00:00:00.000 | [
"Biology"
] |
A risk perspective of estimating portfolio weights of the global minimum-variance portfolio
The problem of how to determine portfolio weights so that the variance of portfolio returns is minimized has been given considerable attention in the literature, and several methods have been proposed. Some properties of these estimators, however, remain unknown, and many of their relative strengths and weaknesses are therefore difficult to assess for users. This paper contributes to the field by comparing and contrasting the risk functions used to derive efficient portfolio weight estimators. It is argued that risk functions commonly used to derive and evaluate estimators may be inadequate and that alternative quality criteria should be considered instead. The theoretical discussions are supported by a Monte Carlo simulation and two empirical applications where particular focus is set on cases where the number of assets (p) is close to the number of observations (n).
Introduction
developed the theoretical foundation for the modern portfolio theory 1 providing investors with a tool to solve a key issue of how to distribute their wealth among a set of available assets. The problem was postulated as a choice of a portfolio mean return and variance of portfolio returns. This led to two principles where an investor for a given level of portfolio variance maximizes the portfolio return, or likewise for a given portfolio return minimizes the portfolio variance. Hence, according to Markowitz, an investor under such constraints needs only to be concerned about two moments of the assets multivariate distribution: the mean vector and the covariance matrix. In practical situations, these two quantities are unknown and must be estimated in order to perform the optimization. Many authors have shown that this optimization procedure often fails in practice (Bai et al. 2009;Best and Grauer 1991;Kempf et al. 2002;Merton 1980). Some authors go so far as to argue that the estimation error (sampling variance) dominates the procedure to the extent that the equally weighted non-random portfolio performs better than those optimized from data (Frankfurter et al. 1971;DeMiguel et al. 2009;Michaud 1989).
The estimation problem becomes particularly serious when the numbers of assets (say p) are close to the number of observations (n). This is mainly so because the sample covariance matrix becomes stochastically unstable and may not even be invertible. It then comes natural that the standard "plug-in" estimators, defined by simply replacing the unknown mean vector and covariance matrix by the standard text book estimators, should be replaced by some improved covariance estimator. A significant number of improvements over the plug-in estimator have been developed over the last decades (Frost and Savarino 1986;Wolf 2003, 2004). The vast majority of these fall into two categorizes or families of estimators. The first category is based on the fact that the sample covariance matrix is a poor approximation of the true covariance matrix and, therefore, the estimation problem is concerned with developing improved estimators of the covariance. These improved estimators are then simply substituted for the unknown parameter within Markowitz's optimal weight function. The other approach, which appears to be the more common one during recent years, relies on principles developed by Stein (1956) and James and Stein (1961). With such estimators, the standard estimator is weighted ("shrunken") toward a non-random target quantity. This type of estimators seems to have found a new renaissance in portfolio estimation theory, for which they appear to be particularly well suited. A sample of some recent developments includes Bodnar et al. (2018), Frahm and Memmel (2010), Golosnoy and Okhrin (2007), Kempf and Memmel (2006), Okhrin and Schmid (2007). Each of these methods naturally has its own merits and uses.
Investors want to know the basic properties and the relative risk of favoring one estimator over another before applying any specific method. There is, however, no consensus about how the concept of "risk" should be defined. From a statistical point of view, risk refers to some measure of the difference between a quantity of interest (which could be either random or fixed) and our inference target. Risk is usually expressed through moments of differences, such as the mean squared error (MSE), but could also involve forecast bias, or angles between true and estimated vectors. It is obvious that the optimality properties of any estimator depend on the specific risk measure, or quality criteria, being used to describe it. Indeed, it is well known that the ranking of estimators' performance, such as estimators of the inverse covariance matrix, may change or even reverse when evaluated with alternative loss functions (Haff 1979;Muirhead 1982).
While most recent papers in portfolio optimization theory have been concerned with the extremely important problem of developing efficient estimators of portfolio weights and related quantities, this paper focuses on the risks associated with these estimators. Specifically, the purpose of this paper is to compare and contrast a number of risk measures of the GMVP estimator to give investors and developers of statistical methods a fair understanding of their differences and similarities and, hence, a foundation for determining the weight estimator that is best suited for a given specific problem.
The paper proceeds as follows. In Sect. 2, the problem of minimum-variance portfolio estimation is stated, Sect. 3 introduces the concept of risk function, and Sect. 4 classifies different GMVP estimators. Section 5 describes the Monte Carlo study design and provides a discussion of the derived results. Section 6 outlines two empirical applications, and Sect. 7 summarizes the findings and concludes.
Preliminaries
We consider the problem of constructing an investment portfolio , defined as a weighted sum of risky assets, R = R 1 , . . . , R p . In order to construct a portfolio of assets, we define a vector of weights v ∈ R p under the common constraint v 1 = 1, where 1: p × 1 is a vector of ones. The mean and variance of R are defined by μ := E [R] and := Cov [R], respectively, where ∈ R p× p is positive definite by assumption. The variance of the portfolio excess return is uniquely minimized by the global minimum-variance portfolio (GMVP) which is given by solving the following minimization problem: and the well-known solution to (1) is given by: The expected return and the return variance of the global minimum-variance (MV) portfolio are given by The weight vector w depends on the unknown parameter −1 which needs to be estimated from data. The classical, or plug-in estimator, of the weight vector w is obtained by replacing in (2) by the inverse of the sample covariance matrix S := We define this estimator asŵ The distribution ofŵ I when sampling from a Gaussian distribution is well established (Okhrin and Schmid 2006;Bodnar and Zabolotskyy 2017). In particular, the estimator S −1 is, by the law of large numbers, a consistent estimator of −1 and the consistency of (3) follows directly. However, it is well known that S −1 is a poor approximation of −1 when the number of assets p is large relative to the number of observations n, and as a consequenceŵ I will not adequately approximate w. Because of this, a number of authors have suggested improved estimators of the GMVP weights. An obvious solution is to simply replace S −1 with a more efficient estimator of −1 . Another important family of improved estimators is given by Stein-type estimators, which, in terms of our portfolio estimation problem, are of the formŵ S = aŵ + bw 0 , where a and b are constants, and w 0 is a pre-determined reference portfolio, which is usually defined non-random. If ŵ − w 0 is small, the improvement in terms of the mean squared error E ŵ − w ŵ − w may be considerable.
The risk of portfolio estimators
Generally speaking, there are several ways to view the weight vector w and the formulation of the inference problem. When deriving statistical estimators and inference procedures for portfolio weights, there is no consensus regarding which quantity to optimize. For example, since w ∈ R p , it is natural to think of estimators in terms of estimating a parameter vector. Alternatively, upon noting that w = 1 −1 1 −1 −1 1 only depends on the unknown parameter −1 ∈ R p× p , the inference problem may be thought of as one concerned with estimating a matrix, a problem rather different from that of estimating a vector. Yet another view of the inference problem is the out-of-sample prediction variance (Frahm and Memmel 2010). It is obvious that the properties of any estimatorŵ will depend on the quality criteria, or risk function, being used to judge it and that no single estimator can optimize all relevant properties simultaneously. In fact, the performance ranking of estimators of w may be changed or even reversed when evaluated on alternative loss functions (Haff 1979;Muirhead 1982). An investor searching the literature for "the best" estimator of the GMVP is likely to end up with a battery of proposed estimators, each being "optimal" in some sense. In this section, we will present and discuss similarities and differences between a number of risk functions for the GMVP problem, some of which are commonly used while others appears to be new in the GMVP context. The L 2 -norm risk is commonly used to assess estimators of −1 alone, i.e., without respect to the bigger problem in which it is an ingredient. It is defined as follows: The 0 ˆ −1 risk is clearly naïve, in the sense that it is only indirectly related to the actual problem of estimating the optimal weight vector w. In other words, an estimator of −1 which is optimal with respect to 0 need not perform very well when substituted into Eq. (2).
A risk function that is more adequate for the portfolio weight problem may be derived as follows: The denominator of w = −1 1 1 −1 1 is merely a scaling factor used to impose the length-one condition. The quantity −1 1 contains all necessary information for determining w and the task of estimating w reduces to that of estimating −1 1. A risk function for the minimum-variance portfolio estimator may accordingly be defined by where is a positive semi-definite non-random matrix. Two important special cases are given by Frahm and Memmel (2010) utilize a somewhat different risk function defined by 2 ŵ, The 2 risk function has been used frequently in the context of estimating mean value vectors and in regression analysis (Anderson 2003;Efron and Morris 1976;James and Stein 1961;Muirhead 1982;Serdobolskii 2000;Srivastava 2002). This risk function explicitly evaluates the second-order moment properties (variance plus squared bias) of an estimatorŵ. It allows us to conveniently split mean square errors in different directions to w by appropriate choice of . Yet another risk criterion, the out-of-sample variance ofŵ R m , advocated by Frahm and Memmel (2010), is somewhat different from 1 and 2 . For some R m not included in the estimate ofŵ, this variance is determined by Following Frahm and Memmel (2010), this variance may be decomposed as follows: The out-of-sample variance may accordingly be decomposed into three terms. The first one, 1 1 −1 1 , is the risk due to randomness of assets and hence not subject to estimation issues. Frahm and Memmel (2010) argue that the third term, μ R Cov ŵ μ R , is negligible in most practical situations, and hence that the term Eŵ ŵ Rŵ is the one of main interest to us. The decomposition of 3 specified above is, however, not necessarily the most versatile one. An important concern with 3 is that it is a measure of the variance ofŵ R m but does not depend on the actual value that R m assigns.
A different expression of the out-of-sample variance may be obtained by conditioning on R m . We define the conditional out-of-sample variance as follows: Although 3 and 4 both describe the out-of-sample variance ofŵ R m , the difference between them is that the latter conditions on R m = r m and also emphasizes the bias. It therefore allows for investigation of the portfolio risk when R m assigns some specific value. The risk of a portfolio as a function of some estimatorŵ will be different when R m assigns a value close to E [R m ] compared to a scenario when R m assigns a value far from E [R m ]. Hence, 4 allows us to investigate the risk when, for example, the market reacts dramatically to a specific event. This matter becomes particularly important when using an estimator which is biased because In contrast, the forecast bias conditioned on E ŵ − w r m may be considerable since it increases linearly with r m . The concept of explicitly conditioning on a specific return value r m provides us with the possibility of assessing the behavior of a weight estimator under certain scenarios. For example, investors may be particularly interested in cases when the market is turbulent ( r m is large relative to σ 2 R ), the market reacts additively to an event (say r m → a + r m , a ∈ R p + ) or when the market is stable Remark (i) (Directional risks) Any estimator of the GMVP may be decomposed into components orthogonal and parallel to w: Let A + denote the Moore-Penrose pseudoinverse of some matrix A. Then, the component ofŵ parallel to w is given by v = w +ŵ w = w ŵ w w w and the component orthogonal to w is determined by u = ⊥ŵ where ⊥ = I − ww w w is a projection matrix (Rao 2008, pp. 46-47). We can thus decomposeŵ according toŵ = v + u, where v is parallel to w and u is orthogonal to w. Some special cases of the above-defined risk functions in the direction orthogonal to the GMVP, which is our direction of main interest, are then given by 1 ˆ −1 , ⊥ , 2 ˆ −1 , ⊥ , 3 (u) and 4 (u, r m ). Although the risk in a certain direction to w alone is of limited interest, it does provide some insight into the relative performance of one estimator to another. For example, it may be Remark (ii) (Implicit Covariance Matrix) There always exists a p.d. diagonal matrix such that P1 = Pw 0 or, equivalently, w 0 = P −1 P 1, where w 0 is any reference portfolio (Frahm and Memmel 2010, Theorem 8). The Stein-type estimator defined byŵ S = (1 − α)ŵ I + αw 0 is therefore associated with an "implicit" covariance matrix estimator in the sense that there exists a matrixˆ S may, or may not, be positive definite. If we define our implicit covariance matrix by This identity allows us to investigate the risk ofŵ S with respect to any risk function designed for estimators of the (inverse) covariance matrices. For example, although 0 ˆ −1 is a function ofˆ
Families of GMVP weight estimators
We will consider a few estimators for further investigation. The first one is the "standard estimator" defined in Eq.
(3). The next is suggested by Frahm and Memmel (2010) and is defined asŵ where . The estimator in Eq. (5) may be thought of as a weighted average between the traditional estimator and a reference portfolio, here represented by w 0 := p −1 1, although the reference portfolio could essentially be any non-random portfolio such that w 0 1 = 1.
An alternative estimator within the same family was proposed by Bodnar et al. (2018) who suggested the estimator Although the estimatorsŵ II andŵ III have shown great potential in improving the standard estimatorŵ I , improved estimators can be developed from a variety of different points of view. In particular, resolvent-type estimators, defined byˆ −1 k = (S + kI) −1 , k ∈ R + , have shown great potential in estimating the precision matrix, particularly in high-dimensional settings (Holgersson and Karlsson 2012;Serdobolskii 1985).
These estimators add a small constant to the eigenvalues before inversion, thereby creating a more stable estimator. They play an important role in spectral analysis (Serdobolskii 1985) but have also proved to be efficient in more applied problems (Holgersson and Karlsson 2012). The "regularizing" coefficient k imposes a (small) bias on estimators of the precision matrix and hence offers a form of variance-bias trade-off rather different from the Stein-type estimators. Since the poor performance of the standard plug-in estimator is largely due to high sample variance in the precision matrix, the resolvent estimators are interesting candidates for improved estimation of the GMVP.
While the Stein-type estimators depend on the regularizing coefficient κ, the esti-matorˆ −1 k depends on the coefficient k, which usually has to be determined from data. We will use the 0 risk to derive the optimal value of k, i.e., we search for the value of k which minimize 0 ˆ −1 has been derived by Serdobolskii (2000), defined as follows: which is obtained numerically. A feasible resolvent-type portfolio estimator is then defined bŷ Defined by Eqs.
(3)-(6), we have a set of different estimators of the GMVP weights. These will be used in a Monte Carlo simulation of the next section in order to compare the risk functions and the performances of estimators.
Monte Carlo study
To investigate the efficiency of GMVP estimators from a risk perspective, we conduct Monte Carlo experiments using three different data generating processes (DGP I, DGP II and DGP III). DGP I is based on a multivariate normal distribution with different covariance structures and zero mean vector, and DGP II is based on a multivariate skewed t distribution with mean vector equal to a zero vector. DGP III is based on a skewed distribution with nonzero mean vector. DGP I is specified as follows: where R t is a vector of p different assets returns in time period t, {R t } n t=1 is independent and identically distributed (IID), where n corresponds to the number of observations. The specification of will assign different values in the simulations. We will use a Toeplitz covariance structure given by and also a covariance matrix estimated from stock return data. For DGP II, we first define the distribution of the vector of returns as where λ ∈ R p and γ ∈ R p are parameter vectors, ∈ R p× p is positive definite and υ > 4. When γ = 0, this yields a p dimensional skewed t distribution. There exists a number of different multivariate distributions in the literature that all share the name multivariate skewed t distribution (Kotz and Nadarajah 2004). In this paper, we use the larger class of multivariate normal mixture distributions to get a skewed multivariate distribution which is referred to as a skewed multivariate t distribution (Demarta and McNeil 2005). This is achieved by setting where X t ∼ N p (0, ) and W −1 t ∼ InverseGamma (ν/2, ν/2) which is independent of X t . The first moments of R t are given by (For details refer to Appendix B).
We assume that all stock returns are skewed to the same extent which in turn is achieved by setting γ = β1 p . Further, it is also of interest to have the DGP centered at the zero vector and this is achieved by λ = βυ/(υ − 2)1 p , hence the DGP II is specified as Following the procedure of Holgersson and Mansoor (2013), DGP III is specified as follows: Let Z 0 ∼ χ 2 (1) , Q j ∼ χ 2 (1) , U j ∼ χ 2 (1) , j = 1, . . . , p, where all variables are mutually and individually independent. Then, each variable R it in R t is equal to R it = Z 0t + Q jt + U jt , and hence, each variable in R t has a χ 2 Tables 1 and 2. Finally, as performance measures we take the five risk functions ( 0 − 4 ), and for each estimator ŵ II ,ŵ III ,ŵ IV we divide its risk by the corresponding risk for w I to get their relative performance. Furthermore, for 4 we choose three different conditioned returns, specified in Table 3.
Results from Monte Carlo simulations
Based on the results from the Monte Carlo experiments displayed in Tables 4, 5, 6 and 7, the estimatorŵ IV performs well if c is larger than 0.1 for both DGP I with covariance structure from real data and for DGP II. But for DGP I with a Toeplitz covariance structure,ŵ IV performs well only for c close to one. On the other hand,
Risk
Conditioned on observation R t 4 ŵ j |R t single random draw from specified DGP and R t is kept fixed over all replicates 4 ŵ j |R t + l vector l has ones in its p/2 upper rows and zeros elsewhere Columns (1)-(5) show risk i (ŵ j ) of estimatorŵ j relative to i (ŵ I ), where i ∈ {0, 1, 2, 3, 4}, columns (6) and (7) show 4 ŵ j | R t + l relative to 4 ŵ I | R t + l and 4 ŵ j | 2R t relative to 4 ŵ I | 2R t , respectivelyŵ II : Frahm and Memmel (2010),ŵ III : Bodnar et al. (2018),ŵ IV : resolvent-type estimator estimatorŵ III performs best among the four estimators. This holds for all investigated values of c. Furthermore,ŵ II is a good estimator if c is not close to one, and its performance is close toŵ III . However, as c gets close to one,ŵ II is outperformed both byŵ III andŵ IV . If we examine the estimators performance under 4 (R t + l) and 4 (2R t ), both of which could reflect a shock in the financial market, the result for the estimators remains in the same internal ordering as indicated by 3 . Thus, with regard to the discussed results, the recommendation is that one should primarily considerŵ III because it performs well regardless of the value on c, unless c is very close to one, in which caseŵ IV is dominating. It should, however, be stressed that the above performance rankings are made only on basis of point estimations. While more general inferences such as interval estimation lies outside the scope of this paper, it should be mentioned that the estimators' performances in terms of, for example, coverage rates need not correspond to their point estimation efficiency.
Empirical study
The empirical evaluation of the investigated estimators of the weights in the GMVP is achieved through a moving window approach on two different data sets for which we use different sampling methods. In the first method (fixed sampling method), we simply apply the estimators on all available assets, and in the second approach (random sampling method), we repeatedly randomly pick a given number of assets and then evaluate the estimators performance by the one-period out-of-sample returns. The reason for applying a moving window is that the mean-variance portfolio theory was developed as a one-period model. Table 4 6.1 Fixed sample The evaluation procedure in the fixed sampling method is as follows: For each stock listed on the stock exchange, we take n observations starting at time point t − n and ending at time point t. We then calculate monthly returns, and based on these observations, each estimator presented in this paper is used to estimate the weights of the global minimum-variance portfolio. For each estimator, the return on the GMVP is calculated for the first out-of-sample observation, i.e., the observation at time period Table 4 t + 1 for each stock. We repeat this procedure, but the starting point is moved one step forward in time (starting at time point t − n + 1 and ending at time point t + 1). The procedure is repeated until 10 sample returns are generated. Thus, for each estimated portfolio weight vector, the one-period out-of-sample portfolio return is calculated as: where R t+1 is a p × 1 vector of stock excess returns observed in time period t + 1, w j is a p × 1 vector of estimated weights for the GMVP andŵ V corresponds to an equally weighted portfolio. The evaluation of each estimator of the GMVP weights is based on the portfolio risk measured by the standard deviations of the portfolio returns: In addition, we also calculate the out-of-sample Sharpe ratiô Example 1 Stocks listed on the Nasdaq stock exchange For this empirical application, 89 stocks with complete past values are selected from SP100 over the period 1997-04 to 2010-07 (159 monthly returns). Note that gross Fig. 1 Smoothing coefficients for estimatorsŵ II ,ŵ III applied to data from Nasdaq and Stockholm OMX stock exchange returns are used, i.e., the risk-free return is not subtracted. A moving window approach is employed with a length of 149 months, giving 10 out-of-sample returns.
Example 2 Stocks listed on the Stockholm stock exchange
In this empirical application, 112 stocks are selected out of 283 stocks listed on the Stockholm OMX stock exchange over the period 1997-01 to 2010-06 (161 monthly returns). The same procedure as described above is used with a moving window length of 151 months. The results of the empirical applications (Table 8) confirm what was already established in the Monte Carlo simulations of Sect. 5. That is, in the empirical application using data from the Nasdaq stock exchange in which c is around 0.6, the performances ofσ portfolio, j forŵ I ,ŵ II ,ŵ III andŵ IV are relatively close to each other. However, for the Stockholm stock exchange in which c is around 0.74, we find that bothŵ III and w IV yield portfolios with much lowerσ portfolio, j compared toŵ I andŵ II . In both settings,ŵ I is outperformed by all other estimators. On the other hand, if we shift measure from out-of-sample standard deviation to the out-of-sample Sharpe ratio, the performance is somewhat reversed in thatŵ II performs better thanŵ IV and the equally weighted portfolio,ŵ V , is then outperforming all investigated estimators. To get a better understanding of the relative behavior of the two Stein-type estimators, i.e.,ŵ II andŵ III , their smoothing coefficients are displayed in Fig. 1. The values of the smoothing coefficients are approximately the same, but the estimatorŵ II tends to weight heavier toward the traditional estimator (ŵ I ), while the estimatorŵ III puts less weight on the traditional estimator.
Random samples
The evaluation procedure in the random sampling method is similar to the fixed sampling method, with the difference that we are now randomly selecting without replacement p = 20, 50, 80 stocks and apply the moving window approach. This procedure is then repeated 100 times which results in 100 time-series of 10 one-period out-of-sample returns of the GMVP for each estimator. Note that c is kept constant when shifting from the different portfolio sizes by adjusting n. Based on Eqs. (7) and (8) In addition to the above we also calculate the mean out-of-sample Sharpe ratio as
Example 3 Stocks listed on the Nasdaq stock exchange (random sampling method)
For this empirical application p = 20, 50, 80 stock are randomly selected out of 89 stocks with complete past values from SP100 over the period 1997-04 to 2010-07 (159 monthly returns). A moving window approach is employed with c = 0.537 (n = 159, 103, 47), giving 10 out-of-sample returns. This procedure is repeated 100 times.
Example 4 Stocks listed on the Stockholm stock exchange (random sampling method) In this empirical application p = 20, 50, 80 stocks are randomly selected out of 112 stocks used in Example 2. The same procedure as described before is used where (7) and (8) a moving window is employed with c = 0.53 (n = 161, 104, 48). The results of the empirical applications based on random sampling method (Tables 9 and 10) also confirm what was established in the empirical applications based on fixed sampling method. Namely, using data from the Nasdaq stock exchange in which c is around 0.5, the performances ofσ portfolio, j forŵ I ,ŵ II ,ŵ III andŵ IV are relatively close to each other butŵ II ,ŵ III andŵ IV outperformŵ I andŵ V . Shifting the evaluation measure to the outof-sample Sharpe ratio yields a different picture, since nowŵ V is outperforming the other estimators whileŵ I yields consistently the lowest result. But for the Stockholm stock exchange in which c is also around 0.5, we find thatŵ III yield portfolios with lowerσ portfolio, j compared toŵ IV andŵ II , and all three estimators are outperforming the regular estimatorŵ I . Hence, in both settings,ŵ I is outperformed byŵ II ,ŵ III and w IV . On the other hand, if we shift measure from out-of-sample standard deviation to out-of-sample Sharpe ratio, the estimatorŵ V is still outperforming all the other estimators and the regular estimatorŵ I yields the lowest Sharpe ratio. It is also found that amongŵ II ,ŵ III andŵ IV , the performance is similar, butŵ III has the highest Sharpe ratio.
Summary
The global minimum-variance portfolio (GMVP) solution developed by Markowitz is considered to be a fundamental concept in portfolio theory. The early researchers investigating this matter usually applied a simple plug-in estimator for estimating the weights and paid very little attention to the distributional property of the estimator. Table 9 More recently, the full distribution of the standard estimator has been derived (Okhrin and Schmid 2006), and it is now recognized that the standard estimator offers a poor approximation of the true GMVP. Within a relatively short period of time, a variety of improvements to the standard estimator have been developed. Naturally, each of these improvements has its pros and cons, but there does not seem to be a consensus about how to evaluate the performance, or efficiency, of GMVP estimators. Perhaps this is because there are, in fact, several possible measures one can use for assessing the properties of a portfolio estimator. In this paper, we discuss a number of different risk functions for the weight estimator. These include: risk functions of covariance matrix estimators, forecast mean square errors, directional risks and conditional risks. The risk functions are labeled with an index determined by the degree to which they are specialized for portfolio estimation: 2 is generally preferred over 1 which is preferred over 0 etc. However, this ordering does not mean that 2 is uniformly better than 1 and 0 . For example, 4 does not exist in closed form for the regularized portfolio estimator used in this paper. Hence,ŵ I V has to be optimized through 0 rather than 4 . In other words, one would typically use 4 or 3 as a tool for deriving an estimator of w GMVP , but there are settings where risk functions of lower rank-order must be used because of their simpler functional form. A selection of recent GMVP estimators is used in a Monte Carlo simulation for purposes of: (i) comparing different risk measures for a given estimator and (ii) comparing different estimators for a given risk. Moreover, a new estimator, based on a resolvent estimator, is proposed. The analysis focuses on asset data where the number of observations (n) is comparable to the number of assets ( p). This case is important because investors might be reluctant to use long data sets as the economy is not expected to be stable over long time periods, and hence, investors are encountering a high-dimensional setting. The simulations are complemented by an analysis of two real data sets: One data set is drawn from the Nasdaq stock exchange, and the other one employs Stockholm OMX data. The general finding of the paper is that no estimator dominates uniformly over all risk functions. We can, however, establish that there are dominating tendencies, in the sense that some estimators tend to perform better with respect to most risk aspects. A Stein-type estimator developed by Frahm and Memmel (2010) is found to perform well in cases when n p, whereas another Stein-type estimator proposed by Bodnar et al. (2018) dominates when n is proportional to p. A resolvent-type estimator is found to perform surprisingly well over a large number of settings. While this paper is restricted to properties of point estimators, future research could involve more general inferential aspects, such as hypotheses testing.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
B Appendix
Let the random vector R t be equal to R t = λ + W −1 t γ + W −0.5 t X t , where X t ∼ N p (0, ) and W −1 t ∼ InverseGamma (ν/2, ν/2) which is independent of X t , and let γ = β1 p . Then, the moment of this distribution is given by 2) The first part of the r.h.s in (B.1) is equal to The second part of the r.h.s in (B.2) is equal to Cov 11 . (B.5) | 7,827.4 | 2019-02-04T00:00:00.000 | [
"Economics"
] |
Assessment Of Current State And Impact Of REDD+ On Livelihood Of Local People In Rungwe District, Tanzania
A climate change mitigation mechanism, Reducing Emission from Deforestation and forest Degradation (REDD+) is anticipated to affect livelihoods of forest dependent communities. This study was conducted to establish this impact on livelihoods of local people in Rungwe District, Tanzania. Data were collected through questionnaires, group discussions, and interviews from three villages: Syukula, Ilolo and Kibisi. Results showed that households’ annual income and crop production are higher after REDD+ implementation. The older respondents (>40 years old) considered REDD+ to be important for forest management compared to younger generation (<40 years old) (p<0.05). Similarly, the older respondents considered wood forest products such as fuelwood, charcoal, timber and poles to be reduced. There was a widespread awareness about REDD+’s ob jectives among household respondents. Therefore, REDD+ proponents should implement alternative sources of livelihoods to help local people improve their income and reduce dependence on the forest resources and eventually decrease deforestation and forest degradation.
Introduction
REDD+ is an international mechanism of the United Nations Framework Convention on Climate Change (UNFCCC) which targets to reduce emissions from deforestation and forest degradation (DD), foster rural development and increase climate resilience in developing countries [1,2]. It functions by creating financial incentives to encourage developing countries to reduce carbon emissions by conserving their forests, enhancing carbon stocks and improve livelihoods of local communities [3][4][5]. It is the world's largest payment for ecosystem service which gives carbon stored in the forest a financial value [4]. Developing countries are continuing with implementation of REDD+ activities since 2008 and those that effectively protect their forests and enhancing carbon stocks receive results-based compensation of carbon payments built on the measurement and reporting of emissions. The measurement of carbon, reporting and verification (MRV) is subjected to domestic and international MRV procedures with respect to guidelines developed under the convention [6][7][8]. In order REDD+ project to be put in action, during the 15th conference of the parties (COP15), Copenhagen Accord, parties agreed on the need of resources and financial mobilisation from developed countries to support REDD+ initiatives [2][3][4]. The developing countries in COP17 made voluntary action to prepare and implement National Appropriate Mitigation Actions (NAMA), the policies and actions that developing countries agree to take to reduce their GHG emissions under the Cancun agreements [5]. COPs 16 and 17 encouraged developing countries to stop reduce and reverse forest cover and carbon loss by reducing human pressure on forests by means of addressing the drivers of deforestation and issues related to land tenure, forest governance, gender and equal participation of stakeholders [5][6][7]. The REDD+ funds come from private, public, bilateral, and multilateral sources including Big International NGOs (BINGOs) [6]. REDD+ payment is either market-based or nonmarket based [7]. Under the market-based system, a developing country trades its generated carbon credits from REDD+ on the international market, whereas a non-market based approach involves payments by developed to developing countries under a REDD+ project [7]. Climate change is one of the biggest global challenges to sustainable livelihoods and economic development which is driven by human pressure on natural resources [6,7]. These humanbased activities cause DD which, amongst others, lead to high levels of greenhouse gas (GHG) emissions [8][9][10]. Africa has the highest net loss of forest following South America. Its net loss is about 3.4 million hectares per year [11]. Tanzania's deforestation rate is estimated between 130,000 and 500,000 ha per year [9,11]. This is due to agricultural expansion, livestock grazing, wildfires, over-exploitation and unsustainable utilization of wood resources, mining, and other human activities mostly in the general or common lands [3,12]. Despite the challenges the Tanzania government is facing to conserve and manage its forests, yet is trying to protect its forests by implementing REDD+ policy as a mechanism to stop global climate change [13]. However, it has to guarantee sustainable development of local people whose livelihood depends on forest resources and agricultural practices [14]. This is because the REDD+ mechanism reduces communities' access to forest resources, such as extraction of timber, poles and fuelwood, charcoal production and farming practices, mostly slush and burn agriculture [15].Though there are funds put in REDD+ projects as incentives to support livelihoods of local people, it is likely that there would be an impact to local communities who perceive the forests to be the sole source of their income and sustainable development [8,16]. As Brown [17] points out, the present and future livelihoods of more than 1.6 billion forest-dependent people are potentially at stake under REDD+ if proponents get it wrong. This study was therefore conducted to establish the existing impact of REDD+ projects on livelihood of local people in Rungwe district, Tanzania, by comparing households' income and crops production before and after REDD+ implementation using three study villages: Syukula, Ilolo and Kibisi. The following were the specific objectives:(i) to assess the impact of REDD+ 289 IJSTR©2015 www.ijstr.org project on annual income and crop production of households, fuelwood, charcoal, and building materials after REDD+ implementation in the study villages and(ii) to assess households' perception and awareness towards REDD+, and willingness to support REDD+ mechanism in the study villages. Although the REDD+ mechanism is new and many projects have only been active since a few years, this study can serve as the basis for future research to validate the impact of REDD+ projects on livelihoods of local people. As suggested by Angelsen et al. [8] and Silayo et al. [18] that since REDD+ policy is a continuous process, there is a need to assess the strengths and weaknesses of the global REDD+ mechanism to ensure its sustainability.
The description of the study area
This study was carried out between July and August 2013 in Rungwe district, Mbeya regional. The area lies between 8˚30' and 9˚30'S and 33˚ and 34˚E in south-west Tanzania [19]. The district has a total area of 1,231.86 km 2 , and 339,157 people; the climate is tropical with dry and wet seasons with up to 3,000 mm of rainfall a year. The mean annual temperature averages range between 16˚C in the highlands and 25˚C in the lowland areas [20]. Three study villages: Syukula, Kibisi and Ilolo ( Figure 1) were selected based on the following criteria: their vicinity to project area, the Rungwe Forest Nature Reserve (RFNR); their dependency on forest resources and crop cultivation for livelihood is high; and their participation in REDD+ mechanism. REDD+ project activities are implemented by the NGO called Wildlife Conservation Society
Research methodology
About 10% of the total number of households in each village were surveyed using questionnaires (n=180). The sampling method was adapted from WCS [19] because it was used in similar villages to assess local people's REDD+ readyness.
The sampling methods are also comparable to that described in St-Laurent et al. [10], Silayo et al. [18], Majule [20], and Majule and Lema [21]. Three groups one in each village participated in focus group discussion (n = 39). Village leaders and staffs from WCS were interviewed. Other facts were collected through literature reviews related to REDD+ mechanism and forests; and field observations.
Data analysis
Households' perception about the status of the RFNR, importance of REDD+, and availability of forest wood products, and willingness to support REDD+ activities were analysed using the χ2-test. Quantitative data analysis was performed using STATISTICA [22] and were tested for normality using kolmogorov-smirnov. Data transformations were performed using Box-Cox transformation. Crop production and income data used before REDD+ implementation pertained to the period between 2006 and 2009 while those after REDD+ implementation were between 2010 and 2013. These periods were selected because REDD+ project in the study villages started in 2010 which makes four years period until this study was carried out. To make unbiased comparison, a period of exactly four years before REDD+ was compared to that after REDD+ implementation. The annual crop production for seven crops (sunflower, cassava, tea, maize, bananas, potatoes, and beans) was compared for the two periods. Crops were selected because they contribute to the households' annual income. Difference in households' income, and crop production per year before and after REDD+ implementation was tested using Wilcoxon Matched Pairs test, and paired two-sample t-test respectively [23]. Both secondary and primary data were used for analysis. Household respondents were grouped into older (>40 years old) and younger (<40 years old) people.
The impact of REDD+ project on households' income and crop production, fuelwoods, charcoal and building materials
Results showed that households' annual income ( Figure 2) and average crop production ( Figure 3) are higher after implementation of REDD+ activities. The households (82% in Kibisi, 75% in Ilolo, and 93% in Syukula) rated their access to and use of forest products such as extraction of fuelwood, logging, poles, timber, and charcoal production as much reduced following implementation of REDD+ ( Figure 4).The rating differed significantly between older and younger people (χ2 = 5.227; df = 1; p < 0.05) with a large percentage (59.3%) of older people considering the access and use of forest wood products as much reduced or reduced. 290 IJSTR©2015 www.ijstr.org
Awareness, perception and willingness or readiness of household respondents for REDD+
Household respondents showed a widespread awareness about the objectives of REDD+ when were questioned about the meaning of REDD+ and its objectives ( Figure 5).Many household respondents were willing to support REDD+ activities ( Figure 6). The willingness did not differ between older and younger people (χ2 = 0.290; df = 1; p > 0.05). Fig.7 shows perception of households' respondent on the importance of REDD+ for conservation and management of forest reserve in the study villages. The older respondents considered the REDD+ mechanism to be very important or important for forest conservation compared to younger generation (χ2 = 5.644; df = 1; p < 0.05).
The impact of REDD+ project on households' income, crop production, fuelwood, charcoal and building materials
In most cases REDD+ payments and compensations may not lead to substantial increases in local people's income. Therefore many REDD+ projects have responded by developing programs to create alternative livelihoods and increase incomes through better farming practices, beekeeping, improved stoves, and other income generating activities [27]. In the study villages, the average households' annual income before REDD+ project implementation is lower than after REDD+ implementation ( Figure 2). This difference can be contributed to REDD+ incentives such as woodlots, honey bee schemes and use of efficient energy stoves. These incentives might have lessened the consumption of fuelwood and charcoal, and therefore buffering the effect of REDD+ on households' income. The beekeeping programme under the REDD+ project subsidizes the income of households in the study villages. For instance, in 2012 more than 55 litres of pure honey were harvested from 120 hives and sold at 7,000 Tsh/litre [19]. This perhaps contributed to households' income. According to interviewed WCS staffs the WCS-REDD+ initiatives did not grab land from local residents to be included in the REDD+ project. Therefore, people kept their land size while increasing crop production with support of fertilizer, improved seeds, pesticides supply, and better agricultural techniques from the WCS and Rungwe district council in the favour of REDD+ projects [19]. Because of the incentives in agricultural sector, mean annual crop production is higher after REDD+ implementation (Figure 3). Therefore, having enough agricultural land and support from WCS possibly improved their crop production and income. Since the households' income depends on crop production, thus, if the REDD+ activities affects the crop production, their income will be affected too. Therefore, this study shows that currently the REDD+ project in the study villages has little negative effect on the households' income and crop production. However, for the fuelwood, charcoal production, and building materials such as timber and poles, REDD+ mechanism has decreased their availability (Figure 4).This perception is strong among the older people as compared to the younger ones. This difference in perception is attributed to accumulated knowledge and experience about the RFNR and availability of the wood resources in the past. Moreover, households' income is independent from REDD+ payment. No household respondent said to have been compensated since the REDD+ project began, despite their participation in REDD+ activities, such as planting trees in the project area. The absence of payment may be due to poor land tenure and management system in three villages which is Joint Forest Management (JFM). Under JFM local communities have no right to compensation [8,28,29]. The carbon payment and compensation in study villages is unclear and people are unaware about the payment. For instance, Silayo et al. [18] claim that the lack of compensation is the focus of the WCS; he also reports that 95% of the respondents had no user rights over the resources in the project area in the Rungwe district. Hence, REDD+ compensation appears to play a relatively weak role to improve local people's income in the study villages. TFCG [30] the NGO implementing the REDD+ projects in Kilosa and Lindi rural districts in Tanzania, and Brown [17] believe that individual payments are the best choice for REDD+ because the DD is caused by rural community members who clear forests for small scale agriculture, timber, firewood or charcoal. Therefore paying them could make a contribution to reducing deforestation and cash transfers to poorest people in the country [24]. The TFCG paid the participating households, and acknowledged that the payment increased community trust and participation in project activities; it also contributed to improve households' livelihoods, and some started a livestock keeping and small businesses [30]. Also, a study to investigate the system of payment in Brazil, Mexico and Namibia as described in TFCG [24] found that giving little amount of money to rural poor households helped them to start new livelihood, also improved child health and school attendance, eventually reduced illegal exploitation of forest resources. However, in Colombia, for example, the Choco-Darien Conservation Corridor REDD+ project has focused on collective benefits rather than payments to households, and has provided capacity building and new employment opportunities for an Afro-Colombian community [24]. Henceforth, REDD+ payments can be imperative source of funds that some rural members use to improve their farm productivity or allowing others to switch to other economic activities thereby reducing dependence on forest resources [3,17,25,26]. 292 IJSTR©2015 www.ijstr.org
Awareness, perception and willingness or readiness of household respondents for REDD+
The widespread awareness of REDD+ ( Figure 5) explains an effort made by WCS in raising REDD+ awareness in the study villages. More than 50% of respondents were supporting REDD+ project because they understood its importance ( Figure 6). Understanding the willingness of local people towards REDD+ mechanism is important for REDD+ success [6]. Likewise, understanding the perceptions of stakeholders towards REDD+ is important for successful REDD+ implementation. For instance, St-Laurent et al. [10] state that knowing the perceptions of the civil society and the local people in Panama was vital to learn the possibility of successfully implementing REDD+ mechanism with colonist farmers. In this study, majority of the households (>70%) perceive the REDD+ to be very important for the management of forest and reduction of carbon emissions (Figure 7).This perception differed between younger and older people. A large number of older people (n=89) perceives the REDD+ mechanism to be very important or important for the management of the RFNR. For instance, they stated that illegal harvesting of woods, charcoal production and other causes of DD has declined. They further stated that the RFNR is now in a good condition and less degraded compared to the past years when REDD+ mechanism was not yet on the ground. Additionally, some of the interviewed households see the potential of forests to store carbon as important for their health and livelihoods. Mayers et al. [31] claim that understanding the local communities' REDD+ willingness is essential for REDD+ sustainability. In this study, the willingness of local people to support REDD+ mechanism and its activities in the study villages was positive and did not differ between younger and older people, and between females and males. This is because the knowledge of the local people about forest loss, climate change and its consequences on the environment and livelihood appears to be good. Therefore, their willingness may enhance the sustainability of REDD+ projects in the study villages. Nevertheless, more work is required to motivate people's willingness to support REDD+ projects including their activities in the study villages. This study reveals that though local people participate in REDD+ activities, they see little hopes for positive social and economic benefits from the REDD+ initiatives because there has been neither a payment nor compensations. Despite the scant hope for positive social and economic benefits from the REDD+ initiatives, as experienced by households, they would like to see more effort be put on forest management, ensuring transparency, equal benefit sharing and less corruption.
Conclusions and recommendations
Forest dependent communities are potentially at stake under REDD+ projects as their livelihoods are threatened with REDD+ mechanism. It is important for REDD+ proponents to identify these problems and implement alternative sources of livelihood and create employment opportunities to help local people improve their income and reduce DD. Besides the potential risks, REDD+ also has the potential to deliver significant social and environmental benefits (such as biodiversity conservation and poverty reduction) in addition to reducing carbon emissions. This study recommends that in order to achieve REDD+ objectives and local development needs, important investments must be made in agriculture sector which is a main source of income for many local people. It further recommends that REDD+ payments to forest dependent communities be established with an equitable level of certainty and transparency so that local communities and indigenous people can participate fully in REDD+ activities. | 4,106.2 | 2015-09-30T00:00:00.000 | [
"Economics"
] |
Assessing the Implementation of Digital Innovations in Response to the COVID-19 Pandemic to Address Key Public Health Functions: Scoping Review of Academic and Nonacademic Literature
Background Digital technologies have been central to efforts to respond to the COVID-19 pandemic. In this context, a range of literature has reported on developments regarding the implementation of new digital technologies for COVID-19–related surveillance, prevention, and control. Objective In this study, scoping reviews of academic and nonacademic literature were undertaken to obtain an overview of the evidence regarding digital innovations implemented to address key public health functions in the context of the COVID-19 pandemic. This study aimed to expand on the work of existing reviews by drawing on additional data sources (including nonacademic sources) by considering literature published over a longer time frame and analyzing data in terms of the number of unique digital innovations. Methods We conducted a scoping review of the academic literature published between January 1, 2020, and September 15, 2020, supplemented by a further scoping review of selected nonacademic literature published between January 1, 2020, and October 13, 2020. Both reviews followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) approach. Results A total of 226 academic articles and 406 nonacademic articles were included. The included articles provided evidence of 561 (academic literature) and 497 (nonacademic literature) unique digital innovations. The most common implementation settings for digital innovations were the United States, China, India, and the United Kingdom. Technologies most commonly used by digital innovations were those belonging to the high-level technology group of integrated and ubiquitous fixed and mobile networks. The key public health functions most commonly addressed by digital innovations were communication and collaboration and surveillance and monitoring. Conclusions Digital innovations implemented in response to the COVID-19 pandemic have been wide ranging in terms of their implementation settings, the digital technologies used, and the public health functions addressed. However, evidence gathered through this study also points to a range of barriers that have affected the successful implementation of digital technologies for public health functions. It is also evident that many digital innovations implemented in response to the COVID-19 pandemic are yet to be formally evaluated or assessed.
Background
Digital technologies, such as artificial intelligence (AI), robotics, and wearables, have been widely used in worldwide efforts to respond to the COVID-19 pandemic. In this context, a range of studies has reported on developments regarding the implementation of new digital technologies for COVID-19-related surveillance, prevention, and control. To consolidate this literature, several reviews have been undertaken [1][2][3][4]. Broadly, the aim of these reviews has been to describe the characteristics of digital technologies that have been reported on within the early scientific literature. Golinelli et al [2] searched MEDLINE and medRxiv to identify the relevant literature on the use of digital technologies in health care during the COVID-19 pandemic. The included papers were then analyzed in terms of article characteristics and the type of technology and patient needs addressed. A review conducted by Budd et al [1] provided a qualitative overview of the breadth of digital innovations introduced as part of the global public health response to the COVID-19 pandemic, the types of public health activities they addressed, and the key potential barriers to their implementation. Vargo et al [4] further reviewed digital technology use during the COVID-19 pandemic based on searches of 4 databases: Web of Science, Scopus, PubMed, and Google Scholar. The review synthesized the evidence from included papers in relation to 4 key areas of technologies, users, activities, and effects within the spheres of health care, education, work, and daily life [4]. More recently, Mbunge et al [3] undertook a critical review of emerging technologies for tackling the COVID-19 pandemic, focusing on prevention, surveillance, and containment, based on searches of the following sources: Google Scholar, Scopus, ScienceDirect, PubMed, IEEE Xplore Digital Library, ACM Digital Library, Wiley Library, and SpringerLink [3]. Although providing valuable overviews of the digital response to the COVID-19 pandemic, existing reviews have also been limited by a focus on academic sources, thereby potentially missing developments reported in wider nonacademic literature while also tending to focus on the early period of the pandemic.
Study Aims
In this study, we present the findings of a further scoping review on the implementation of digital technologies in response to the COVID-19 pandemic. The scoping review expands on the work of existing reviews by drawing on additional data sources while also considering literature published over a longer time frame (ie, January to September 2020). Although focusing on academic literature, the scoping review also goes beyond existing reviews by presenting evidence from a complementary review of nonacademic sources, including web-based technology-related news sources and news feeds (covering news articles, press releases, and blogs). The incorporation of wider nonacademic sources into this review allows for the consideration of technological developments in the private or public sector, which are not necessarily oriented toward research publications, thus helping to capture more up-to-date information on the implementation of digital technologies in response to the COVID-19 pandemic.
This scoping review goes beyond existing reviews by using the concept of digital innovations. By digital innovations, we refer to the application ≥1 digital technology to address COVID-19-related key public health functions within a single application in a specific context. An example of a digital innovation captured by this scoping review is Austria's contact-tracing app Stopp Corona. The app combines 2 digital technologies of interest in this review-smartphone apps and Bluetooth-into a single digital innovation [5,6]. By analyzing data regarding the number of implemented digital innovations and their characteristics, this study goes another step beyond existing reviews, all of which have analyzed digital technology trends by considering the number of papers reporting on different technology types and functions [1][2][3][4] The specific research questions addressed by this scoping review were as follows:
Overview
The scoping review followed the approach specified in PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) checklist [7]. A completed PRISMA-ScR checklist for the review is presented in Multimedia Appendix 1. A study protocol was developed presenting key elements of the proposed approach and methods to be used. The approach comprised 2 parallel methodological approaches-one for the review of academic literature and the other for the review of nonacademic literature.
Search Strategy
For the academic literature search, we developed and ran a search strategy in 2 bibliographic databases-EMBASE and Scopus-using the same strategy for both databases. terms used in this study are presented in Multimedia Appendix 2. The search was limited to articles published between January 1 and September 15, 2020 (the date of the search), and included English-language and non-English-language articles. The search strategy drew on a strategy developed for a previous scoping review on digital technologies for infectious disease surveillance, prevention, and control, which was peer-reviewed using the Peer-Review of Electronic Search Strategies approach [8]. In addition to the database searches, we also conducted targeted searches using Google Scholar to identify a small number of additional academic articles where the results of the database search strategies revealed evidence gaps.
For the nonacademic literature search, we ran a search strategy using the news aggregation software Feedly [9]. To conduct the Feedly search, we identified relevant information sources covering digital technological innovation and health innovation based on expert consultation and internal piloting. The search strategy applied to these information sources was broadly aligned with that used for the academic literature search. However, because of the limitations of the Feedly search function, the search string used was shorter and more generic than that used for the academic search.
Study Selection
Articles captured by both searches were screened against defined inclusion and exclusion criteria to determine their eligibility for the study. Multimedia Appendix 3 presents the used inclusion and exclusion criteria. In the review of academic literature, non-English articles were included in the study selection but only if an English-language abstract or summary was available. Screening was undertaken by 2 study teams, each comprising 2 researchers-one study team for the review of academic literature and one study team for the review of nonacademic literature. Before commencing the study selection, both study teams engaged in pilot screening exercises for 100 articles to ensure consistency in the application of the eligibility criteria.
The 2 reviewers discussed the areas of uncertainty or disagreement until full agreement on inclusion or exclusion was reached. To further ensure consistency across the study teams, we held weekly cross-project meetings. During these meetings, any articles for which a reviewer was unsure were marked and discussed with the other study team to determine inclusion or exclusion. A shared log of the inclusion and exclusion decisions was maintained across the 2 study teams.
Data Extraction
We extracted data from the included articles using Microsoft Excel extraction templates: one template for the review of academic literature and one for the review of nonacademic literature. Both extraction templates included columns to capture information relating to the core research questions regarding the types and nature of implemented digital innovations, as well as broader information regarding the article type and identified barriers to implementing digital innovations in the discussed countries and regions. Where possible, drop-down menus were used to limit the range of responses that could be submitted, thereby facilitating data filtering and analysis. To ensure a consistent extraction approach, the 2 project teams conducted pilot extraction exercises using a small number of articles. The used extraction templates and drop-down menus are presented in Multimedia Appendix 4.
Data Analysis
To analyze the extracted data, we used the software package R. Descriptive quantitative analysis focused on statistical and graphical summaries for each column of data captured using drop-down menus in the extraction template, together with relevant cross-analyses. Data extracted on barriers to the implementation of digital innovations were analyzed qualitatively.
High-Level Technology Groups and Specific Digital Technologies
In extracting data on digital innovations, we coded data on the specific digital technologies that have been used within these innovations, as well as the technology groups to which these technologies belong. The used coding approach drew on an earlier scoping review of the use of digital technologies for the prevention, surveillance, and control of infectious diseases. In this study, specific technologies identified in the literature were clustered into high-level technology groups of similar or conceptually related digital technologies [10]. For this study, definitions for each specific digital technology and each high-level technology group were established using the European Commission's Digital Single Market glossary, supplemented, where necessary, by definitions from relevant academic literature [11]. The coding approach is presented in Textbox 1.
Key Public Health Functions
We also coded each digital innovation as fulfilling ≥1 of the following seven key public health functions: (1) screening and diagnostics, (2) surveillance and monitoring, (3) contact tracing, (4) forecasting, (5) signal or outbreak detection and validation, (6) pandemic response, and (7) communication and collaboration. The use of these public health key functions followed a mapping of the European Centre for Disease Prevention and Control's priorities against the 10 essential public health operations of the World Health Organization's Regional Office for Europe [12] and the US Center for Disease Control's 10 essential public health services [13]. On the basis of this mapping exercise, the public health key functions used in this study were also refined to suit the COVID-19 context. For example, to better reflect the diverse range of activities undertaken to ensure safe access to and management of essential resources during the COVID-19 pandemic, including at the population level, pandemic response was included as a key public health function. Similarly, to reflect its centrality in response to the pandemic, contact tracing was also included as a distinct key public health function. Textbox 2 presents the high-level definitions of these key public health functions for the purposes of this study. The key public health functions used in this study were neither exhaustive nor definitive. For example, the used functions do not cover the application of emerging digital technologies to the development of treatments or vaccines. Other studies may adopt alternative approaches to identifying and defining key public health functions.
As with the classification of technologies, our data extraction template included columns to record instances in which a digital innovation addressed >1 key public health function. For example, digital innovations performing surveillance and monitoring functions and signal or outbreak detection and validation were coded with both these public health functions. The coding of key public health functions was based on data extracted from the academic or nonacademic sources being reviewed. The emphasis within the article guided the assessment of how the codes were applied. The collation of data from multiple sources on the same innovations allowed us to capture where digital innovations addressed >2 key public health functions.
Textbox 2. Key public health functions and definitions.
Screening and diagnostics
• Identifying (including self-identifying) COVID-19 symptoms and the presence of SARS-CoV-2 in individuals Surveillance and monitoring • Systematic collection and analysis of relevant data such as SARS-CoV-2 infection rates and excess deaths along with ongoing monitoring of COVID-19 symptoms or adherence to COVID-19 restrictions at the individual and population levels
Contact tracing
• Identifying and alerting people who have been in contact with someone diagnosed with COVID-19 and who are therefore at high risk of having been exposed to SARS-CoV-2, so that they can take appropriate and sometimes mandated action (eg, self-isolating)
Forecasting
• Predicting COVID-19 infections or health outcomes at the individual and population levels
Signal or outbreak detection and validation
•
Detecting and validating outbreaks of COVID-19
Pandemic response • Responses to the pandemic that have helped widen safe access to and management of essential resources required by individuals and populations for COVID-19 prevention and response
Communication and collaboration
• Communication involves informing, educating, and empowering individuals and populations about COVID-19, and collaboration refers to working together across disciplines or sectors to share knowledge and improve the collective COVID-19 response
Search Results
The search of academic databases returned a total of 5018 articles, of which 1408 (28.06%) were duplicates. Title and abstract screening of the remaining 3610 articles resulted in 3309 (91.66%) articles being excluded, with 301 (8.34%) deemed eligible for full-text review. Through targeted searches, we also included an additional 6 articles. Of these 307 articles, 81 (26.4%) were excluded during data extraction and analysis, resulting in 226 (73.6%) unique articles being included in the review of the academic literature.
For the review of nonacademic literature, the Feedly-based literature search returned a total of 4537 articles, of which 144 (3.17%) were duplicates. Title screening of the remaining 4393 articles resulted in 3904 (88.87%) articles being excluded, with 489 (11.13%) included for full-text review. We also included an additional 23 articles through targeted searching (n=10, 43% articles) and web scraping (n=13, 57% articles). Of these 512 articles, 107 (20.9%) were excluded during the data extraction and analysis. This resulted in 79.2% (406/512) unique nonacademic articles being included. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagrams for the 2 scoping reviews are presented in Figure 1.
Number of Digital Innovations Implemented in Response to the COVID-19 Pandemic
Through our review of the academic literature, we identified 561 instances of the implementation of digital innovations to tackle the COVID-19 pandemic. Our review of the nonacademic literature identified 497 digital innovations. The 2 reviews were conducted independently. As such, the digital innovations identified by the review of nonacademic literature are not necessarily presented as unique from those identified by the review of academic literature. Although there is likely to be some crossover between the innovations captured by the 2 reviews, it is also the case that the more experimental review of nonacademic literature has captured some innovations, in particular those developed by private companies, that have not been captured within the review of academic literature, particularly as developments occurred rapidly during the first few months of the pandemic in 2020.
Geographic Context of Digital Innovations Implemented in Response to the COVID-19 Pandemic
The identified digital innovations were analyzed by the geographical context in which they were implemented at both the regional and country levels. Table 3).
Types of Digital Innovations Implemented in Response to the COVID-19 Pandemic
The digital innovations were also analyzed in terms of the types of digital technology they incorporated. As explained previously, we analyzed innovations in terms of the specific digital technologies they incorporated and the high-level technology groups to which these specific technologies belonged.
In the academic literature, the most commonly implemented high-level technology group was integrated and ubiquitous fixed and mobile networks (185/561, 33% of digital innovations incorporated at least one digital technology falling under this high-level group; Figure 2,
Key Public Health Functions Addressed by Digital Innovations Implemented in Response to the COVID-19 Pandemic
The key public health function addressed by the highest number of digital innovations in the academic literature was communication and collaboration (264/561, 47.1% of digital innovations addressed this public health function; Table 4).
Other key public health functions addressed by a large number of digital innovations were surveillance and monitoring (199/561, 35.5% of digital innovations), pandemic response (126/561, 22.5% of digital innovations), and screening and diagnostics (103/561, 18.4% of digital innovations). Signal or outbreak detection and validation a For both the academic and nonacademic review, the number of digital innovations add up to more than the overall sample size (N) and the percentages add up to more than 100. This is because each digital innovation could be assigned more than one key public health function in our review.
The public health function addressed most commonly by digital innovations in the nonacademic literature was surveillance and monitoring (197/497, 39.6% of digital innovations), followed by pandemic response (169/497, 34% of digital innovations) and screening and diagnostics (167/497, 33.6% of digital innovations; Table 4). Compared with the academic literature, the function of communication and collaboration was addressed by a smaller number of innovations (130/497, 26.2% of digital innovations).
Cross-analysis
For each country in which digital innovations were implemented, we cross-analyzed the number of high-level technology groups with which implemented innovations were associated. According to the academic literature, China had implemented digital innovations across the largest number of high-level technology groups (digital innovations were implemented with technologies from 15/17, 88% of the high-level technology groups). This was followed by the United States (14/17, 82% of high-level technology groups) and the United Kingdom (13/17, 76% of high-level technology groups). The EU or EEA countries with digital innovations covering the highest number of technology groups were France (11/17, 65% of high-level technology groups) and Italy (9/17, 53% of high-level technology groups). In the nonacademic literature review, the countries implementing technologies covering the largest number of high-level technology groups were the United States (15/17, 88% of high-level technology groups), the United Kingdom (12/17, 71% of high-level technology groups), China (10/17, 59% of high-level technology groups), and Italy (10/17, 59% of high-level technology groups). Tables presenting a further analysis of the implementation setting and high-level technology groups are presented in Multimedia Appendix 5.
For each key public health function, we cross-analyzed the high-level technology groups with which digital innovations were most associated. For communication and collaboration (the public health function addressed most frequently by digital innovations in the academic literature), most digital innovations incorporated technologies within the following high-level technology groups: integrated and ubiquitous fixed and mobile networks (88/561, 15.7% of digital innovations), social media platforms (56/561, 10% of digital innovations) data analytics(including big data) (42/561, 7.5% of digital innovations), and web-based tools and platforms (38/561, 6.8% of digital innovations). For surveillance and monitoring (the public health function addressed most frequently by digital innovations in the nonacademic literature), most digital innovations incorporated technologies within the following high-level technology groups: integrated and ubiquitous fixed and mobile networks (60/497, 12.1% of digital innovations), web-based tools and platforms (44/497, 8.9% of digital innovations), and wearables (including ingestibles (28/497, 5.6% of digital innovations). Tables presenting a further analysis of key public health functions and high-level technology groups are presented in Multimedia Appendix 5.
Summary of Key Findings
Following the COVID-19 pandemic, actors worldwide turned to digital technologies to assist the public health response. This study suggests that the most common implementation settings for digital innovations implemented to tackle the COVID-19 pandemic were the United States, the United Kingdom, China, and India. Meanwhile, within the EU/EEA region, Italy, Spain, Germany, and France were the most common implementation settings. The study suggested that a high number of digital innovations implemented in response to the COVID-19 pandemic used technologies within the integrated and ubiquitous fixed and mobile network technology group, including cellular networks, smartphone and tablet computing devices, smartphone apps, and Bluetooth. Smartphone apps have been the specific technology most used by digital innovations in response to the COVID-19 pandemic, with uses ranging from contact-tracing apps [5,6,[14][15][16][17][18][19][20][21][22][23] and self-assessment apps [24] to apps supporting population surveillance and monitoring of regulation compliance [25]. The study also found that data analytics (including big data) and cognitive technologies, the latter including AI and machine learning, have also been incorporated into many COVID-19-related digital innovations. According to the results of this study, communication and collaboration and surveillance and monitoring have been the public health functions most commonly addressed by digital innovations implemented in response to the COVID-19 pandemic. Both these functions have been addressed by a wide range of technologies, covering nearly all high-level technology groups used in this study. Other functions addressed by a large number of innovations were screening and diagnostics and pandemic response.
Comparison With Other Studies
To the best of our knowledge, this study is the first to provide a broad overview of the geographical context in which digital technologies have been implemented in response to the COVID-19 pandemic. Mbunge et al [3] reviewed the evidence regarding leading countries in the application of AI models for COVID-19, finding that China was the country with the highest frequency in this respect, but did not review the geographic distribution of wider forms of digital innovation [3]. Although most digital innovations identified in our study were implemented in non-EU/EEA countries, it is also worth noting that in this study, EU/EEA countries featured more prominently as implementation settings when than an earlier scoping review that examined digital technologies implemented for infectious disease surveillance, prevention, and control more broadly [10].
This study's findings on the types of digital technologies used by digital innovations have commonality with the results of other reviews on digital technologies and public health in response to the COVID-19 pandemic. In their review, for example, Vargo et al [4] found that the types of technological hardware most reported in relation to the health care sector were computerized tomography machines (in most cases discussed in combination with AI-based learning approaches) and mobile devices, with computers or mobile apps being among the most prominent forms of software used. Similarly, a review of digital technologies in health care conducted by Golinelli et al [2] found that many studies reported on the use of AI tools, big data analytics, mobile apps, and mobile tracing. More broadly, the wider array of digital technologies identified in this scoping review aligns with the digital technologies identified in other reviews. For example, in their review of digital technologies for COVID-19 prevention, surveillance, and containment, Mbunge et al [3] identified the following emerging technologies to be relevant in tackling COVID-19: AI, social media platforms, Internet of Medical Things, virtual or augmented reality, blockchain, additive manufacturing, 5G cellular technology and smart applications, geographic information systems, big data, and autonomous robots.
The findings of this study also illustrate certain differences from other reviews. For example, in the study by Vargo et al [4], video-based communication platforms were found to be a commonly reported on technological software. Meanwhile, Golinelli et al [2] reported a high number of articles reporting on telehealth or telemedicine. The difference between the high reportage of such technologies in other reviews and the relatively lower numbers found in this review may perhaps be explained by the fact that this study included only telemedicine-based innovations when the innovations had been implemented specifically to tackle the COVID-19 pandemic.
The study's finding that digital technologies introduced in response to the COVID-19 pandemic are principally oriented toward 4 public health functions-communication and collaboration, surveillance and monitoring, screening and diagnostics, and pandemic response-is also in line with the findings of other reviews [1,2]. For example, Golinelli et al [2] identified the following 4 key patient needs that were addressed by technologies cited within the early scientific literature: diagnosis, surveillance, prevention, and treatment. Meanwhile, the review conducted by Budd et al [1] identified 4 overarching public health functions performed by technologies: digital epidemiological surveillance, rapid case identification, interrupting community transmission, and public communication. Each of these taxonomies broadly mirrors the functions most commonly addressed in this review, with the exception that, unlike the study by Golinelli et al [2], this study did not consider technologies related to COVID-19 treatment or therapeutics. In the sense that they emphasize the communicative or collaborative function of many COVID-19 digital innovations, the findings of this study are more closely aligned with the review conducted by Budd et al [1]. This emphasis on communication provides support for the broader literature on the role of social media in public health communication during the pandemic [26]. The literature has highlighted the role of social media in facilitating forms of communication such as scientific exchange and the transmission of information from formal public health agencies and other bodies, as well as the potential for such platforms to act as vectors for the spread of misinformation [1,26].
In conducting this scoping review, we encountered evidence of a range of barriers to the successful implementation of digital innovation, including potential risks. In addition to the limitations of the technologies themselves, these include investment and financial barriers, infrastructural barriers (including a lack of required physical and network infrastructure), human resource barriers, data availability and quality barriers, social barriers (including low uptake and low access to technologies), ethical barriers (including privacy concerns and risks of increased socioeconomic inequality), security and safety barriers, and legal or regulatory barriers. In their review, Budd et al [1] identified similar legal, ethical, and privacy barriers, as well as organizational and workforce barriers, to the implementation of technologies for the COVID-19 pandemic. The extent to which these factors present an obstacle to the implementation of technologies depends on the specific contexts (eg, geographical, cultural, political, and economic) within which technologies are developed and implemented. Therefore, the literature suggests that an effective rollout of technologies will require interventions tailored to the specific characteristics of target regions, recognizing both barriers and enablers that may exist [27]. For instance, in regions without the necessary infrastructure to support cellular and data coverage, automated applications that do not require continuous network access may be more appropriate than other applications [27].
Although this study presents evidence regarding the technologies used by digital innovations, the public health functions addressed, and barriers to implementation, it has not systematically examined the performance of individual technologies or the extent to which technologies have been evaluated or comparatively assessed (see the Limitations section). The wider literature provides examples of evaluative studies in specific contexts, including statistical evaluations of diagnostic accuracy [28], epidemic modeling [29,30], and qualitative evaluations [31], which demonstrate that the performance of digital technologies implemented in response to the COVID-19 pandemic can vary significantly, depending not only on endogenous technological factors but also on broader exogenous factors, including legal, infrastructural, and social issues [28,29,30,31]. It is also evident that a large number of the technologies introduced in response to the COVID-19 pandemic have not yet been formally evaluated or assessed. Therefore, the rapid proliferation of digital public health technologies in response to the COVID-19 pandemic has underscored the need for further studies to evaluate the performance of emerging digital technologies, as well as rigorous oversight mechanisms [1,3]. Such approaches should help not only to verify the performance of new technologies but also to identify the underpinning barriers that stand in the way of those technologies realizing their potential. At the same time, oversight mechanisms should also help to strike a balance between the opportunities presented by new innovations and potential risks, such as ethical and privacy risks, that they may pose [32].
Limitations
This study has sought to provide a broad characterization of the evidence regarding the implementation of digital innovations to tackle the COVID-19 pandemic. This study followed a systematic approach in line with the PRISMA checklist for scoping reviews. Drawing on 2 scoping reviews-a review of academic literature, supported by a supplementary, experimental review of nonacademic literature-the review considers evidence from a wide range of sources, from peer-reviewed publications to news articles, press releases, and blogs. Although the methodological approach is well suited to the objectives of the study, it is also subject to several limitations.
The first limitation of the study relates to the scope of information sources. For the review of academic literature, we relied on 2 databases, EMBASE and Scopus, supplemented by structured targeted searches using Google Scholar. Similarly, our review of nonacademic literature was also limited in that it only considered articles published by a selected set of information sources. For example, in focusing on information sources available within Feedly, the nonacademic search strategy did not include national public health institute websites (although our search strategy identified several digital innovations developed and implemented by public health institutes). We cannot rule out the possibility that running the searches in additional databases, including those used by other reviews of digital technology use for the COVID-19 pandemic, might have led us to identify further examples of digital innovations implemented in response to the COVID-19 pandemic.
We adopted broad inclusion criteria during study selection to maximize the scope of the included evidence. However, it was also necessary to limit the review's scope to keep it manageable within the resources available for this study. Our decision to focus on implemented digital innovations, thereby excluding innovations still at the conceptual stage, was an example of this. Another was our focus on specific key public health functions, meaning that innovations oriented toward other functions were excluded. In both cases, such decisions led to the inevitable exclusion of digital innovations developed in response to the COVID-19 pandemic.
Another limitation of this scoping review was related to the categories used to code technologies and key public health functions. To extract data, we used drop-down menus to classify digital innovations by technology (specific digital technology and high-level technology groups) and by key public health function. The categories used for the drop-down menus (described earlier as key study variables) were carefully selected after several discussions between team members and drew on earlier studies [10]. Although these categories helped classify and organize the data for the purposes of quantitative analysis, inevitably, there is also an element of subjectivity in the application of these categories to digital innovations. It is also not claimed that these categories are definitive or exhaustive in any way. They represent only one approach to classifying implemented technologies and the role they have performed in supporting public health efforts.
We also faced some technical limitations in analyzing data on the types of nonacademic sources included in the review. Initially, the sources were categorized as news articles, blog posts, or press releases. However, as most sources were news articles, it was decided to merge these 3 categories into one during the analysis stage. The decision also reflected the challenges faced during the export of the included articles into a Microsoft Excel file (a measure taken to mitigate the potential risk of URL changes). With the article content exported to Microsoft Excel, it was not always possible to determine the original format (eg, whether a source was an original news article or an article based on a press release).
Finally, while reviewing evidence on the technologies used by digital innovations, the public health functions addressed, and the key barriers to implementation, a systematic evaluation of the performance of individual technologies and innovations is beyond the scope of this study. The incorporation of an evaluative aspect into the study was not feasible because of the limited amount of information on the performance of digital technologies within the reviewed sources, including the lack of evidence of formal evaluation or assessments undertaken. This study highlights the need for further evaluative studies and oversight mechanisms moving forward.
Conclusions
In this study, scoping reviews of academic and nonacademic sources were used to obtain an overview of the evidence regarding implemented digital innovations to tackle the COVID-19 pandemic. This scoping review sought to gain an understanding of the characteristics of the literature reporting on digital technology use for COVID-19 and an understanding of the number, nature, and geographical distribution of digital innovations implemented during the first 10 months of the COVID-19 pandemic. This study built on the evidence base established by existing reviews by incorporating new sources and approaches to analysis. This study highlighted key trends related to the implementation settings, technologies used, and public health functions addressed by COVID-19-related digital innovations. This study also identified a wide-ranging set of barriers and risks that may affect the effective implementation of digital technologies for the COVID-19 pandemic. The existence of such barriers highlights the need for contextually appropriate technological interventions. Although this study did not critically evaluate the effectiveness of digital innovations, the findings from the broader literature point to the fact that technologies introduced as part of the COVID-19 pandemic response demonstrate varying levels of performance and that, in many cases, technologies have yet to be evaluated or comparatively assessed. These findings highlight the need for further evaluation and oversight mechanisms to balance opportunities and risks. | 7,833 | 2021-11-01T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
The Use of Hirshfeld Surface Analysis Tools to Study the Intermolecular Interactions in Single Molecule Magnets
: Intermolecular interactions have proved to play an important role in properties of SMMs such as quantum tunneling of magnetization (QTM), and they also reduce the rate of magnetic relaxation, as through the influence they have on QTM, they quicken the reverse of magnetization. In addition, they are considered as the generative cause of the exchange-biased phenomenon. Using the Hirshfeld analysis tools, all the intermolecular interactions of a molecule and its neighbors are revealed, and this leads to a systematic study of the observed interactions, which could probably be helpful in other studies, such as theoretical calculations. In addition, they could be helpful to design new systems because intermolecular interactions in SMMs have been proposed as a probable tool to monitor their properties. The observation of characteristic patterns on the Hirshfeld Surfaces (HS) decorated with different properties makes easier the recognition of possible structural pathways for the different types of interactions of a molecule with its surrounding.
Introduction
The advent of single molecule magnets (SMMs) opened a new field in the synthesis of molecular materials, because the bottom-up approach used in their synthesis gives great chemical flexibility with the use of different ligands for tailoring their magnetic properties.The magnetic properties of SMMs have been studied extensively and a large number of complexes have been produced, presenting the characteristic properties of these compounds, i.e., slow magnetic relaxation of magnetization below a blocking temperature, quantum tunneling of magnetization (QTM), etc. [1,2].The characteristic SMMs properties are revealed by studying them on single crystals or on oriented powders, because in the solid state they are a collection of identical oriented nanomagnets and their large number, i.e., with the order of magnitude of Avogadro's number, makes possible the observation of quantum properties of a single molecule, on a macroscopic scale [3,4].The magnetic properties of SMMs originate from the properties of each molecule and not from long-range ordering phenomena within the crystal lattice [5].The intermolecular interactions between neighboring SMMs in the crystal lattice influence the magnetic and quantum properties of each molecule, and their presence has characterized a new family of SMMs, the so-called exchange-biased one.Therefore, their presence is not considered as a drawback, but it has been considered as a probe to monitor QTM properties [6].In a recent publication, we discussed the role played by the intermolecular interactions on the behavior of SMMs, mainly through a bibliographic point of view [7].In this paper, a systematic presentation of the study of representative examples of SMMs is attempted in the light of the Hirshfeld surface (HS) analysis tools and what can be gained by using this valuable tool in studying the intermolecular interactions in crystals.The HS is calculated by considering the electron density and position of neighboring atoms inside and outside the surface and partitions the space into nonoverlapping volumes defined by a surface which surrounds each molecule [8].Thus, the HS reflects, in considerable detail, the immediate environment of a molecule in a crystal.HSs decorated with various functions/properties present the sites where the intermolecular interactions are developed [9].HS analysis tools meet the basic criterion [8], which a method must satisfy for studying intermolecular interactions in a crystal structure and to be the least biased possible, which was set by Desiraju in 1997 [10]: "To visualize a crystal structure in its entirety, not just look at selected intermolecular interactions that have been deemed to be important".HSs have been used extensively in the study of polymorphic organic materials [11], but recently they have also used in the study of transition metal complexes [12][13][14].
Materials and Methods
The HS studies of all complexes were performed by using the CrystalExplorer package V.17.5 [15].In this study, the d norm decorated HSs are mainly discussed in relation to corresponding fingerprint plots.d norm is a normalized contact distance, defined in terms of d e , d i and the Van der Waals (VdW) radii of two atoms at a distance d e outside from a point on the surface and at a distance d i inside the surface correspondingly (Spackman, 2009).It should be mentioned that the color scheme of a d norm decorated HS corresponds to the magnitude of intermolecular interactions ranging from strong (red color) to moderate (white) to weak (blue).The fingerprint plot is a 2D diagram derived from the HS and gives the frequency of occurrence of each combination of pairs d e , d i on the surface, and each such combination is interpreted as some type of interaction.The points d e > d i lie above the main diagonal of the plot and correspond to the donor atoms of the molecule, and points d e < d i , which lie below the main diagonal, correspond to the acceptor ones.π•••π and C-H•••π interactions result in easily recognizable patterns on the HSs decorated with the shape property [9].Differences in the fingerprint plots among similar compounds are attributed to differences in the packing of molecules and give valuable information for their structures [16].For the HS analysis studies, the CIF files of the corresponding compounds were used, which have been retrieved from Cambridge Structural Database (CSD) [17].All the studied compounds are listed in Table 1 with the corresponding code name as it is stored in CSD [17].
Results
The importance of intermolecular interactions on the SMM properties were studied for the first time for the compounds (Mn 4 O 3 Cl 4 (O 2 R) 3 (py) 3 ) where R = CH 2 CH 3 , and CH 3 corresponds to compounds 1 and 2, respectively [22].For these systems, step-like features are observed in the hysteresis loops of magnetization versus applied field measurements, at 40 mK.The observed minor differences could be explained with an intradimer superexchange interaction J and an interdimer one J', which is negligible in the case of compound 1 and is present in the case of 2. This model could also explain the absence of the quantum tunneling at zero field in both cases and the observation of fine structure features in the hysteresis loop of compound 2. Both interactions are antiferromagnetic and the intradimer one is stronger than the interdimer one.This phenomenon has been characterized as exchange-biased quantum tunneling [3] and has been considered to open new perspectives in the use of supramolecular chemistry to modulate the quantum physics of these molecular nanomagnets [3,22].In Figures 1a and 2a, a dimer is presented for compounds 1 and 2, respectively, and the intradimer and the interdimer intermolecular interactions are clearly seen on the d norm decorated HS.The percentage contribution of the different type of interactions is as follows for the most important ones (compound 1, compound 2): are about 2.75 Å in the case of 1 and 2.9 Å in the case of 2, which is less than the sum of VdW radius 2.95Å (1.2Å for H and 1.75Å for Cl) in the first case, and close to this value in the second case; this is the reason that these contact points are clearly seen in the case of 1 (Figure 1a).The Cl•••Cl contact points are longer in the case of 1, 3.9 Å, and shorted in the case of 2, 3.5 Å.Thus, in the latter case, these contact points, although faded, as they are close to the sum of VdW radius (3.5 Å), are clearly seen in the latter case.
Crystals 2021, 11, x FOR PEER REVIEW 3 of 9 these molecular nanomagnets [22,3].In Figure 1a and Figure 2a, a dimer is presented for compounds 1 and 2, respectively, and the intradimer and the interdimer intermolecular interactions are clearly seen on the dnorm decorated HS.The percentage contribution of the different type of interactions is as follows for the most important ones (compound 1, compound 2): 75 Å in the case of 1 and 2.9 Å in the case of 2, which is less than the sum of VdW radius 2.95Å (1.2Å for H and 1.75Å for Cl) in the first case, and close to this value in the second case; this is the reason that these contact points are clearly seen in the case of 1 (Figure 1a).The Cl•••Cl contact points are longer in the case of 1, 3.9 Å, and shorted in the case of 2, 3.5 Å.Thus, in the latter case, these contact points, although faded, as they are close to the sum of VdW radius (3.5 Å), are clearly seen in the latter case.There are lattice solvents in the structure of both compounds, but this characteristic is more clearly seen only in the fingerprint plot (Figure 2b) of compound 2, where the distribution of contact points is asymmetric with respect to the main diagonal in the fingerprint plot, and the contact points H•••N are among the clusters and the acetonitrile solvents (Figure 2e).The Cl•••Cl (with contribution of 0.3% for compound 1 and 0.5% for compound 2) and Cl•••H/ H•••Cl contact points have been considered as the paths of interactions that alter the characteristics of the hysteresis loops for both compounds [3,22].Although both compounds crystallize in the same space group (R- Another characteristic example is the dimers of clusters which are observed in the structure of (Fe9O4(OH)4(O2CPh)13(heenH)2) [19], where, in the hysteresis loop measurements below 1K, QTM steps are observed which are shifted relative to zero shift, but also a QTM step at zero field is observed, indicating a mixed state.This behavior has been interpreted as due to disorder of an oxygen atom on the heenH -ligand that occupies two sites with 36/64% occupancies, with 2/3 (~64%) contributing to the formation of dimers and the remaining 1/3 does not, because they form an intramolecular hydrogen bond.Thus, the 2/3 of them switches on the exchange-biased field and the rest switches it off.Based on Hirshfeld analysis (Figure 3a), only 2.4% of the interactions contribute to the hydrogen bond formation, which results in the formation of dimers, and the rest, according to the fingerprint plot (Figure 3b There are lattice solvents in the structure of both compounds, but this characteristic is more clearly seen only in the fingerprint plot (Figure 2b) of compound 2, where the distribution of contact points is asymmetric with respect to the main diagonal in the fingerprint plot, and the contact points H•••N are among the clusters and the acetonitrile solvents (Figure 2e).The Cl•••Cl (with contribution of 0.3% for compound 1 and 0.5% for compound 2) and Cl•••H/H•••Cl contact points have been considered as the paths of interactions that alter the characteristics of the hysteresis loops for both compounds [3,22].Although both compounds crystallize in the same space group (R-3) and they have almost the same unit cell dimensions (a = b = c = 13.156Å and 13.031 Å, α = β = γ = 74.56(3)• and 74.81(2) • , V = 2068.64Å 3 and 2015.93Å 3 for compound 1 and 2, respectively), the packing of dimers are quite different as it is concluded for the differences observed in fingerprint plots of these compounds (Figures 1b-d and 2b-e).
Another characteristic example is the dimers of clusters which are observed in the structure of (Fe 9 O 4 (OH) 4 (O 2 CPh) 13 (heenH) 2 ) [19], where, in the hysteresis loop measurements below 1K, QTM steps are observed which are shifted relative to zero shift, but also a QTM step at zero field is observed, indicating a mixed state.This behavior has been interpreted as due to disorder of an oxygen atom on the heenH − ligand that occupies two sites with 36/64% occupancies, with 2/3 (~64%) contributing to the formation of dimers and the remaining 1/3 does not, because they form an intramolecular hydrogen bond.Thus, the 2/3 of them switches on the exchange-biased field and the rest switches it off.Based on Hirshfeld analysis (Figure 3a), only 2.4% of the interactions contribute to the hydrogen bond formation, which results in the formation of dimers, and the rest, according to the fingerprint plot (Figure 3b In the light of these studies, compound 4 was synthesized, where the presence of BPh4 -anions around the (Mn4(Bet)4(mdea)2(mdeaH)2) + SMM cations and, in addition, the absence of solvents, make this system ideal for hysteresis loop measurements and EPR studies [20].For the calculation of the dnorm decorated HS (Figure 4a), only the SMM cation was used, and this is reflected in the distribution of contact points in the fingerprint plot (Figure 4b), where almost all the points are above the main diagonal of the plot, which means that the cation serves mostly as a donor.Both cation and anions contribute to H•••H (73.6%) type interactions, the cation serves as a donor for the C•••H (25.3%) type of contacts, as all the points for this type of interactions lie above the main diagonal, and as an acceptor for the O•••H one (1.1%), as all these contact points lie below the main diagonal.In the light of these studies, compound 4 was synthesized, where the presence of BPh 4 − anions around the (Mn 4 (Bet) 4 (mdea) 2 (mdeaH) 2 ) + SMM cations and, in addition, the absence of solvents, make this system ideal for hysteresis loop measurements and EPR studies [20].For the calculation of the d norm decorated HS (Figure 4a), only the SMM cation was used, and this is reflected in the distribution of contact points in the fingerprint plot (Figure 4b), where almost all the points are above the main diagonal of the plot, which means that the cation serves mostly as a donor.Both cation and anions contribute to H•••H (73.6%) type interactions, the cation serves as a donor for the C•••H (25.3%) type of contacts, as all the points for this type of interactions lie above the main diagonal, and as an acceptor for the O•••H one (1.1%), as all these contact points lie below the main diagonal.Table 2 lists parameters for the most important magnetic properties of all the compounds studied in this work, together with the characteristic patterns observed on the HSs.Table 2 lists parameters for the most important magnetic properties of all the compounds studied in this work, together with the characteristic patterns observed on the HSs.
Conclusions
The study of intermolecular interactions of SMM compounds with the HS tools could help to identify special characteristics in their structures which, in turn, could help to easier interpret and understand their physical properties, which are related to their structures.Special patterns on the decorated HSs or in the fingerprint plots are related to packing characteristics, which are indicative of the presence or absence of other molecules in the structure, as their presence is reflected on the surface or on the fingerprint plots.It proves to be helpful in the comparison of structures with common characteristics.Finally, as all the interactions are identified, it gives a complete overview of all types of interactions and helps to estimate the role played by each one of them in structure formation, and the relation of specific characteristics of the structure with the properties of the studied compound.
Figure 1 .
Figure 1.(a) A dimer of complexes with one molecule to be surrounded with a dnorm decorated HS and the other in a ball and stick presentation for compound 1.The six orange vectors indicate the six contact points of Cl•••H type of interactions.The red areas in the orange and cyan ellipses indicate O•••H and C•••H contact points.Contribution of each type of interactions (b) H•••H, (c) O•••H/H•••O, and (d) Cl•••H/H•••Cl derived from the fingerprint plot of compound 1.The outline of the full fingerprint contribution is shown in gray.
Figure 1 .
Figure 1.(a) A dimer of complexes with one molecule to be surrounded with a d norm decorated HS and the other in a ball and stick presentation for compound 1.The six orange vectors indicate the six contact points of Cl•••H type of interactions.The red areas in the orange and cyan ellipses indicate O•••H and C•••H contact points.Contribution of each type of interactions (b) H•••H, (c) O•••H/H•••O, and (d) Cl•••H/H•••Cl derived from the fingerprint plot of compound 1.The outline of the full fingerprint contribution is shown in gray.
Figure 2 .
Figure 2. (a) A dimer of complexes with one molecule to be surrounded with a dnorm decorated HS and the other in a ball and stick presentation for compound 2. The orange vector in the middle indicates the Cl•••Cl contact points.The red areas in the orange and cyan ellipses indicate H•••N (only donor points are seen for this view) and O•••H/H•••O contact points.Contribution of (b) H•••H, (c) O•••H/H•••O, (d) Cl•••H/H•••Cl, and (e) N•••H/H•••N interactions in the fingerprint plot diagram.The outline of the full fingerprint contribution is shown in gray.
3) and they have almost the same unit cell dimensions (a = b = c = 13.156Å and 13.031 Å, α = β = γ = 74.56(3)°and 74.81(2)°, V = 2068.64Å 3 and 2015.93Å 3 for compound 1 and 2, respectively), the packing of dimers are quite different as it is concluded for the differences observed in fingerprint plots of these compounds (Figure 1b-d and Figure 2b-e).
), are of H•••H type (78.6%) (a value which indicates that the dimers are almost isolated) and of C-H•••π type (16.8%), which contribute to the
Figure 2 .
Figure 2. (a) A dimer of complexes with one molecule to be surrounded with a d norm decorated HS and the other in a ball and stick presentation for compound 2. The orange vector in the middle indicates the Cl•••Cl contact points.The red areas in the orange and cyan ellipses indicate H•••N (only donor points are seen for this view) and O•••H/H•••O contact points.Contribution of (b) H•••H, (c) O•••H/H•••O, (d) Cl•••H/H•••Cl, and (e) N•••H/H•••N interactions in the fingerprint plot diagram.The outline of the full fingerprint contribution is shown in gray.
), are of H•••H type (78.6%) (a value which indicates that the dimers are almost isolated) and of C-H•••π type (16.8%), which contribute to the interdimer interactions, thus resulting in formation of the 3D architecture of the compound.The symmetric distribution of contact points in fingerprint plots indicates that one type of molecule exists in the unit cell (Figure 3b-d).Crystals 2021, 11, x FOR PEER REVIEW 5 of 9 interdimer interactions, thus resulting in formation of the 3D architecture of the compound.The symmetric distribution of contact points in fingerprint plots indicates that one type of molecule exists in the unit cell (Figure 3b-d).
Figure 3 .
Figure 3. (a) A dimer of complexes with one molecule to be surrounded with a dnorm decorated HS and the other in a ball and stick presentation for compound 3. Contribution of each type of interactions (b) H•••H, (c) C•••H/H•••C, and (d) O•••H/H•••O derived from the fingerprint plot of compound 3.The outline of the full fingerprint contribution is shown in gray.
Figure 3 .
Figure 3. (a) A dimer of complexes with one molecule to be surrounded with a d norm decorated HS and the other in a ball and stick presentation for compound 3. Contribution of each type of interactions (b) H•••H, (c) C•••H/H•••C, and (d) O•••H/H•••O derived from the fingerprint plot of compound 3.The outline of the full fingerprint contribution is shown in gray.
Figure 4 .
Figure 4. (a) The cation (Mn4(Bet)4(mdea)2(mdeaH)2) + is presented with a dnorm decorated HS and the anions surrounding it in a ball and stick presentation for compound 4 (left).The cation is presented in ball and stick method to the right of the figure at the same orientation.Contribution of the most important type of interactions (b) H•••H, (c) C•••H, and (d) H•••O derived from the fingerprint plot of compound 4. The outline of the full fingerprint contribution is shown in gray.Structural intermolecular interactions, such as π−π stacking, C−H•••O, and O−H•••O hydrogen bonds, and diamagnetic metal cations have been considered as pathways for magnetic superexchange noncovalent interactions [23].Special attention has been given to the π•••π one [21,24].For compound 5 [21], the contribution of H•••H, O•••H/H•••O, C•••C, and C•••H/ H•••C is 39.8, 38.1, 8.3, and 7.3%, respectively (Figure 5b-d for the first three type of intermolecular interactions).Theoretical calculations show that in both superexchange interactions through O•••H/H•••O (hydrogen bonds), C•••C (π•••π type) are antiferromagnetic, with the second to be stronger.The π•••π type of interaction are clearly seen on the shape decorated HSs where the characteristic blue and red triangles are present (Figure 5a).Table2lists parameters for the most important magnetic properties of all the compounds studied in this work, together with the characteristic patterns observed on the HSs.
Figure 4 .
Figure 4. (a) The cation (Mn4(Bet)4(mdea)2(mdeaH)2) + is presented with a d norm decorated HS and the anions surrounding it in a ball and stick presentation for compound 4 (left).The cation is presented in ball and stick method to the right of the figure at the same orientation.Contribution of the most important type of interactions (b) H•••H, (c) C•••H, and (d) H•••O derived from the fingerprint plot of compound 4. The outline of the full fingerprint contribution is shown in gray.Structural intermolecular interactions, such as π−π stacking, C−H•••O, and O−H•••O hydrogen bonds, and diamagnetic metal cations have been considered as pathways for magnetic superexchange noncovalent interactions [23].Special attention has been given to the π•••π one [21,24].For compound 5 [21], the contribution of H•••H, O•••H/H•••O, C•••C, and C•••H/H•••C is 39.8, 38.1, 8.3, and 7.3%, respectively (Figure 5b-d for the first three type of intermolecular interactions).Theoretical calculations show that in both superexchange interactions through O•••H/H•••O (hydrogen bonds), C•••C (π•••π type) are antiferromagnetic, with the second to be stronger.The π•••π type of interaction are clearly seen on the shape decorated HSs where the characteristic blue and red triangles are present (Figure 5a).Table2lists parameters for the most important magnetic properties of all the compounds studied in this work, together with the characteristic patterns observed on the HSs.
7 of 9 9 Figure 5 .
Figure 5. (a) A dimer of complexes presenting π•••π type of intermolecular interactions with one molecule to be surrounded with a shape decorated HS and the other in a ball and stick presentation for compound 5. Contribution of the most interesting types of interactions, (b) H•••H, (c) C•••H/H•••C, and (d) O•••H/H•••O derived from the fingerprint plot of compound 5.The outline of the full fingerprint contribution is shown in gray.
Figure 5 .
Figure 5. (a) A dimer of complexes presenting π•••π type of intermolecular interactions with one molecule to be surrounded with a shape decorated HS and the other in a ball and stick presentation for compound 5. Contribution of the most interesting types of interactions, (b) H•••H, (c) C•••H/H•••C, and (d) O•••H/H•••O derived from the fingerprint plot of compound 5.The outline of the full fingerprint contribution is shown in gray.
Table 2 .
Physical parameters and patterns on HSs of studied compound
Table 2 .
Physical parameters and patterns on HSs of the studied compound. | 5,290.6 | 2021-10-14T00:00:00.000 | [
"Physics"
] |
Phytotherapeutic Potentials of Synedvilla Nodiflora: In-Vitro Quantification of Phytochemical Constituents, Antioxidant Capacities and Skin Enzymes Inhibiting Activities
Quantification of Constituents, Capacities and Skin Enzymes Inhibiting Abstract Synedrella nodiflora is a useful medicinal plant that has been evaluated for its application in treatment of several ailments in sub-Saharan Africa and Bangladesh since time immemorial. In the present study, the methanol extract of the plant was investigated in-vitro for the presence of some essential phytochemicals, its antioxidant capacities, activity inhibition of skin degenerating enzymes of elastase and tyrosinase as well as its anti-lipid peroxidation activity. Determination of the total antioxidant capacities of the extract was achieved through evaluation of oxygen radical absorbance capacity (ORAC), ferric ion reducing antioxidant potential (FRAP) and iron-II induced inhibition of lipid peroxidation (LPO) assays. The quantitative analysis outcome of this study showed the relative abundance and concentration of flavanol, alkaloid, flavonol, phenolics, tannin, proanthocyanidins and saponin in Synedrella nodiflora methanol extract with the relative abundance (%) of the phytochemicals is in the order of polyphenols (19.09)>Saponin (18.61)>flavonol (18.00)> proanthocyanidins (14.52) > flavanol (10.35)>Alkaloid (10.07)>tannins (9.36). The outcome of this study showed the ability of the plant extracts to scavenge free radicals and to inhibit degradation of lipids due to oxidative damage. Thus, S. nodiflora may be used in the treatment and management of oxidative stress and its related diseases. It was also observed that the plant extract possess a mild in-vitro tyrosinase and elastase inhibiting activities which implies that the plant may find applications in cosmetic preparations for skin depigmentation and anti-wrinkle agents in
cancer etc. Such plants include Citrulus colocynthis, Acacia arabica,
Ocimum gratissimum, Azadirachta indica, Phyllanthus amarus, Eclipta alba and Vernonia amygdalina [8][9][10][11][12][13]. The ethnopharmaceutical and ethnomedicinal applications of these plants are due to their varying degree of bioactive phytochemical constituents as well as their characteristic antioxidant properties [14][15][16][17]. The enzymes inhibiting properties of medicinal plants have been found exciting, adding more value to phytotherapy and making plants an excellent alternative to conventional synthetic chemicals in biomedicine. The alpha amylase and alpha glucosidase inhibition activities of some plants have been reported [18,19]. Recent independent researches by [20,21,22] have shown the inhibitory effects of some plant materials on the enzyme tyrosinase present in the skin and they Synedrella nodiflora [23]. Synedrella nodiflora is a branched ephemeral herb that can grow up to 80 cm tall. It is characterized with a shallow root system which is usually strongly branched. The lower part of the stem may root at the nodes, especially in moist conditions while the leaves exist in opposite pairs and are usually 4-9 cm long. The presence of various bioactive constituents in the leaf extract of Synedrella nodiflora was reported after screening studies by [24,25]. Solvent extracts of Synedrella nodiflora have been shown to possess flavonoids, alkaloids, glycosides, steroids, tannins, saponins and phytosterols, triterpenoids gums and reducing sugars [26][27][28][29]. The plant is essential for the treatment of various diseases and its leaves are eaten as a vegetable by some livestock and human. It also possesses sex hormone activity [30]. The leaves can be used as Pregnant Mare Serum Gonadotrophin supplier in animal husbandry and to improve reproductive parameters in female animals [31].
Literatures on the skin enzymes inhibitory activities of the plant are still limited according to SciFinder and Dictionary of natural products. This research is therefore directed at investigating the inhibitory actions of Synedrella nodiflora on the degenerating actions of enzymes present in the surface of the skin (tyrosinase and elastase) as well as its phytochemical constituents and antioxidant properties to support and validate its acclaimed ethnopharma-cological application.
Chemicals and Reagent
The reagents and standards used for this work were all of analytical grades with high percentage purity, secured from Sigma-Al-
Samples Collection and Preparation
Aerial part of S. nodiflora was sourced from a local farm in Ikere-Ekiti (7.4991° N, 5.2319 ° E), Ekiti State Nigeria. The plant was identified by the Herbarium curator at the Department of Biological Science Koladaisi University, Ibadan, Nigeria, where a voucher specimen number Kdu/IB/034 was assigned to the S. nodiflora.
The obtained plant materials were prepared by washing them with distilled water and drying at room temperature for two weeks.
The plant parts were crushed separately using pestle and mortar.
Thereafter, the crushed parts were mixed together, pulverized by an electric blender into a homogenized powder, weighed and stored in different airtight sterile sample bottles pending analysis.
Extraction
The powdered plant material was extracted with methanol. Approximately 50g of the powdered material was soaked in 1000 mL of the solvent in a vial for 72 hours for cold extraction. The extract was filtered and concentrated at 50 0 C using rotary evaporator. The concentrate was stored in an air tight sample vial pending analysis.
Determination of Total Phenol
The total phenolic content of the aerial part of the plant was determined using the Folin-Ciocalteu method [32]. The procedure involved adding both distilled water and Folin-Ciocalteu reagent to a 125 μL of the solvent extract. The mixture was allowed to stand for 6 min before the adding sodium carbonate solution (7.0% w/v).
Thereafter, the mixture was allowed to stand for 90 min after which the absorbance was read at 760 nm on a SpectrumLab70 spectrophotometer and the result were expressed in terms of Gallic acid in mg/mL of extract.
Determination of Saponin
The saponin concentration of the plant was determined using spectrophotometric method as described by [33]. In this method, 2.0 g of the extract was weighed into a beaker containing isobutyl alcohol (butan-2-ol) was added. The mixture was stirred and filtered through No 1 Whatman filter paper into a beaker containing
Determination of Tannin
The total tannin content was assessed by the experimental protocol of [18] with slight modifications. The tannin determination was carried out as follows. Approximately 0.5 g of the extract was diluted with 90% ethanol. 0.1 mL of the diluted sample was measured and added to 2 mL of Folin Ciocalteu reagent. After 10 min, 7.5 mL of sodium carbonate (7%) solution was added and the mixture incubated for 2 hours. The absorbance of this mixture was measured at 760 nm and the tannin content was estimated using tannic acid (TA) curve as the standard.
Determination of Alkaloid
Alkaloid content of the plant's solvent extract was determined by weighing 5.0 g portion of the solvent extract into a beaker and 250 mL of 10 % acetic acid in ethanol was added and allowed to stand for 5 min. The mixture was filtered and the extract was concentrated on a water bath to one fifth of the original volume. Ammonium hydroxide solution was then added in drops to the concentrated extract until the precipitation was completed [19]. The precipitate was collected, washed severally with dilute ammonium hydroxide and filtered. The residue was dried in a desiccator and weighed.
Flavanol Content
The flavanol content of the S. nodiflora was determined spectrophotometrically in accordance to the modified method of Popoola et al. [7]. A 25 mL 0.05% 4 dimethylaminocinnam-aldehyde (DMA-CA) solution prepared by dissolving DMACA in 8% HCl prepared in methanol was added to 50 mg of the solvent extract. The mixture was allowed to stand for 30 min and the absorbance was read on a SpectrumLab70 spectrophotometer at 640 nm. The result was expressed in terms of catechin equivalents in mg CE/mg of extract.
The determination was carried out in triplicate.
Determination of Flavonol
The flavonol content of the plant extract was determined by the method described (2008) with slight modifications. 2 mL of AlCl3 prepared in ethanol and 3 mL of 50 g/L sodium acetate solution were added to 2 mL of the extracted sample in a test tube and mixed.
The mixture was incubated for three hours (3 hrs) at 20 oC. Series of stock solution of 20, 40, 60, 80, and 100 μg/mL were thereafter prepared. The absorbance of the solutions was measured at 440 nm against a blank at 593 nm using a SpectrumLab70 spectrophotometer. The total flavonol content was calculated in terms of quercetin equivalent in mg QE/mL of sample from the calibration curve.
Determination of Proanthocyanidins
The total proanthocyanidins content of the plant material was determined by the procedure of [45]. An amount 0.5 g of the meth-anol extract of the plant material was vortex mixed with 3 mL of vanillin (4 %) prepared in methanol and 1.5 mL of hydrochloric acid. The mixture was allowed to stand for 15 min at ambient temperature. Thereafter, the absorbance was read at 500 nm. The total proanthocyanidin content was expressed in terms of catechin (CE mg/g).
Ferric Reducing Antioxidants Potential (FRAP) Assay
The FRAP assay was performed according to a modified method of [19]
Inhibition Lipid Peroxidation Assay (LPO)
For the determination of the lipid peroxidation inhibition assay, a little modification was made on the adopted method of Snijman
Oxygen radical absorbance capacity (ORAC) assay
The methods of [22,35] were adopted for the ORAC assay: The 538 nm emission wavelengths. The ORAC value was calculated by dividing the sample curve-area by the trolox curve area [22]. The ORAC value of the extract was expressed as mg/mL Trolox equivalent (TE) of dry extract.
Enzymes Inhibition Activities Tyrosinase Inhibition Assay
The aerial part of the S. nodiflora was also assayed for its tyrosinase inhibition activity in a 96-well reader using a modified procedure of [36]. The anti tyrosinase activity of the plant was carried
Elastase Inhibition Assay
The Elastase inhibition activity of the plant material was assayed by modifying the method of [37]. In this method, N-succ-(Ala)3-nitroanilide (SANA) was used as the substrate the release of p-nitron-
Results and Discussion
The phytochemical analyses of the S. nodiflora extract revealed the relative abundance ( Figure 1) and concentration (Figure 2 These compounds are often referred to as powerful chain-breaking antioxidants [39][40][41]. The presence of these bioactive phytochemicals in the plant showed that the plant is a useful antioxidant material that may exert protective effects against inflammation, cancer diabetes and cardiovascular disease; they may also play a vital role in microbial inhibition. Further, the results of the present study demonstrate that the plant exhibit variations in its phytochemical constituents (Table 1). Polyhenols (44.97 ± 1.33 mg of GAE/mL) was the most abundant Table 1). The mechanism of lipid peroxidation has been suggested to proceed via a free radical chain reaction Zhiyong and Yuanzong [42] which has been associated with cell damage in biomembranes [22,43]. The damage has been shown to initiate the development cancerous cells, cardiovascular diseases and diabetes according to [44,45]. The potency of the S. nodiflora extract to inhibit the process of lipid peroxidation was investigated in this study. The methanol extract of the plant was able inhibit lipid peroxidation by 55.60% well below that of the standard EGCG (90.95%). However, the value (Table 2) recorded for this plant showed its mild ability to inhibit lipid peroxidation processes ( Table 2). The reducing power of a compound is related to its electron transfer ability and may therefore serve as a significant indicator of its potential antioxidant activity [7]. To estimate the reductive ability of the plant's extract, the ferric ion reducing antioxidant potential of the extract was assayed. The result of this study showed Fe 3+ to Fe 2+ transformation potential of the plant's extract with FRAP value of 681.10±0.13 mg/ mL which is higher than that of the standard 3.20±0.01 mg/mL as shown in Table 2. The higher reducing ability of the plant's extract implies a higher electron donating ability of the extract. Table 3).
The inhibitory effect of S. nodiflora on tyrosinase activity was investigated in-vitro and the result is presented in Table 3. The assay was carried out to show the possibility of using the plant to solve problems relating skin pigmentation. The potential of the plant's extract and kojic acid (standard) to inhibit this enzyme increased from 20 to 100μg/mL with respect to the results obtained from this study. This shows that the inhibition activity of the materials increased with increasing concentration of the extract (Figure 3,4).
The S. nodifora extract and kojic acid showed optimum inhibition (39.75 and 95.9 % respectively) at the concentration of 100 μg/ mL ( The tyrosinase inhibition activity of the plant showed its value as a promising material for solving the problems of hyperpigmentation or may be used for depigmentation purpose. Elastase inhibition activity of the S. nodiflora was also investigated to evaluate its applicability in skin treatment and problems solving involving skin wrinkling and dry skin. The performance efficiency of the plant's extract to inhibit Elastase at optimum concentration (100μg/mL) was 44.4%. The percentage by which both the extract and the standard (xanthone) inhibited the enzyme increase with their increasing concentrations respectively. It is possible that increasing the concentration of the extract beyond 100μg/mL could increase its inhibition activities. However, the plant material was able to demonstrate ability to maintain healthy skin and anti-wrinkle activities even at lower concentrations.
Conclusion
The presence of free radicals and reactive oxygen species is a major cause of skin aging and cellular oxidative stress related prob- | 3,035.2 | 2020-11-10T00:00:00.000 | [
"Biology"
] |
Super low threshold plasmonic WGM lasing from an individual ZnO hexagonal microrod on an Au substrate for plasmon lasers
We demonstrate an individual ZnO hexagonal microrod on the surface of an Au substrate which can become new sources for manufacturing miniature ZnO plasmon lasers by surface plasmon polariton coupling to whispering-gallery modes (WGMs). We also demonstrate that the rough surface of Au substrates can acquire a more satisfied enhancement of ZnO emission if the surface geometry of Au substrates is appropriate. Furthermore, we achieve high Q factor and super low threshold plasmonic WGM lasing from an individual ZnO hexagonal microrod on the surface of the Au substrate, in which Q factor can reach 5790 and threshold is 0.45 KW/cm2 which is the lowest value reported to date for ZnO nanostructures lasing, at least 10 times smaller than that of ZnO at the nanometer. Electron transfer mechanisms are proposed to understand the physical origin of quenching and enhancement of ZnO emission on the surface of Au substrates. These investigations show that this novel coupling mode holds a great potential of ZnO hexagonal micro- and nanorods for data storage, bio-sensing, optical communications as well as all-optic integrated circuits.
We demonstrate an individual ZnO hexagonal microrod on the surface of an Au substrate which can become new sources for manufacturing miniature ZnO plasmon lasers by surface plasmon polariton coupling to whispering-gallery modes (WGMs). We also demonstrate that the rough surface of Au substrates can acquire a more satisfied enhancement of ZnO emission if the surface geometry of Au substrates is appropriate. Furthermore, we achieve high Q factor and super low threshold plasmonic WGM lasing from an individual ZnO hexagonal microrod on the surface of the Au substrate, in which Q factor can reach 5790 and threshold is 0.45 KW/cm 2 which is the lowest value reported to date for ZnO nanostructures lasing, at least 10 times smaller than that of ZnO at the nanometer. Electron transfer mechanisms are proposed to understand the physical origin of quenching and enhancement of ZnO emission on the surface of Au substrates. These investigations show that this novel coupling mode holds a great potential of ZnO hexagonal micro-and nanorods for data storage, bio-sensing, optical communications as well as all-optic integrated circuits.
A lthough surface plasmon-enhanced ultraviolet random lasing of ZnO films 1 and nanowires 2,3 have been achieved, they are still traditional photonic lasers. Optical confinement in traditional photonic lasers is restricted by the diffraction limit of the light which often demands large device sizes 4,5 . Very recently, many attentions have been paid to plasmon lasers [6][7][8][9][10] , which have been fabricated by CdS nanowires 11 and nanosquares 12 on the Ag substrate. Research showed that the geometry of metal substrates and dielectric layers has a crucial influence on the performance of plasmon lasers [13][14][15] . Additionally, surface plasmon is proposed to scale down their wavelength for nanoscale photonics, but the optical modes propagation loss increase rapidly on scaling the optical mode down [16][17][18] . Well known, ZnO nanostructures have attracted considerable interest due to great potential applications in micro-and nanoscaled optoelectronic devices, thus, understanding of a ZnO nanostructure such as a nanorod on the surface of a metal substrate has been becoming interesting for developing ZnO-based plasmon lasers. However, there have not been any reports involved in the system of a ZnO nanostructure on the surface of metal substrates for plasmon lasers, so far. On the other hand, micro-and nanoscaled whispering-gallery resonators with total internal reflection have the great potential for the applications of nanolasers and other nanophotonic devices 19 . ZnO hexagonal micro-and nanorods are regarded as important building blocks for nanoscaled optoelectronic devices, because they provide high quality (Q) factors leading to strong optical feedback within the micro-and nanocavity [20][21][22][23][24][25] . Therefore, both plasmon laser 12 and whisperinggallery microcavity 26 perform potential applications in ultrasmall-mode-volume devices.
In this contribution, we report a systemic study of an individual ZnO hexagonal microrod on the surface of Au substrates and we demonstrate that the surface plasmonic modes can strongly couple with the WGMs of ZnO hexagonal microrods to enhance the photoluminescence and split the WGMs on the rough surface of Au substrates. Importantly, we achieve high Q factor and super low threshold plasmonic WGM lasing in the fabricated systems, in which the Q factor is up to 5790 and the threshold is down to less than 0.5 KW/cm 2 which is the lowest threshold value reported to date for ZnO nanostructures lasing, at least 10 times smaller than that of the most of ZnO at the nanoscale. Finally, we propose the physical understanding to the enhancement and quenching of the photoluminescence of ZnO on the surface of Au substrates. Therefore, these results are useful for design ZnO-based plasmon lasers. Note that many reports were involved in the case of ZnO nanowires covered by Au nanoparticles [27][28][29][30][31] . However, we suggest that the system of a ZnO nanostructure on the surface of Au substrates should be distinctly different from the case of ZnO nanowires covered by Au nanoparticles for plasmon lasers.
Results
Figs. 1(a-b) show the typical SEM images of the prepared ZnO hexagonal microrods in different magnification. Clearly, these rods have perfect hexagonal shape with the size about 1-2 mm diagonal and 20 mm length, and their size is uniform. Fig. 1(c-d) show an individual ZnO microrod lies on silicon and Au substrates. Fig. 2 shows the different surface roughness of four Au substrates. We can clearly see that the surfaces of A1 and A2 substrates are rough, and the roughness increases with the thickness of gold films. However, the surface of the A3 substrate is the roughest among these substrates, because the surface actually consists of isolated nanoparticles.
From Fig. 3, we can see that the different PL of ZnO microrods on Si, A1, A2, and A3 substrates, respectively. In detail, we can clearly see that the ultraviolet (UV) and defect emissions of ZnO are enhanced on A1 and A2 substrates compared with that of ZnO on the Si substrate, i.e., the rough surface of Au substrates can strongly enhance both UV and defect emissions of ZnO on the A1 and A2 substrates compared with that of the Si substrate in Fig. 3(a). Meanwhile, more WGMs of ZnO emission emerge on these substrates in Fig. 3(b-c). Therefore, these experimental observations show that the interaction between ZnO rod and substrate can greatly enhanced the WGMs of ZnO emission on the surface of Au substrates. In addition, there is a red shift both in UV and visible emissions of ZnO on these substrates. However, both UV and defect emissions of ZnO are quenching on the A3 substrate in Fig. 3(a). Thus, these results show that the enhancement of ZnO emission greatly depends on the surface roughness of Au substrates.
Further, taking an individual ZnO hexagonal microrod on the surface of the A2 substrate as an example, we study the lasing behavior and the results are shown in Fig. 4(a-c). Note that, in our case, for a ZnO rod on the bare Si substrate, no lasing on the WGM cavity modes is observed. From Fig. 4(a), we can see a strong UV lasing at 386 nm. Spectrum between 386-390 nm was enlarged in the inset of Fig. 4(a). It is clear that the mode was not cut off at 386 nm. It is just because the lasing peak is so high that makes other modes suppressed. The threshold value of lasing is an important parameter for the performance of devices. Fig. 4(b) shows the threshold values of an individual ZnO hexagonal microrod on the Au substrate by increasing the pumping energy. In detail, the lasing threshold value is typically found to be 0.45 KW/cm 2 . Remarkably, the threshold is down to less than 0.5 KW/cm 2 , which is the lowest threshold value reported to date for ZnO micro-and nanostructures lasing 39 , at least 10 times smaller than that of ZnO at the nanoscale. Fig. 4(c) shows the defect lasing experiment. No good lasing peak was observed, though it seems that two lasing mode at 546 nm and 612 nm may arise. By increasing the pumping energy, only the mode outline became clear.
For optical resonators, analyzing the observed resonance to understand the nature of the light confinement is very important. Fig. 3 shows obvious cavity modes despite being below the threshold, which indicates the excellent cavity feedback. The Q factor of ZnO microrods is given by Q 5 l/Dl, where l and Dl are the peak wavelength and its full width half maximum (FWHM). As there are many modes, we choose the strongest peak to calculate the Q factor under threshold. In the lasing condition above threshold, the Q factor of the A2 substrate is up to 5790 at the wavelength of 386 nm. Without appropriate cavity feedback, lasing phenomena could not happen. Various feedback mechanisms may account for the lasing, such as an F-P cavity, a random resonant effect or WGM. Since the lasing in our experiment is observed from a single ZnO disk, it cannot be attributed to random lasing caused by multiple scatting in a disordered medium. It is concluded in Ref. 32 that an F-P cavity provides a very low Q factor, as low as 468, which is much lower than our Q factor. It is also notable that the WGM is more preferable because of a better optical confinement. So we ascribe the observed lasing to WGM. These results suggest that the system of an individual ZnO hexagonal micro-and nanorod on the surface of Au substrates is expected to be a promising candidate for achieving ZnO-based plasmon laser.
Based on our experiment results, this hybrid system can really couple surface plasmon modes with WGMs, which can induce to a high Q factor below threshold. The Q factor of surface plasmon polaritons (SPPs) guided WGMs, Q SPP1WGM , is limited by two factors expressed as follow 27 Q SPP is the metal loss experienced by the propagating SPP mode due to penetration into the gold layer. Q WGM is the radiation loss experienced by WGM. Therefore, these plasmonic WGMs bridge the space between traditional Fabry-Pero modes, WGMs and localized void plasmons. Such coupling of SPP and WGM works when the surrounding gold film provides the delocalized plasmon 28 . These results thus provide the important information that a ZnO hexagonal microrod with total internal reflection on the surface of Au substrates has a potential for plasmon laser if the surface geometry of Au substrates is appropriate. Furthermore, the clear signature of multiple cavity mode resonances at low pump powers demonstrates sufficient material gain to achieve full laser oscillation. Meanwhile the cavity feedback is abundant.
Discussion
For the ZnO micro-and nanorods, when excited by photons, electrons in the valence band (VB) can be excited to the conduction band (CB) creating electron-hole pairs. However, metal contacted with ZnO surface can alter its emission behavior through different mechanisms. Using the radiating surface plasmon mechanism, Lakowicz 31 explain that surface plasmon resonance scattering of metal can enhance PL emission of semiconductors, while surface plasmon absorption can cause a quenching. In detail, PL enhancement in semiconductors usually occurs on rough metal surface or metal nanoparticles with large sizes, and quenching occurs on smoother metal surface or metal nanoparticles with much larger sizes. In order to gain the origin of enhancement and quenching in our experiments, the extinction spectra of the three Au prepared under different condition had been measured. It determines both absorption and scattering of the Au. Two SPR modes associated with the two axes of Au oblate spheroids were observed. The high energy band near 360 nm (out-of-plane mode) is due to the oscillation of the normal mode 33 , and the low energy band comes from the dipole oscillation parallel to the substrate plane (in-plane mode). As indicated in Fig. 5(a), the emission of ZnO at about 380 nm overlaps the longed wavelength spectral component of the out-of-plane resonance band of Au. It is known that due to the presence of the Au with proper rough and dipole-dipole interaction between the neighboring particles in the film, the SP band broadens in the longer wavelength spectral region and the scattering coefficient is raised greatly in that region 34 . Thus, by means of the effective SP radiative scatting the enhancement of the band gap emission at 380 is expected for A1 and A2. However, when the Au particles became too big and separated far from each other, the attenuation effect due to the reflection and absorption would dominate 35 . The inset in Fig. 5(a) which is the enlarged extinction spectrum of A3 shows that UV emission of A3 would not increase. For the defect emission at 560 nm, it located much further from the spectral component of the in-plane resonance band for A3. Under these conditions, the SP absorption attenuation is more effective than the radiative scattering, leading to the quenching of emission near 560 nm. However, the in-plane resonance band redshifts in A1 and A2. Consequently, the defect emission at 560 nm now approaches the wavelength spectral region of the in-plane resonance band. Then, the SP scatting at 560 nm will be raised and the emission can be effectively enhanced. It is suggested that only by proper resonance scattering of SP, the light emission can be enhanced.
In fact, electron transfer between semiconductor and metal takes place when they are in direct contact, and the direction of electron transfer depends on the band structures of semiconductors and the energy states of metal. To obtain a clear picture of the quenching and enhancement of PL in the system of ZnO hexagonal rods on the surface of Au substrates, the underlying mechanism is proposed as shown in Fig. 5(b). In drawing the band alignment, we use the data as follows. The conduction band of ZnO is located at 20.8 eV and the Fermi level of gold is 20.75 eV vs normal hydrogen electrode (NHE) 36 . As the large work function of ZnO, its Fermi energy level lies lower than the Femi energy level of gold, and the initial electron transfer between Au and ZnO will cause a band bending.
As the average particle size of the Au films deposited by sputtering is 4-100 nm, scattering dominates over absorption process. Electrons of Au films are excited to the surface plasmon level which is higher than the CB edge of ZnO, so electrons transfer to CB. The electrons accumulating in the potential well at the interface will increase substantially. Even if the photon generated by electron-hole pairs are separated at the interface, the radiactive recombination probability of electrons on CB with holes on VB will raise a lot due to the replenishment of electrons from Au, which results in an enhancement of the ultraviolet emission of ZnO. For the defect emission, since the Fermi level of Au is higher than the deep defect level of ZnO, it is therefore possible that the electrons can transfer from the Fermi level of Au to the deep defect level of ZnO. Thus the defect emission is enhanced 37 . And the wavelength of the plasmon is close to the defect wavelength. Thus, coupling of these two waves make it possible for PL enhancement. When the average size of the Au film becomes larger after being annealed, PL is quenching. Note that too much rough surface of Au film acts as an electron trap 38 . Electrons transfer from ZnO to Au, as a result, the defect emission is effectively suppressed and no enhancement for UV was observed. When the energy of surface plasmon is lower than that of excitons, energy transfers from excitons to surface plasmon occurs, which results in a red shift of the emission peaks of all.
In summary, we have demonstrated that the hybrid optical waveguide that consists of an individual ZnO hexagonal microrod on the surface of Au substrates suggests new sources that may manufacture miniature plasmon laser of ZnO by surface plasmon polariton coupling to WGMs. The lasing experiments of the fabricated ZnO hexagonal rod-Au substrates showed that the surface plasmon modes coupling with the WGMs can lead to a large Q factor and an abnormally low threshold for lasing, e.g., the Q factor is up to 5790 and the threshold is down to less than 0.5 KW/cm 2 which is the lowest value reported to date for ZnO nanostructures lasing, at least 10 times smaller than that of ZnO at the nanometer. The physical understanding was proposed to address enhancement of ZnO emission on different surface of Au substrates based on electron transfer mechanisms. These findings make this unique coupling mode promising for a variety of applications in data storage, bio-sensing, optical communications as well as all-optic integrated circuits.
Methods
Materials preparation. The preparation of ZnO hexagonal microrods is conducted in a horizontal tube furnace by chemical vapor deposition. A mixture of commercial ZnO and graphite powders with a weight ratio of 151 is loaded on a quartz boat and posited at the higher temperature of the tube. Single-crystal Si wafers are cleaned by a standard procedure and placed downstream 3 cm far from the powder source at the lower temperature to act as a deposition substrate. Then, the system is heated to 1100uC for 60 min and kept at this temperature for 30 minutes without any catalyst in the air. After it was cooled down to room temperature, we observe a gray film on the substrate when it was moved out. A little and sharp knife is used to scrape the gray film into the deionized water and sonicating it for 2 minutes. Finally, the suspension is pipetted onto the Au substrates.
Au substrates are fabricated by depositing Au films on Si wafers. Two Au coatings with different thickness are deposited on Si wafers by a sputtering technique. The thickness of Au films is controlled by a current of 40 mA with two deposition times of 60 and 180 s, respectively. These two Au substrates are named to be A1 and A2 substrates. Additionally, another A2 substrate is annealed at 600uC in the air for 30 min to changing the surface morphology, which is named A3. The thicknesses of gold films on A1 and A2 substrates are about 40 and 90 nm, respectively. And the sizes of the gold particles on the A3 substrate are about 200 nm. In order to make the ZnO microrods attach to the surface of Au substrates tightly, all the samples are annealed at 100uC for 10 min.
Characterizations. The fabricated systems were characterized by scanning electron microscopy (SEM) and the photoluminescence (PL), and lasing measurement was carried out by Renishaw inVia Raman Microscope with 325 nm excitation wavelength and the acquirement range was from 370 to 640 nm. The resolution of the microscope is in the order of mm, so we can acquire the PL and lasing from the individual ZnO microrod. The excitation laser of the m-PL system is focused to a spot of 10 mm in diameter. The excitation laser source is continuous-wave He-Cd laser. The original power of the laser is 30 mW and the power of the laser reached the sample is 9 mW. A variable neutral density filter is used to attenuate the power. All these equipments are connected to a computer and operated in a WiRE-Single scan measurement system. In this system, the percent of the measured power can be chosen. The maximum excitation is about 11.5 KW/cm 2 . Then 0.05%-9% of the maximum excitation was used to acquire the lasing spectrum. Atomic force microscope (AFM) was used to characterize the surface roughness of various Au substrates. All measurements were carried out at room temperature. | 4,600.4 | 2015-03-05T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
Human Action Recognition: A Taxonomy-Based Survey, Updates, and Opportunities
Human action recognition systems use data collected from a wide range of sensors to accurately identify and interpret human actions. One of the most challenging issues for computer vision is the automatic and precise identification of human activities. A significant increase in feature learning-based representations for action recognition has emerged in recent years, due to the widespread use of deep learning-based features. This study presents an in-depth analysis of human activity recognition that investigates recent developments in computer vision. Augmented reality, human–computer interaction, cybersecurity, home monitoring, and surveillance cameras are all examples of computer vision applications that often go in conjunction with human action detection. We give a taxonomy-based, rigorous study of human activity recognition techniques, discussing the best ways to acquire human action features, derived using RGB and depth data, as well as the latest research on deep learning and hand-crafted techniques. We also explain a generic architecture to recognize human actions in the real world and its current prominent research topic. At long last, we are able to offer some study analysis concepts and proposals for academics. In-depth researchers of human action recognition will find this review an effective tool.
Introduction
Researchers are showing increasing interest in human activity recognition, as shown by the growing number of research publications in the field over the last ten years (Figure 1). The reason behind this incremental trend is the many different areas in which it is used, including human-computer interaction (HCI), surveillance cameras, virtual reality (VR), and elder care. By looking at research papers published in computer vision and machine learning journals and conferences, we can see a trend in this direction. Even though the number of publications in this field has been going up, the exact rate of growth may vary, depending on things like how popular certain sub-topics are within human action recognition, changes in funding and resource allocation, and the development of new and innovative methods. However, anyone can find academic articles about human action recognition by searching online academic databases like Google Scholar, MDPI, IEEE Xplore, or ACM Digital Library. These databases track and index research publications in many fields, including human action recognition. The widespread use of computer vision for human activity identification is an important step toward implementing practical solutions. For instance, in healthcare contexts, it may make it easier for technology to monitor and analyze the progress of patients who are undergoing motion rehabilitation, by removing the demand for wearing sensors. It may allow the identification of elderly in an emergency state, such as having fallen down. It may give the essential information to alert a robot that has been trained to aid in such situations or alert an appropriate organization [1]. If we take academia as an example, this technology might be utilized to expand the capabilities of robots, giving them the ability to enhance social interaction skills in autistic spectrum disorder cases [2]. Human activity recognition is particularly useful for sports areas, since it can record and evaluate the performance of players, allowing for the further growth of their abilities. The notion that robots can execute desired tasks by reading human intentions in human-robot interaction or cooperation situations is applicable. By using human activity detection, it could also be used in virtual reality and augmented reality apps to enable the user to use augmented reality in a natural way.
There has been a lot of progress made in the field of human action recognition in recent years, with a number of interesting new products appearing on the market. Here are a few examples: • Smart cameras: These cameras use algorithms, based on artificial intelligence (AI) and machine learning (ML), to track and identify people's actions in real time. • Wearable Devices: Wearable technology uses sensors to monitor the wearer's every move, allowing for accurate recognition of common physical motions like running, jumping, and walking. • Health and Fitness Apps: Apps for health and fitness track and analyze user data using artificial intelligence and machine learning algorithms to make suggestions and give feedback, based on specific activities like running, cycling, and swimming. • Automated Surveillance Systems: For security and safety reasons, automated surveillance systems are available that use artificial intelligence and machine learning algorithms for human action identification. • Human-computer interaction: systems that employ human action recognition for human-computer interaction are available, with examples including gesture recognition in gaming and virtual reality.
These are just a few examples of new products made to recognize human actions. The field is dynamic, so it is reasonable to expect plenty of new and interesting advances in the near future.
Understanding human activity is more difficult than ever due to changing online environments, occlusion, different viewpoints, execution pace and biometric changes. Online Adaptation is the capacity to identify activities that occur in continuous video feeds and respond immediately by classifying the event. When it comes to action identification, the traditional method often puts an emphasis on categorizing the manually-clipped actions, and contrasts with online action recognition. While traditional action recognition is easier, in that it only requires classifying the manually-trimmed actions, online action recognition is more difficult. It needs to detect and recognize the occurrences of actions without classifying them, and it must be done in the presence of only partial actions. Occlusion may create difficulty in differentiating between various bodily parts, due to interocclusion and self-occlusion [3]. As the human body alters with varying sizes, appearances, shapes, and distances, viewpoint and biometric variability results in significant intra-class variance, which, in turn, impacts the performance of algorithms. It is also possible that these differences in execution rates are caused by differing performance styles and speeds.
The recent advancement of Convolutional Neural Network (CNN) has led to remarkable progress in human action recognition in videos. Several tasks. like classification, segmentation, and object detection, have significantly improved through CNN. Unfortunately, the impact of this progress is mostly on image-based tasks. There was not initially as much focus on the video domain, due to the inability of neural network models to capture temporal information in video and due to a lack of large data sets.
There have been many research articles summarizing work employing methods of human activity recognition in the computer vision field. A broad description of available RGB action datasets is given by Zhang et al. [4]. Chen et al. [5] evaluated human action recognition techniques that make use of the concept of depth. Kinect sensor-based motion detection applications were demonstrated by Lu et al. [6]. In [7,8], skeleton-based action recognition algorithms were discussed with multiple anatomical characteristics. In addition, there are other reviews on activity recognition, such as [9,10]. The paper by Zhu et al. [11] primarily evaluated RGB data-based human activity identification.
The study of recognizing and classifying human actions is referred to as "human action recognition". Researchers in the field of recognizing human actions often look at the following types of actions: This is, by no means, a complete list, and different researchers may focus on different types of activities depending on their interests and the topic at hand. The percentage of different human activities that have been studied during the last decade is shown in Figure 2. Based on the complexities of human actions, there are four basic types of human activity [12], and they are defined as follows: actions at the atomic level, between human and object, between pairs, and within groups. The emphasis of this study is on these four kinds of action done by a single individual, or several individuals. Our study includes a thorough investigation of hand-crafted human action recognition, as well as systems based on learning. Moreover, our paper discusses practical problems and possible answers for future studies that might help improve human action recognition.
Our Contributions
In brief, the paper's key contributions are as follows: • We provide a detailed introduction to human activity recognition using computer vision. • Our comprehensive analysis of action recognition was facilitated by examining both conventional and deep learning-based approaches. • We present a generic framework for recognizing human actions in videos.
•
In order to classify all the different approaches to human action recognition, we proposed a new taxonomy, and present a detailed discussion with recent work in regard to our taxonomy. • This study explores the challenges associated with existing approaches to actions and interactions, as well as emerging trends for possible future paths in the detection of complex human behavior and online activity.
We structured our paper as follows: Section 2 takes a look at the overall human action recognition techniques. In Section 3, we provide a generalized framework for identifying human actions. Section 4 presents research method and taxonomy for human action recognition and reviews the approaches based on feature extraction and activity types. Reviews on handcrafted methods and machine learning methods, such as deep learning, for human activity identification, as well as their capabilities in a variety of datasets, are also presented in this section. Section 5 represents the popular public datasets and approaches for human action recognition. Evaluation metrics and performances on different datasets are discussed in Section 6. Section 7 examines the issues, opportunities, and future directions of human activity recognition. Finally, Section 8 concludes and outlines potential avenues of research.
Overview
Research on human activity detection may be separated into methodologies, depending on feature extraction and the sorts of activities that are being studied. As a result of progress in machine learning studies, the human action recognition methodologies for accessible datasets may be classified as either manually-built features, using machine learning methods, or fully-automated methods, based on deep learning. It is important to remember that the fundamental objective is to acquire reliable human action characteristics, independent of the data format or processing method used. It has been suggested that spatial and temporal salient point characteristics [13,14], spatial and temporal density features [15,16] and combined trajectory features [17,18] may all be used to analyze RGB data. Human action depiction and identification using handmade features are hindered by issues like the limitations of human identification and posture estimate algorithms, camera motion, occlusion, and complex scenarios.
The use of depth sensors allows real-time, reliable human posture estimation since changes to the foreground or background have no effect on the accuracy of depth data, which enables objects to be quickly classified by their relative depth. Systems that use depth data and skeleton sequences to identify human actions have high recognition accuracy with little computational burden. In human activity recognition research, these approaches are widely used [4,[19][20][21][22]. Due to the high cost and precision requirements, these techniques are useful at close range and in specialized environments. The most common types of depth cameras include triangulating (using data from more than one camera), structured light (SLT), and time of flight (TOF) cameras. Large mistakes and poor accuracy are common in outdoor conditions when using SLT or TOF depth cameras because of their sensitivity to light. While the dual camera system is cheaper, it is more difficult to use in low-light situations because of the intricacy of the depth information computation. Laser scanners, for example, may also be used to determine depth, although they are costly and unsuitable for surveillance and residential monitoring.
Microsoft's new sensor, Azure Kinect [23], is made for applications that use artificial intelligence (AI) and the Internet of Things (IoT). The Azure Kinect sensor has a number of advanced features, such as a depth camera, RGB camera, and microphone array. The Azure Kinect sensor is part of Microsoft's Azure AI platform and is supported by a range of machine learning models that are specifically designed for AI and IoT applications. Microsoft is empowering developers to create a wide variety of cutting-edge AI and Internet of Things applications by integrating the sophisticated capabilities of the Azure Kinect sensor with strong machine learning models.
Automated feature learning from photos, using deep learning approaches, outperforms handmade features. Numerous attempts have been made to utilize deep learning strategies to extract features using RGB, skeletal, and depth information, and this provides a fresh overview of human action detection. Data include overall outlook features, depth information, and optical flow information, and skeletal sequences may be used for multimodal feature learning [24][25][26][27] from deep networks. Human action characteristics may be learned from either single-mode or multi-modal combined data using deep learning networks. Visual patterns, as well as optical flow data, are often utilized for input into deep learning techniques, with only a small number of approaches based on skeletal and depth data being used. The field of action feature extraction has recently gained a lot of interest due to emerging high-efficiency posture estimate techniques that leverage deep learning [28][29][30], and this is currently an important study area in human activity recognition [31].
Action categorization and detection are two distinct aspects of human action recognition. Segmented videos with just a single action in them may be used to classify actions using action classification. This is done by detecting the start and finish timings of each activity, locating them in space, and classifying them as either simple or complex actions. In the early stages of human activity recognition research, the challenge of classifying actions was the primary emphasis. Human action detection research has been increasingly popular over the years as a result of the growth of associated research subjects, like human posture estimation, object recognition, and deep learning [32][33][34][35][36][37].
Recognition of human actions is the topic of extensive research. The complexity of human action may be divided into four categories: atomic level, between human and object, between pairs, and within groups [12]. Atomic action is the simplest kind of action, including just the movement of the human body's components. Various simple motions may be used to create more complicated activities. When it comes to basic motions, "waving", "raising a foot", and "bending" are among the most prevalent. An individual action is defined as "walking", "punching", or "jumping" when it is performed by a single individual. Human-to-object interactions, like "holding a knife" or "playing piano", are all examples of interactions. An activity that involves numerous people or things is known as a "group action" and includes parades, gatherings, meetings, fights, and other similar events. First-and second-level action recognition have been extensively studied in the past.
Research into group action detection is still very much in inception, despite the increased attention it has received in the last several years.
Significant Achievements
Numerous laboratories and companies are investing time and resources into developing systems that recognize human actions with the use of AI and ML, particularly deep learning. Lists of the most important contributions made by the many different research groups and organizations working on human action recognition are notoriously difficult to compile. Among these organizations, the following have made particularly significant contributions: However, this is, by no means, an entire list, and many other groups and organizations have made important contributions to human action recognition as well.
Human Action Recognition Framework
Based on the kind of data analyzed, HAR, in earlier studies, was classified into two basic techniques: vision-based and sensor-based [47,48]. The first examines photos or videos captured by optical sensors [49,50], while the second investigates raw data from wearable sensing devices and monitoring devices [51,52]. Optical sensors may be distinguished from other kinds of sensors by the data they collect. Optical sensors, as opposed to wearable sensors, produce two-, three-, or four-dimensional pictures or videos. As a sensor-based HAR, wearable devices are an excellent example, since they are worn by the wearer to monitor and track a wide range of actions, like running or jogging, sitting, and resting [53]. A sensor, on the other hand, does not operate if a target is either too far away [54] or conducts behaviors that are not recognized by the sensor [55]. When it comes to vision-based HARs, CCTV systems have long been used [49]. Video-based systems for recognizing gestures and activities have been extensively investigated [56,57]. Furthermore, this issue is particularly advantageous to security, surveillance [58,59], and interactive applications [60,61]. Vision-based HAR has continued to be the primary focus of study in recent years, since it is more cost-effective and simpler to acquire than data captured through sensors. Therefore, this research only includes a limited, yet representational, range of research based on computer vision.
There are four major components to the human activity recognition framework, shown in Figure 3. The first is the data collection phase, which consists of capturing data using optical sensing equipment. The second is the pre-processing phase, which includes significant pre-processing stages regarding the collected data. The third is the learning or training stage, where features are learned from the dataset using techniques like machine learning and deep learning. The fourth is the activity recognition, or classification, phase.
Research Method and Taxonomy
A complete, accurate, and up-to-date review and comprehensive taxonomy of human action recognition necessitates a methodical and rigorous approach to research. In order to conduct this survey on human action recognition, we used the following research methodology:
•
Defining the scope and objectives: Goals and scope were established by first detailing what would be included in this study, which, in this case, centered on the many aspects of human action recognition. In this article, we give a brief overview of human action recognition, including where it came from, how it has changed over time, and how far it has come right to the present. • Conducting a comprehensive literature search: We searched academic literature extensively to find studies, articles, and publications pertinent to the study of human action recognition. We used Google Scholar, MDPI, PubMed, and IEEE Xplore, among many others, to accomplish this. • Evaluating the quality of the literature: We evaluated the quality of the literature we found by looking at aspects like the validity and reliability of the research methods used, how well the results fit with the goals of our review, and how well the data was analyzed and interpreted. • Classifying the literature: We organized the material we collected in terms of the precise components of human action recognition we were examining, using a classification system. Methods based on feature extraction, and methods based on activity types, and so on were all included. • Synthesizing the literature: To synthesize the literature, we summed up the main points of each research article we studied, compared and contrasted their methods and results, and added our own original thoughts and conclusions. • Analyzing and interpreting the data: We studied and interpreted the data from the literature review in order to address the particular issue, make conclusions, and find gaps in the present body of research.
We took a methodical and exhaustive approach to authoring this review of human action recognition in order to generate a high-quality, comprehensive, and up-to-date study that offers significant insights into this interesting and rapidly evolving topic.
Action classification issues in the present study cover the four semantic levels of action (atomic, behavior, interaction, group). only the first two categories (atomic, behavior) of action categorization have been the subject of previous studies. There has not been a lot of work on the topic of recognizing group activity in the scientific community yet, despite the current rise in interest in interaction recognition. Action feature representation is a fundamental issue in basic categorization and in action performed by a single individual. Research on action recognition, on the other hand, focuses mostly on basic actions and single-person actions. Since this research is based on a survey, we review and examine action identifying approaches considering two perspectives: feature extraction and activity type. This is shown in Figure 4. The next sections provide an in-depth look at human action identification techniques, including methods for extracting human action features, and methods for recognizing atomic, behavior, interaction, and group activities.
Handcrafted Representation Method
There has been a lot of recent interest in the field of handcrafted approaches to action recognition. Handcrafted feature representation has achieved great performances on different action classification problems. This approach intends to retrieve the temporal and spatial features in videos and to extract local descriptors from the frames of videos. Traditional machine learning often makes use of these features, like SVM and likelihood outline models, to recognize activities in raw videos. Handcrafted-based strategies use human perception and historical context to obtain actionable insights from data. There are typically three main stages to these kinds of strategies: (1) Segmentation of action (2) Selection of features and (3) Action classification, based on captured features. In order to build the descriptor, key features are extracted from the source video segments. The categorization is carried out via a general-purpose classifier, thereby expanding the method's adaptability, giving rise to lower computational costs and not being dependent on large data sets for training. Depending on the data modality the hand crafted approach can be categorized in three methods: techniques based on depth, techniques based on skeletons, and techniques based on hybrid types of features.
Depth-Based Approaches
As depth cameras and range imaging methods [62] have improved, researchers are able to more precisely execute HAR. To aid computers in recognizing human activity more precisely, RGB-D cameras gather depth information in addition to the original RGB data (as seen in Figure 5). Depth-based methods for action recognition take the depth images as input and detect the foreground to extract the human body and its corresponding movement in action. Several researchers [22,[63][64][65][66] projected the depth information of an image frame in 3D, such as views from the top, front and side, so as to extract the features. In [63] 3D points from the human body were derived from depth mapping to model the corresponding postures for human action recognition. However, it is computationally costly and time consuming to process a huge amount of 3D extracted points in large datasets. A method for human action recognition was proposed by [67], in which they used depth maps to generate a Depth Motion Maps (DMM) and, then, the DMM was used to compute a Histogram of Gradients (HOG). However, Chen et al. [22] claimed that, by substituting the HOGs with sequences of DMMs, computational cost could be reduced and accuracy of action recognition improved. To deal with the diversity in action speed, a multi-temporal DMM [65] was introduced to extract motion and shape information from different range of depth sections. Bulbul et al. [64] enhanced the shape attributes of DMMs by a new feature descriptor combining Contour let Transform (CT) and HOGs. However, the surroundings of 3D points were not considered in these methods, and, thus, the necessary information for action recognition might be missed. The 3D points collected from the surface image frame can be used to calculate the normal vectors so as to extract the motion and shape features in an action recognition model [19,69,70]. In [19] a novel descriptor for action recognition, using depth sequences, was proposed. This descriptor can capture shape and motion information simultaneously from a 4D space normal orientation histogram, using depth images, time and coordinates. A super normal vector was proposed by [69], using polynormal to encode local shape and motion information. Slama et al. [70] introduced a framework, in which they modeled local displacement features as sub-spaces lying on a Grassmann manifold to also create a density function of probability for action categorization.
On the other hand, several researchers introduced the segmentation of depth data to the point of interest and to extract the features for activity detection. Wang et al. [71] presented a method for extracting semi-local features which explored extensive sampling space to reduce noise and occlusion. A method for depicting local features surrounding the point of interest from videos was presented by [72]. In [73] a local point descriptor, obtained through sampling movement and structural features to present human activity in a depth context, was proposed. Liu et al. [74] produced spatial-temporal interest points (STIPs) using movement and structural features, extracted by means of noisy depth data. They introduced a bundle of visual words, a two-tiered model which removed noise and represented both shape and motion gestures. However, the research scope of these approaches are limited by computational cost and the need to detect interest points using all depth data from videos.
Skeleton-Based Approaches
It is also possible to derive information on the human body's skeleton from depth measurements, as seen in Figure 6. The low-dimensional space [75] of skeleton data makes HAR models run quicker. Exploiting depth cameras to create a 3D human joint is a promising research direction because of the wide range of potential applications. Human action representation based on body skeleton is considered an open research topic among researchers. Human body joints can be represented with 2D/3D coordinates from depth images and can be used to extract motion features by tracking their movements.
Many Skeleton-based action recognition approaches are proposed by researchers and can be categorized into two types: approaches based on trajectory and approaches based on volume. Trajectory-Based Approach: Approaches based on trajectory investigate the spatial and temporal movement of the human body's skeleton to extract different features. A trajectory descriptor, based on 3D points from the human skeleton, was proposed by [76], in which they integrated several 2D points to extract the movement of all joints. In [43], human actions were represented by calculating the relative and progressive features of skeleton joint angles. To collect enough dynamic and static knowledge, Qiao et al. [77] used trajectories of local features to create a constrained time window. Devanne [78] represented action movement sequences as spots in free-form curve space by projecting the position information of skeletal joints onto a Riemannian surface. The human action classification was then accomplished by calculating the resemblance on the manifold of paths. Guo et al. [79] suggested a gradient variance-based function to reflect movement trajectories of rigid bodies in six dimensions by decomposing the human body skeletal data considering five parts. After sparsely historizing the coded skeletal representations, a support vector machine (SVM), using a chi-square kernel, was applied for action detection. To improve skeleton-based action identification, ref. [80] proposed PoseConv3D. Instead of using a graph pattern to depict human bones, PoseConv3D uses a three-dimensional heatmap volume.
Volume-Based Approach: Texture, colour, pose, histograms of optical flow, histograms of directed gradients, and other features can be used in volume-based methods to reflect video as a spatial-temporal volume. Similarities between the two volumes are used to identify behavior. When the scene is noisy, volume-based techniques are ineffective. They are generally good for detecting very basic motion or gestures. Two-dimensional focus point detectors [81], include the use of techniques, such as scale invariant feature transformation (SIFT), as well as other techniques, such as corners and the Gaussian Laplacian, and were used to detect 3D interest points presented in [82]. Chaaraoui et al. [83] used an evolutionary technique to choose an optimum collection of skeletal data to create primary pose orders for every movement using dynamic time warping (DTW). Movements of five different bodily components were used by [84] to project their relative 3D structural relationship, for which human action was represented as curves. Even though there were overlapped areas of body parts, this approach could expose the concurrence of body parts, while isolating body parts could be difficult. Recently, ref. [85] specifically examined skele-tal data to jointly acquire various pair-wise relational connections among various human objects, enabling group action detection.
Hybrid Feature-Based Approaches
A combination of multi-sensory information, including color and depth maps, as well as skeletal data, can improve detection performance. There are a number of suggested approaches that use a combination of joint and depth image features to extract the matching depth data surrounding skeletal points. A new function, called the "local occupancy pattern" (LOP), was developed by [86,87] to obtain details of the visuals of every point by means of capturing local depth data. Through embedding skeleton data into depth sequences, ref. [75] partitioned the human body into many motion sections. A discriminatory descriptor was created by combining local features derived from these motion pieces. In [88], an hierarchical hidden Markov model's base layer was also used to correlate depth knowledge of objects around skeleton joints. Using a random forest-based fusion technique, ref. [89] coupled motion based on spatial information and interest points. In order to choose informative skeleton frames, Yang et al. [21] suggested using the cumulative movement energy feature extracted from depth maps and the minimize computational expenses by excluding noisy frames. After computing eigenjoints, they employed the non-parametric Naive Bayes Nearest Neighbor method to differentiate between various behaviors.
Some researchers suggest using RGB data in addition to a mixture of skeleton joints with depth frames. Ref. [90] modeled motion characteristics using skeleton joints to provide descriptions of appearance signals. A linked, hidden, conditionally randomized field model [91] was presented for learning the dormant relationship of ocular characteristics through depth and RGB sources. The temporal sense within each modality was maintained in this model when learning the association between two modalities. For action recognition, ref. [92,93] produced spaces by projecting details using RGB and depth photos and then using layers to keep individuals' places discreet, indicating that information and similarity from multiple sources may be exchanged to minimize noise and increase efficiency. Stateof-the-art methods for HAR, based on handcrafted features, are shown in Table 1.
. Deep Learning Representation Method
Deep learning is a branch of machine learning that uses hierarchical algorithms to learn high-level abstractions from data. It is a prominent approach that has been extensively used in conventional AI domains, including semantic parsing, transfer learning, natural language processing, computer vision, and many others. The appearance of large, highquality, and publicly accessible marked datasets, as well as the empowerment of parallel GPU computation, allowing transformation from CPU-based to GPU-based training, and, thereby, facilitating substantial acceleration in deep model training, are two of the most prominent factors that contributed to deep learning's massive boost. Neural networks, hierarchical probabilistic structures, and a number of unsupervised and supervised feature learning algorithms are all part of the deep learning family of techniques. Deep learning approaches have recently sparked interest due to their ability to get better performance than state-ofthe-art strategies in a variety of activities, as well as the availability of diverse data from various sources.
Computer vision has accomplished noteworthy outcomes by shifting from hand crafted features to deep learning-based features. Deep Learning-based action recognition is gaining much attention these days because of its remarkable performance and power in extracting features from multi-dimensional datasets. Unlike the classical machine learning hand crafted method, where features need to be modeled to recognize human action, deep learning methods insert each feature into a deep network as input and learn the complex information through several layers. Deep learning models are very much data hungry and computationally costly in the training phase. The objective of these models is to learn various types of representation that offer an automated extraction of necessary features for action recognition. Action recognition approaches using deep learning structure could be classified as Convolutional Neural Networks, Autoencoders, Recurrent Neural Networks, and Hybrid Models.
Convolutional Neural Networks (CNN)s
Convolutional Neural Networks (CNNs) are widely recognized as a leading technique in the field of deep learning, in which several layers are robustly equipped. It has been shown to be highly accurate and is the most extensively employed in many computer vision tasks. Figure 7 depicts the general CNN architecture's flow. There are three primary types of neural network layers that make up a CNN, and they are, respectively, referred to as convolutional, pooling, and fully connected. Different layers have different functions. A forward process and a backward process are used to train the network. The forward process's primary objective is to represent the input image in each layer using the current parameters (weights and bias). The losses are then calculated using predicted and actual values. The majority of CNN-based action recognition approaches include converting the locations or translation of skeletal components into visual representations, which are then classified using CNN. A linear interpolation method was used in [110] to build threedimensional skeleton joints to provide four sets of 2D maps, each representing a different joint's location in space, using a linear interpolation function. To make use of the generated range mapping in conjunction with AlexNet, the action was categorized. Ke et al. [111] used the relative locations to make three clips of grayscale images. The local structure knowledge was integrated to detect action, by loading grayscale images into the VGGNet that had already been trained, thereby producing a network capable of learning many tasks at once. Despite the fact that an overall image scaling procedure might introduce additional noise for the network, ref. [112] suggested simply inputting a skeletal image to a revised CNN-based Inception-ResNet structure for activity identification. The disadvantage of this approach is that it assumes that each operation has a set number of input frames. In [27,113] they encoded the spatial and temporal details of three dimensional skeletal orders across three combined trajectories, depending on the three perspectives (top view, front view, and side view). Three ConvNets, trained using trajectory maps, were fused late in the process to get the classifications. Xie et al. [114] Morshed used a temporal readjustment inside a residual training module to reduce the differences across skeletal sequences in the spatial-temporal domain and then modeled this data using convolutional neural networks in action detection.
Unlike traditional approaches, Yan et al. [31] built a deep graph neural network that automatically extracted the spatiotemporal pattern from the skeletal data by using joint coordinates and estimated confidences as graph nodes. Using a neural network design, Huang et al. [115] demonstrated that a non-Euclidean Lie cluster configuration [116] might be included in deep learning by extracting temporally coupled Lie group structures for action identification. Liu et al. [117] presented a method wherein a body structure transformation picture with figure posture was created to decode snippets of motion. To mitigate domain-shift and enhance the model's generalizability, Tang et al. [118] developed a selfsupervised training framework in an unsupervised space adaptation environment that split and permuted individual time sequences or parts of the body. Ref. [119] presented a technique, termed "nonuniform temporal aggregation" (NUTA), that combines data from only informative time intervals, allowing both localized and clip-level data to be merged. Ref. [120] created a secondary, lightweight network on top of the main one and had them make predictions about each other's pseudo-labels. In order to acquire underlying longterm temporal reliance in an adaptable fashion, ref. [121] suggested a unique Temporal Relocation Module (TRM). In order to guarantee full and comprehensive activity detection by oversampling, ref. [122] offered the notion of overlapped spatiotemporal cubes, which provided the backbone of activity suggestions. The current state-of-the-art CNN-based approaches are summarized in Table 2. In comparison to CNNs, Recurrent Neural Networks (RNNs) are able to accurately model temporal data. Current approaches based on RNN often use LSTM to handle lengthy action sequences, because this architecture may circumvent the overall disappearing of gradient issue through using a gathering function to retrieve the effective cache size for a load pattern. Figures 8 and 9 show the basic block diagram of RNN and LSTM, respectively. RNN-based approaches, rather than transferring motion information to images, refer to joints, or the relationship between joints, as a data source. Differential RNNs with gating added to the LSTM were proposed by Veeriah et al. [138] to represent the variations of salient movements. A wide range of characteristics compiled from many frames were input into the proposed LSTM framework. An end-to-end hierarchical recurrent neural network (RNN) that combined features from five human limbs was proposed for behavior detection by Du et al. [139,140]. However, as pointed out in [141], this procedure neglected to take into account the connection between non-adjacent parts. Shahroudy et al. [142] built a part-aware LSTM using the human body structure. By linking together different types of part-based memory cells, the 3D skeletal series was used to teach the relationships between non-adjacent parts. Action recognition in RGB video was accomplished by Mahasseni et al. [143] by layering a regularized long shortterm memory (LSTM) network over a deep CNN. They proposed utilizing the 3D skeletal sequence from several acts to regularize the network, reasoning that thr additional data would make up for any gaps in the video's coverage. Zhu et al. [144] input a skeletal point into the multilayer LSTM model generalization for developing co-occurring properties during behavior identification. To analyze the many geometric relational properties of all joints, and to determine behavior, Zhang et al. [141] employed a stacked three-layer LSTM. After noticing the lack of precision while transitioning 3D skeletal joints into the individual position method, Zhang et al. [145] presented the viewing-adaptable RNN-LSTM structure as a means of dealing with viewpoint disparities. By using the global LSTM memory unit, Liu et al. [146] created a whole-specific condition sensible LSTM that intelligently focused on informative joints across frames. The attentional potential was further enhanced by using a repeating attention mechanism that enhanced identification accuracy by decreasing the overall noise of unrelated joints.
In contrast to prior RNN-based models, that only described the time domain of a skeleton, Liu et al. [26] presented a hierarchical layout-focused traversing strategy to manage the spatially adjacent map depicting the skeletal joint. Furthermore, a confidence gate was presented to filter out noise and occlusion in three-dimensional skeletal features. For behavior detection, Song et al. [147] recommended integrating joint-selection gates into the spatially focused structure and frame-selection gates into the temporal framework. Both the spatial embodiment of skeletons and their temporal dynamics were modeled using the two-stream RNN design presented by Wang et al. [148]. The extra spatial RNN modeled mutual spatial dependence by taking motion information into account. Si et al. [149] used a residue mapping-based connection for labeling individual body parts as nodes, thereby capturing the structural interaction between components at each frame. Next, a temporally stacked learning system, comprised of a 3 layer LSTM, was used to represent the time-series development of the joints.
Autoencoders
An autoencoder [150] is a type of neural network that is used to learn effective encodings. An autoencoder is programmed to recreate its own inputs, rather than training the network to predict any target value. As a result, the outcome vectors have the same dimensions as the input vectors. The autoencoder is improved by minimizing the replication error during the operation, and the learned function is the corresponding code. In most cases, a single layer is incapable of extracting the discriminative and representative characteristics of raw data. To achieve their goal, researchers now use a deep autoencoder, which passes the code learned in the previous autoencoder towards the next. Figure 10 shows the basic block diagram of an autoencoder. Hinton et al. [151] suggested the Deep Autoencoder (DAE), which has been extensively analyzed in recent papers [152][153][154]. A deep autoencoder is most often trained using a back-propagation variant, such as the conjugate gradient form. While this model is always quite accurate, it can become quite ineffective if errors are found in the first few layers. In light of this, the network learns to recreate the training data's average. Pre-training the network with initial weights that estimate the final solution is meant to address this issue efficiently [151]. There are also autoencoder variants, proposed to keep the expression as "constant" as possible when the input changes. Vincent introduced a denoising autoencoder model to boost the model's robustness [155,156], which retrieves the right input from a distorted version, requiring the model to obtain the structure of the source distribution. With several hidden layers, a deep autoencoder is an efficient unsupervised feature representation method. The neural notion of data learning is motivated by the fact that hidden layer parameters are not manually built [157], but rather learned automatically, based on the given data. This idea inspired researchers to use DAE to learn time axis features of video sequences. During transformation, the high-dimensional deep features are squeezed down to low dimensions with minimal error. Baccouche et al. [158] suggested an autoencoder method that automatically learnt sparse over-finished spatiotemporal characteristics.
Hinton et al. [159] suggested the Restricted Boltzmann Machine (RBM), in 1986, as a generative stochastic neural network. An RBM is a Boltzmann Machine version with the requirement that the exposed and hidden units form a bipartite graph. This constraint makes training algorithms more effective, especially the gradient-based contrastive divergence algorithm [160]. Hinton [161] offered a thorough clarification, as well as a realistic method, for training RBMs. Further analysis in [162] addressed the key challenges of training RBMs, and their underlying causes, and suggested a new algorithm to overcome the challenges, which comprised of an adaptive learning ratio and an improved gradient. The model estimated binary units with noise rectified linear units to conserve information about comparative intensities as information passed across multiple layers of feature detectors, as described in [163]. Not only did the refinement perform well in this structure, but it was also commonly employed in numerous CNN-based architectures [164,165]. Deep Belief Networks (DBNs), Deep Energy Models (DEMs) and Deep Boltzmann Machines (DBMs) can all be built using RBMs as learning modules.
The Restricted Boltzmann Machine (RBM) [166] is a probabilistic model, with visible and hidden variables, that uses energy as a basis for its predictions. There are visible and hidden layers, so this may be seen as an undirected, fully-connected graph. As a result of considering two successive layers as RBMs, the Deep Belief Network (DBN) is referred to when RBMs are stacked. Figures 11 and 12 Figure 11. DBN block diagram.
Hybrid Deep Learning Models
Hybrid deep learning models refer to combining two or more types of models as a means of boosting efficiency. Figure 13 depicts a sample hybrid CNN-LSTM deep learning model. For action recognition, some researchers suggested learning multi-modal features from separate networks. Three-dimensional convolutional neural networks (3DCNNs) [25,127] and bidirectional long short-term memory networks (LSTMs) were proposed by Zhang et al. [168] for acquiring spatiotemporal knowledge from multi-modal input. With the joint multimodal characteristics in hand, the linear SVM model was used for final motion identification. To train 3 distinct CNNs for activity detection, Kamel et al. [169] proposed dividing the sequential depth information and skeletal locations into two frames. For the purpose of activity recognition, ref. [170] created a hybrid technique by combining CNN and LSTM, the former of which was used for extracting spatial characteristics and the latter for retrieving temporal features.
For the purpose of recognizing gestures across several platforms, Wu et al. [171] created the Deeply Dynamic Neural Network (DDNN). The DDNN consists of a 3DCNN that extracts spatiotemporal features in depth and RGB pictures. To avoid combining the impacts of several convolutional networks, Wang et al. [172] presented an image stream to activity mapping to join characteristics of both depth and RGB streams as the feed to ConvNets. By Analyzing Joints in the Skeleton and Depth Sequences, ref. [173] examined a privileged knowledge-based RNN system for behavior detection. Liu et al. [174] suggested learning greater attributes from raw depth images and limited features from skeleton points, such as location and angle details. For action recognition, the two kinds of features were combined and fed into SVM. Current strategies for human action identification utilizing hybrid models are outlined in Table 3.
Attention-Based Methods
Attention models have emerged in recent years and have demonstrated promising results in a variety of difficult temporal inference tasks, including video caption recognition. After a certain job is done, it serves to increase interpretability by providing differentiable mapping from all of the output locations to the next input [188]. For the most part, a human action is comprised of several stages, such as preparation, climax, and completion. Feature learning includes unique sets of sequence frames to demonstrate various concepts in each step. As a consequence of this, while viewing a picture, one pays attention to various aspects of different areas. On the other hand, because the typical parameter learning treats each cue in the picture as equally important, it affects image recognition in a way that causes inconsistent task-specific attention recognition. Many applications use projected saliency maps to boost the degree of relevant cue correlation, thereby resulting in better recognition performance. Increasing focus is being drawn to the utilization of an attention mechanism that implicitly pays attention to related signals.
According to the study by Xu et al. [189], an attention-based architecture that learns to represent images, as well as high-quality results, in three benchmark datasets, was implemented in the image captioning application. To begin with, Bahdanau et al. [190] implemented the attention technique in computer translation and showed that performance on the problem of English-to-French translation met or exceeded the standards of the stateof-the-art phrase-based system. Some studies used RGB video to teach computer vision to recognize human actions. For example, Shikhar developed a machine learning algorithm that focused on portions of the frames and classified human actions after a few glances [182]. To further measure the uncertainty of the forecast, ref. [191] went beyond the deterministic transformer and created a probabilistic one by capturing the distribution of attention values.
The temporal attention model proposed by Z. Liu et al. studied human activities and could identify just the important frames [192]. Some studies on spatial-temporal attention examined the spatial-temporal attention design and suggested two different models: one to investigate spatial and temporal distinctiveness, and the other to explore the time complexity of feature learning [193]. These attention models were specifically designed to do action analysis in the image frames, and then mine relevant frames to find an action-related representation and combine the representation of such action-important frames to construct a powerful feature vector. Transformer is the most recent model using an attention mechanism that attracts researchers nowadays. Ref. [194] introduced GateHUB, which includes a unique position-guided gating cross-attention technique to emphasize, or downplay, portions of the past, depending on their usefulness for predicting the current frame.
Transformer: Natural Language Processing researchers originally developed the Transformer [195] and then showed its effectiveness in a variety of tasks [196][197][198]. Since then, it has been used in a variety of domains, from language [199,200] to vision [201]. The standard transformer is made up of many encoder and decoder components, as shown in Figure 14. Each encoder block has a self-attention layer, as well as a linear layer, and each decoder block incorporates an encoder-decoder attention layer, in addition to the other two. In recent research [202], the Point SpatioTemporal Transformer (PST2) was developed for transforming point cloud patterns. Action identification into three-dimensional point clouds may benefit from the adoption of Spatial Temporal Self Attention (STSA), which can record the spatial-temporal semantics. Ref. [203] developed various scalable model versions that factorize self-attention over the space, period, and modalities to deal with the high amount of spatiotemporal units collected from different modalities. Ref. [204] suggested processing videos using the online approach and caching "memory" for each iteration, rather than attempting to analyze more frames simultaneously, as is the case with most current systems. Ref. [205] proposed a paradigm in which many encoders would be used to represent the various angles of the video frame, using lateral connections between the encoders to fuse the data from the various angles. A transformer for video learning can be built by three approaches: self-attention, multi-head attention, and position encoding. Figure 14. An overview of the video transformer.
• Self-Attention: Self-attention is a fundamental transformer mechanism for both encoding and decoding blocks. For each video clip in the vision region, the self-attention layer takes a sequence of X (either a video clip or an entity token) and linearly converts the input into three distinct vectors: K (key), Q (query), or V (value). • Multi-Head Attention: A multi-head attention method [195] was presented to describe the complicated interactions of token entities from diverse perspectives. • Position Encoding: A limitation of self-attention is its inability to collect the sequence's order information, as is the case with CNNs [206] and RNNs [207]. Position encoding [195] can be used in the encoder and decoder blocks to overcome this issue.
VideoBERT [208] was the first to use a transformer-based pre-training technique to study video-language representation. It adopted a single stream architecture, adapting the BERT [199] architecture to the multi-modal domain. Video signals and linguistic words were combined and fed into multi-layer transformers, where the model was trained on the connection between text and video. VLM (Video-Language Model) [209] is a job-agnostic model with a BERT-like inter method transformer that can take text, video, or both as input. VLM provides two new masked task schemes: Masked Token Modeling (MTM) and Masked Modality Modeling (MMM). The VATT (Video-Audio-Text Transformer) structure, proposed by [210], is an end-to-end model for learning multi-modal abstractions using direct audio, video, and text. Specific to the video's temporal, height, and width dimensions, they divide the raw video into a series of [T/t] X [H/h] X [W/w] patches. For videolanguage learning, CBT. Ref. [211] suggested noise contrastive estimation (NCE) [212] as the loss goal, which maintains fine-grained video information comparable to vector quantization (VQ) and nonlinear activation loss in VideoBERT. ActBERT [213] used visual inputs, like global activity and regional objects at the local level, to help models learn video-text representations in conjunction. The Tangled Transformer block enhanced communication between diverse sources in a multi-stream paradigm. Univl [214] was the first to pre-train a model on both comprehension and generating proxy tasks. It used a multi-stream structure, with two individual transformer encoders, to incorporate video with text, an inter-modal transformer to interact completely with video and text embeddings, and a decoder for derivation processes. In the latest work, instead of computing self-attention generally, it even used spatial-temporal factorization, ref. [215] suggested a bias toward localization in visual transforms, which improved performance, while maintaining accuracy.
Activity Type-Based Human Action Recognition
Human activity is seen as a method of communicating, of interacting with machines, and of being engaged with the world. For this survey, we defined an activity as a specific body part or movement that is made up of numerous basic actions. These elementary actions are done sequentially throughout time. They may be done on one's own or with others. We include a scheme in this section, ranking tasks from basic to complicated, with varying levels of complexity. This hierarchy is shown in Figure 15. Figure 15. Activity type hierarchy. Images are from HOLLYWOOD 2 [216], UCF50 [217], MCAD [218], and NTU RGB+D [142] dataset.
Atomic Action
Atomic actions are simple motions at atomic levels, like lifting the hand or walking. They form the foundation for more sophisticated voluntary and purposeful movements. They are very identifiable, and have been discussed in many studies, such as [84,[219][220][221]. Hand movementsm such as gesturing, may be used to convey a variety of complex thoughts or directives. "Applauding" is an example of a gesture that may be done with intent. In contrast, "covering up the face using hands when becoming uncomfortable" and "drawing off the hand upon contacting a hot substance" are unintentional. A few gestures are global, but many are connected to personal circumstances and locations. We may cite [219,[222][223][224] in this field.
Behavior
These are physical movements and actions that people exhibit in response to particular emotional and psychological circumstances and which are perceivable to others on the outside. Examples of efforts to detect such human activities include proposals found in [219,225,226].
Interaction
These are the many forms of reciprocity in which changes occur to those participating in the contact, whether it be people or things. Human interactions, like "kissing", and human-object interactions, like "cooking", comprise the complex activities done as a whole. Recognizing interactions is a theme in papers like [222,227].
Group Activities
The activities performed by a number of individuals, such as "cuddling", are referred to as "group activities". These actions may be more or less complicated, and they can be difficult to monitor or identify at times. Refs. [219,222] provide methods for recognizing complex activities, which make it feasible to identify complex activities. Human activities, such as weddings and parties, take place in a certain context, as do high-level activities that reflect human interactions [217,228].
Popular Datasets and Approaches
Researchers may assess their performance and verify their ideas by using publicly available datasets. Ref. [229] asserts that data files which are characterized by actions may be sorted into several groups. This includes the following: data records on movie clips, social networks, people's ways of behaving, human postures, atomic activities, or everyday activities of daily life. Ref. [10] listed 13 data sets that might be utilized for training and testing, gathered with Kinect. We utilized popular datasets mentioned in scholarly articles and classified them by activity type: atomic action, behavior, interaction, and group activities.
NTU RGB+D
This action recognition dataset was developed by Nanyang Technological University in 2016 [142]. This extensive HAR video library contains over 50,000 video clips and 4 million frames. There are 60 separate acts in it, each carried out by 40 different people, encompassing health-related and social activities. The dataset was captured using 3 Microsoft Kinect v2 devices at the same time. Its uniqueness is explained by the large number of viewing angles (80) from which it was collected. You may get an expanded variant of this dataset now [231]. In the extension, there are 120 different acts made by 106 different people ( Figure 17).
MSR Action 3D
The MSR Action 3D was developed at Microsoft Research Redmond by Wanqing Li [63]. It holds a total of 567 sequences of depth maps of 10 people going through 20 different types of actions twice or three times. To record the sequences, a Kinect device was used ( Figure 18).
Behavior Dataset
Multi-Camera Action Dataset (MCAD) NUS-National University of Singapore created this dataset in 2016 [218]. Specifically, it was created to assess the open-view categorization issue in a monitoring context. A total of 20 individuals participated in the recording of 18 daily activities that were derived out of KTH, TRECIVD, and IXMAS datasets utilizing five cameras. Every activity was performed by an individual eight instances for each camera (four times throughout the day and four times in the nighttime) in order to capture it ( Figure 19).
Multi-Camera Human Action Video Dataset (MuHAVI)
Kingston University developed this in 2010 [232]. It focuses on human activity recognition techniques that use silhouettes. The videos used are of 14 performers doing their respective action scenes 14 times. This was done by using eight non-synchronized cameras placed on the platform's four sides and four corners ( Figure 21).
UCF50
The UCF50 was developed by the University of Central Florida's computer vision research institute in 2012 [217]. The theme for this project is that it is made up of 50 action classes, all taken from genuine YouTube videos. As an extension of the 11-category YouTube activity dataset (UCF11), this dataset features a wider variety of action-oriented videos ( Figure 22). There are three types of algorithms for categorizing human activity: unmodified video classification, activity classification with no filtering, and activity detection. As complicated human activities, the dataset includes many varied scenarios and movements ( Figure 23).
The Kinetics Human Action Video Dataset
The DeepMind team developed this in 2017 [234]. There were 400 human activity categories in the first version (Kinetics 400) and each one had a minimum of four hundred YouTube video snippets featuring a wide range of activities. An improved version from the earlier Kinetics 400 collection, called Kinetics 600 dataset, intended to capture around 600 human action classes. Each action class includes a minimum of 600 video clips for the Kinetics 600 dataset. The Collection is made up of about 500,000 short videos, each of which is around ten seconds long and is labeled with a single category. This dataset includes URLs for all kinds of human-related activities, such as cheering, thanking someone, etc. (Figure 24).
HMDB-51 Dataset
The HMDB dataset [235], which includes approximately 7000 clips hand-labeled and manually collected from diverse sources, like YouTube, Google videos, and Prelinger collection, was released by Serre Laboratory at Brown University in 2011. Human action recognition's 51-class dataset is broken down into five motion categories, which are defined as follows: human interaction, body movement, facial expression, object manipulation, and object interaction. Background noise and shaky camerawork are two of the most problematic aspects of using actual video footage ( Figure 25).
HOLLYWOOD Dataset
The dataset includes varied video clipsm and was first introduced by the INRIA Institute in France in 2008 [236]. Every sample is marked with one of eight activities: getting in or out of a vehicle, answering a phone call, handshaking, hugging, sitting, sitting up, standing up, and kissing. The dataset was sourced from 32 movies: 20 of the movies produced a test set, while the rest of the 12 movies provided the training sets.
HOLLYWOOD 2 Dataset
This was released in 2009 by INRIA as well, to expand the Hollywood dataset [216]. It includes 12 action types (similar to the Hollywood dataset but with four additional actions: driving a vehicle, getting in a car, eating, and fighting) and a total of 3669 video clips collected over 69 movies and approximately 20 h of footage ( Figure 26).
UCF-101 Action Recognition Dataset
In 2012, the UCF CRICV (Center for Research in Computer Vision) created this [237]. The UCF101 dataset serves as an expansion to the UCF50 dataset [217], which provides 50 action classes. A total of 13,320 videos from 101 real-world action classes were gathered from YouTube and combined into one dataset. This provides the greatest range of motion and different perspectives (point of view, lighting circumstances, etc.) Table 4 represents the details of popular datasets in the field of human action recognition.
Evaluation Metrics and Performance
Human activity recognition has adapted and utilized a number of performance indicators from other classification domains. Here, we cite commonly used measures, including precision, recall, F score, accuracy, and the confusion matrix, based on [238]. In the context of action recognition metrics, the terms "true positive", "false positive", "true negative", and "false negative" have the following meanings: • True Positive: Both the predicted and actual activity categories are the same. • False Positive: activities that do not match the searching category but are projected to belong to the sought category. • True Negative: activities in which the actual, as well as projected, activity do not conform to the searching class. • False Negative: activities that should go into a certain category but are, instead, expected to fall outside of that category.
The following is a list of the most commonly used performance metrics:
1.
Recall: Recall is also known as sensitivity, true positive rate, or likelihood of detection. It is based on real positive instances, expected to be positive in advance. Sensory assesses the percentage of activities that are projected to be in a certain class. In the same way, the system's inability to detect activities is determined by the system's sensitivity. Mathematically, we may write this as follows: where T p = True positive, T n = True negative, F p = False positive, F n = False negative.
2.
Precision: It defines the probability of an activity being observed to really occur. The likelihood that an observed activity would be wrongly identified by the recognizer is given a precision of 1. Mathematically, we may write this as follows: 3. F Score: Precision and recall are the two factors that define the harmonic mean. It tells us how accurate the test is. F measures both the accuracy and the robustness of a classifier at the same time. The value 1 is the greatest value, while 0 is the worst. Mathematically, we may write this as follows:
4.
Accuracy: This metric measures the proportion of accurate predictions against the total number of samples. As long as the classes are evenly sampled, the accuracy yields satisfactory outcomes. It is possible to represent this mathematically as follows:
5.
Confusion Matrix: Known as an "error matrix", this sums together the model's prediction outcomes and indicates the model's overall accuracy. An error graph is generated and shown in a confusion matrix for each kind of misclassified data. There is a row for each anticipated class and a column for each actual class in the matrix, or the other way around. Figure 27 shows the structure of a confusion matrix. The methods that achieved the highest accuracies on popular datasets are shown in Table 5. In this section, several problems that may impair the functioning of HAR systems are discussed. At different stages of the recognition process, several techniques may be employed. The systems in question are mostly linked to equipment for acquiring and processing data, as well as experimental and application spaces. An image-based recognition system's primary challenge is lighting fluctuation, which has an impact on the quality of pictures and, thus, on the information that is processed. Systems designed to function utilizing a single viewpoint acquisition device impose a similar restriction on how the perspective may be altered. The more information that can be collected, the more restricted and granular the visualization is for the actions being studied. The following encompasses a number of the various kinds of occlusion: self-occlusion, when body parts obscure each other, other-object occlusion and partial-body-part occlusion. These are key constraints to wearable augmented reality systems. The efforts of [10,84,221,222,248,249] addressed these issues. The diversity of gestures associated with complex human actions, and the existence of links between comparable kinds of actions, may introduce complications owing to data association issues. In order to create comprehensive, resilient, and adaptable HAR systems, it is essential to identify and correct any shortcomings. For example, [10,220,222,224,249], which described the limitations in hand configurations, preset activities, and detection of basic motions and actions. Certain techniques for the identification of body parts and items in scenes may mistake the person's body parts with materials in the scene, as shown in [222,224,249,250], or malfunction when people wear different clothes [10,84]. These issues are linked to other issues, including the amount of background noise [222], complicated or shifting backdrops, and unstructured scenery [10,84,221,[248][249][250], as well as changes in size [10,84]. Many researchers evaluate the performance of their ideas using their own recorded datasets. Benchmark datasets are limited to domain-specific applications, which presents issues. For instance, the everyday activities and fall recognition datasets utilized for training successful models are just too small.
Opportunities
A contemporary system of human action recognition (HAR) has many complications that have to be managed in order to fulfil the primary tasks for which they were designed. A HAR-based video surveillance system may be installed as long as it is continuously monitored and generates stable responses that appear on time. In [222,224,249] this problem was addressed. There is an even bigger problem, when trying to represent human-to-human and human-to-object interactions with precision, that is not as simple as it seems. This may be used in security and surveillance systems, and may be able to spot many odd situations.
At the same time, there are new social problems that have emerged as a result of increased implementation costs and adoption for surveillance, elderly support, and patient monitoring, combined with society's greater acceptance. One specific example, in [220,222], illustrates how difficult it is to integrate devices at home for monitoring, which is regarded as an invasion of privacy and intimacy. It is also important to investigate the HAR system's progress in mobile devices. In order to satisfy the user's privacy constraints, this method of recording data would require storing the information on the user's device, faster server-to-device communication, and shortened computing time. Battery life is a big problem, and on-device implementation is tough owing to memory constraints, identification model parameter space constraints, and power consumption [251]. The third difficulty is associated with the limits of the user's physiological and physical capabilities, since the user depends on these systems for movement and functioning. According to this logic, the usage of HAR technologies should not be dependent on the user's age, race, body size, or capability. It should be possible for both novice and expert user to get the most out of these tools. The problem of achieving large gains is acknowledged in [222,249]. The detection of continuous motion is made much more challenging due to the amount of data that is being streamed at any one time. With all due respect, it is no wonder that HAR systems have not yet been able to recognize and identify diverse motions in varied lighting situations and are not willing to develop with respect to the rate and quantity of gestures. The main issue addressed in numerous academic publications includes such titles as [10,84,222,224,249]. HAR systems are capable of context awareness. Therefore, further investigation is needed in this field. This may be beneficial in promoting the usage of previously suggested methods and the progress that has been achieved in various application areas.
Long-term videography may be particularly difficult to understand and identify when it comes to day-to-day activities. This is because everyday life activities are made up of many complicated actions, which is why long-term films include them. Even though these actions are diverse and varied, they are challenging to model. A further problem exists in that beginning and finishing times overlap for each activity. This issue is discussed by [248]. In addition, resolving the difficulty of discriminating between acts that are voluntary and involuntary is still an active topic to explore.
In addition to the above difficulties, additional general challenges related to human activities, such as missing portions of a video, recognizing multiple activities being done by the same person, and identifying and predicting actions, such as in congested settings, are addressed in [219,252]. The key underlying problems in deep learning-based HAR technologies include the presence of memory constraints, an abundance of parameters to be updated, complex multi-variant data collection and fusion, and the implementation of multiple deep learning-based architectures in smartphones and wearable devices.
Future Directions
Identifying human actions with the use of multi-modal data has several benefits, primarily because it often offers more detailed information. It can also be utilized to reduce the amount of noise present in data from single sources; thus, increasing the reliability of action identification. As a result, in order to improve the effectiveness of future studies of human actions, farther efficient incorporation of varied intelligence should be created, rather than the repetitive unification of characteristics from various origins. It is possible that, in the case of human interactions, the combination of characteristics of people, and correlations derived from different data sources, would result in a more reliable interpretation. Aside from that, relational content from the surrounding environment, which has been relatively understudied, has the capability to boost the efficiency of conventional feature representations for human action identification.
The ability to adjust our view for various camera positions is a beneficial feature, since it mostly enables us to move about and simplifies the process of calibrating sensors located in different places. The skeleton-based techniques are inherently resistant to being seen from various perspectives. Nevertheless, the computed skeleton data may not be correct when viewed from the side, which most likely results in a decrease in recognition performance. In this approach, the recent approaches to depth-based techniques are mostly focused on the generation of artificial multi-view data in order to supplement the training set. As a result, prospective studies might put greater emphasis on the development of feature descriptors that are perspective invariant.
The current understanding of activity recognition algorithms is incomplete, because depth-based datasets are often produced through certain contexts. One may find a significant discrepancy between the compiled datasets and the real world, because of the absence of categories, samples, occlusion occurrences, restrained activities, distance, and internal environmental variables. Due to this, algorithms are challenging to use in real-world circumstances. For this reason, it is important to gather large-scale datasets for training and testing to use in real situations.
Although many powerful deep learning techniques may beat hand-crafted techniques, the majority of them require the use of a pre-processing phase that involves color, depth, or skeleton data, before extracting hand-crafted representations. While handmade representations are primarily dedicated to representing feature dimensions, they also reduce the capacity of deep learning techniques to understand the results. It could be because of a lack of samples for training. Thus, if we provide enough training data, future models may come up with new types of deep learning architectures with the express purpose of directly learning representations from raw video data.
Human behavior analysis from a live video stream is sought after in practical applications, where the activity identification algorithms, which are applied to pre-segmented video sequences, must be very precise. This means that, since existing research studies tend to use trim data with a single category per segment, we do not yet know whether, or if, their findings are applicable to online instances. One important path to follow while creating identification methods is to identify recognition techniques that may be used in real situations.
The term "human behavior", which refers to the various factors involved in human activity, such as facial expressions, human behavior, and attention, is more complex than the terms "human action" or "human interaction". Automated behavior interpretation is an important component in the development of genuine intelligent systems, and it benefits a certain field that looks at human cognitive capabilities.
Conclusions
Many areas of computer vision, like human-computer interaction, robotics, monitoring, and security, require understanding and interpreting human actions efficiently. This article provides an overall look at the current advances in this research area. It sets forth many criteria, according to which it classifies items. This paper began by discussing the various HAR systems and the systems' primary goals. Following this, it provided a summary of the procedures that are currently considered "state-of-the-art", including the validation procedures that utilized to test those approaches. In addition to this, it classified human actions, as well as the methodologies used to represent particular action information. The various techniques might also be classified according to the kind of acquisition equipment, and an additional category based on the stages of recognition is provided (detection, tracking, and recognition). When looking at the results of this research, it was found that every methodology suffers from some constraints. This may be attributed to progress in the deep learning method and positive findings with respect to detection and identification performance. Alternatively, group activities and interactions are important study subjects, because they may offer relevant information in a wide variety of HAR domains, such as public security, camera monitoring, and the identification of aberrant behavior. An extension of HAR processes to smartphones is also being investigated, since cellphones have become more essential to our everyday lives and are already accepted by society, despite their lack of invasive properties. In order to create a good human action recognition system, we need to consider a number of characteristics, encode them into a distinct model, and ascertain that the outcomes of the modeling are correct. Finally, as a closing statement, we may state that, although many new methods with the goal of better understanding human activity have been created, they still face many difficulties that must be overcome. | 16,598.4 | 2023-02-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Using the Autler-Townes and ac Stark effects to optically tune the frequency of indistinguishable single-photons from an on-demand source
We describe how a coherent optical drive that is near-resonant with the upper rungs of a three-level ladder system, in conjunction with a short pulse excitation, can be used to provide a frequency-tunable source of on-demand single photons. Using an intuitive master equation model, we identify two distinct regimes of device operation: (i) for a resonant drive, the source operates using the Autler-Townes effect, and (ii) for an off-resonant drive, the source exploits the ac Stark effect. The former regime allows for a large frequency tuning range but coherence suffers from timing jitter effects, while the latter allows for high indistinguishability and efficiency, but with a restricted tuning bandwidth due to high required drive strengths and detunings. We show how both these negative effects can be mitigated by using an optical cavity to increase the collection rate of the desired photons. We apply our general theory to semiconductor quantum dots, which have proven to be excellent single-photon sources, and find that scattering of acoustic phonons leads to excitation-induced dephasing and increased population of the higher energy level which limits the bandwidth of frequency tuning achievable while retaining high indistinguishability. Despite this, for realistic cavity and quantum dot parameters, indistinguishabilities of over $90\%$ are achievable for energy shifts of up to hundreds of $\mu$eV, and near-unity indistinguishabilities for energy shifts up to tens of $\mu$eV. Additionally, we clarify the often-overlooked differences between an idealized Hong-Ou-Mandel two-photon interference experiment and its usual implementation with an unbalanced Mach-Zehnder interferometer, pointing out the subtle differences in the single-photon visibility associated with these different setups.
I. INTRODUCTION
The single-photon source (SPS) as a resource for quantum information technology has in recent years exhibited great progress in experimentally achieved efficiency and quantum state purity, pushing the technology towards practical near-term applications. Recent advances have enabled single photons to be generated on-demand with efficiencies exceeding 50% [1][2][3] and near-unity quantum indistinguishability [4] and purity [5,6], facilitating advances in boson sampling [7] and quantum key distribution [8][9][10], and even approaching minimum fidelities required for efficient linear optical quantum computation [11,12].
For on-demand SPSs, these advances have largely been achieved using semiconductor quantum dots (QDs), where the dipole-active transition of an electron-hole pair (exciton) across the band gap in conjunction with the three-dimensional confinement afforded by the QD geometry provides an excellent quantum two-level system which, when inverted by excitation, emits a single photon radiatively. Additional challenges to implementation *<EMAIL_ADDRESS>of SPSs which have seen recent progress are the desirable criteria of scalability [13][14][15][16], and frequency tuning [17], as QDs are typically grown such that energy levels are stochastic in nature, but many applications require many SPSs with degenerate frequencies. Effective methods for frequency tuning QD SPSs (with a large variance in attainable bandwidth between methods) include electrical tuning [18,19], strain tuning [20][21][22], quantum frequency conversion via optical nonlinearity [23], and multi-photon Raman transition processes utilizing multilevel systems [24][25][26]. This last all-optical tuning process typically involves two sequential laser pulses, and uses the biexciton (two exciton) state, which extends the twolevel structure of the QD to a cascade-type ladder system, and as such is applicable to any ladder system involving three or more energy levels, not just QDs.
In a similar manner, we have shown recently-and demonstrated experimentally using the QD biexcitonexciton cascade-how on-demand frequency-tunable single photons can be generated from such a ladder system with high efficiency, indistinguishability, and purity, by instead using a single pulse excitation under the presence of a cw laser dressing the exciton-biexciton transition [27]. Depending on whether the cw laser is resonant or detuned, this SPS then operates using either the Autler-Townes (AT) effect or ac Stark shift, respectively. Such an approach allows for the potential of all-optical frequency modulation of the emitter resonance [28], which has applications including creating high-dimensional entangled quantum states [29,30], and topological states [31]. This optical frequency tuning may potentially also improve the performance of entangled photon pair sources in QDs [32][33][34][35][36], where the small fine structure splitting of polarized excitons can degrade entanglement fidelity.
While the possibilities of frequency-tuning a SPS at the level of the coherent optical system dynamics are interesting, single photon emitters for practical quantum information technology applications have very stringent requirements on efficiency, single-photon indistinguishability, and purity. It is thus an important question from a theoretical perspective what role the cw dressing laser plays in these SPS figures of merit, and how the source can be designed to minimize these effects. The analysis required to answer such a question would supplement and extend previous theoretical work that has helped elucidate the limits of SPS figures of merit in undressed systems, including the role of the pulse, cavity, and electronphonon scattering [1,4,5,[37][38][39][40].
In this work, we address this question in detail by studying theoretically a four-level ladder system, which can be physically realized using the QD biexciton-exciton cascade, as we have done in Ref. [27]. We find that the primary effects of source figure-of-merit degradation come from undesirable spontaneous emission from the higher energy state, and, in the case of semiconductor QDs, electron-phonon scattering induced by the cw laser causing excitation-induced dephasing during the emission process and, usually, increased population of the higher energy (biexciton) state. However, we find that incorporating an optical cavity resonant with the lower energy (exciton) state can mitigate these effects by accelerating emission into the preferred cavity mode. The ac Stark regime offers better SPS efficiency and indistinguishability at the cost of much reduced bandwidth of achievable frequency-tuning. We note that in this work we refer to the states of the system as QD (bi)excitons, but all results are also presented for the case of no phonons, which is generally applicable to any quantum four-level system, and the principles apply equally to a three-level system, with modified population dynamics due to the reduced number of decay channels.
The layout of the rest of the paper is as follows: in Sec. II, we introduce our frequency-tunable SPS design and basic principles of operation, based on a four-(or three-) level quantum ladder system, of which the biexciton cascade in QDs is one physical realization. We present our quantum master equation (ME) model of the SPS, including for the case of QDs coupling to phonon reservoirs using the polaron master equation (PME) method.
Next, we define in Sec. III the figures of merit we use to quantify the SPS fidelity. In particular, we include a quantum optical derivation of the two-photon inter-ference visibility used to extract the single-photon indistinguishability of the source in a Mach-Zehnder (MZ) interferometer simulating a Hong-Ou-Mandel (HOM) interference experiment. The resulting expression is well known [41], and can be expressed in terms of the inteferometer properties, the single-photon purity, and the single-photon indistinguishability. However, most theoretical works on the subject to date assume an HOM inteferometer with two distinct SPSs; the MZ setup used in experiments only utilizes one physical SPS, which gives differing photon statistics. As a result, different expressions for the single-photon indistinguishability have existed in the literature-a discrepancy which becomes particularly important when the purity is non-ideal, as recent work has highlighted [42]. For the sake of comparing results directly to experiment, we expect this derivation will be of use in bridging the gap between theoretical and experimental works in the literature.
In Sec. IV, we discuss our main results for the operation of the SPS in both AT and ac Stark regimes, and show how for the case of the QD SPS the phonon bath influences the performance of the device in both regimes and places limits on achievable figures of merit. We also show how an optical cavity can be used to significantly improve device performance by reducing timing jitter and phonon-related decoherence by means of selectively increasing the desired dipole transition rate. We then discuss aspects of the initial pulse excitation, including the effect of cw dressing on the source purity. Finally, in Sec. V we conclude.
We also include five Appendices: in Appendix A, we present a full analytical solution for the efficiency and indistinguishability for the case of resonant laser dressing in the absence of phonon effects. In Appendix B, we show how a unitary transformation to the dressed state basis and secular approximation can be used to remove fastoscillating terms in the ME, which drastically improves computational efficiency for most numerical calculations. In Appendix C, we use a weak phonon coupling approximation, appropriate for the regimes studied in this work, to derive simple analytical expressions for the phonon interaction terms, and give an intuitive physical picture for the phonon processes. In Appendix D, we show how to extend the PME to include a time-dependent excitation pulse which we use to calculate the SPS purity. Lastly, we include in Appendix E a study of the cw error rate induced by the far off-resonant excitation of the system by the cw drive (otherwise neglected for most of our analysis), which can typically be mitigated by spectral filtering.
II. THEORETICAL MODEL OF A FREQUENCY-TUNABLE SPS
In this section, we present the main theoretical model we use to study the frequency-tunable SPS using the QD biexciton-exciton cascade, and describe its regimes of op-eration. In Sec. II A, we describe the four-level cascade model and present the ME of the system Hamiltonian under cw driving with radiative emission. In Sec.'s II B and II C we describe the AT and ac Stark regimes of operation, respectively, and in Sec. II D we describe how we model the electron-phonon interaction using the PME. Further detail and characterization of our SPS scheme, including emission spectra, can be found in Ref. [27].
A. Quantum ladder model
We model the quantum ladder cascade system for the practical physical realization of the semiconductor QD as a four-level system with ground |G , excitons |X and |Y (with orthogonal linear polarizations), and biexciton |B states, with the |B -|X transition dressed by a coherent drive with strength Ω cw . The coherent laser also weakly couples the |X -|G transition, which we assume to be far-detuned due to the biexciton binding energy.
The total Hamiltonian for this setup, neglecting for now phonon coupling and radiative emission, is (letting = 1 throughout) where ω i is the energy of the i th state and i ∈ {X, Y, B}, The undressed system (i.e., with Ω cw = 0) gives rise to Xpolarized fluorescence emission energies at ω B − ω X and ω X . In Fig. 1(a), we show a schematic of this undriven system in this bare state basis.
The system is driven at (near-)resonant cw frequency ω cw = ω B −ω X −δ with an X polarized laser, such that δ is the laser detuning from the biexciton-X-exciton transition. By moving into an interaction picture defined by where the biexciton binding energy is defined as E B = 2ω X − ω B , and performing the rotating wave approximation, we obtain the time-independent system Hamiltonian: Equation (2) contains a far-detuned drive coupling the |X -|G transition via the σ X x term, and as such can model weak cw excitation of the exciton from the ground state. We shall assume for proper device operation that E B |δ|, Ω cw in all cases, such that this coupling can generally be neglected. However, we shall use Eq. (2) for the simulations in Appendix. E where we model the cw error rate. Neglecting this term, we can move into a different rotating frame instead defined by that the system Hamiltonian can now be written as where This Hamiltonian has eigenenergies E ± = ±η/2, where η = Ω 2 cw + δ 2 , and corresponding eigenstates The frequency splittings apparent in E ± allow for frequency tuning of the source via radiative transitions between the dressed energy levels. Without yet considering phonon coupling, we can model spontaneous emission using a Lindblad ME for the reduced density operator of the system ρ: where we have included radiative decay from both excitons with rate γ X , and from the biexciton with total rate γ B (i.e., we assume throughout orthogonal polarization channels have equal decay rates): where
B. Autler-Townes regime
For the case of no detuning (or small detuning relative to the drive strength |δ| Ω cw ), and a drive strength which exceeds the decay rates of the system (i.e., Ω cw γ X ), the SPS operates in the AT regime, where the |± eigenstates of the Hamiltonian in Eq. (3) become symmetric and antisymmetric superpositions of |X and |B states, with an AT energy splitting of Ω cw . The emission spectrum from the |X -|G transition then consists of two peaks with energies ω X ± Ω cw /2. These peaks have nearly equal spectral weight (area), and as such the efficiency of a device operating in the AT regime is at most ∼ 1/2 if only one of the peaks is of interest. In Fig 1(b), we show a schematic of the four-level system QD model operating in this regime, and the associated energy splittings.
C. ac Stark regime
For larger detunings (|δ| Ω cw ), the |± dressed eigenstates of the Hamiltonian in Eq. (3) become unequal superpositions of |X and |B states, and as such, (d) Example schematic of a phonon-assisted excitation process; shown here is the process where the excitation from the ground |G to excited |X state, which is detuned by EB +δ, is assisted by the annihilation of a phonon in the phonon bath with energy ∼ EB + δ. Note for (b,c) we have not shown the |Y state which is involved in another decay channel as seen in (a).
a system initialized in the |X state will tend to emit photons preferentially from the eigenstate which contains a higher amplitude of the |X state; the emission spectrum will consist of a dominant peak from the transition from this dressed state to the ground state and a subdominant peak from the other dressed state transition to the ground state. As the detuning is increased even further relative to the drive strength, the subdominant peak becomes negligible, and the spectrum consists of a single Stark shifted peak. In this limit, one also requires the detuning and drive rates to greatly exceed the damping.
We then define a parameter equal to the undressed resonance ω X minus the frequency of the dominant peak in the spectrum, which quantifies the frequency shift achieved in the SPS: By construction, this quantity is positive (negative) when δ is positive (negative), corresponding to a red (blue) frequency shift. ∆ ac can then be used to give the frequency shift of the dominant peak both in ac Stark shift and AT regimes, although ∆ ac flips sign and thus changes discontinuously as δ → 0 as the dominant and subdominant peaks switch roles. Formally at δ = 0, ∆ ac is unde-fined, as both peaks with frequency shifts ±Ω cw /2 are equally prominent in the AT regime (without considering, e.g., electron-phonon coupling). We can also solve for the drive strength in terms of this frequency shift and laser detuning, In the ac Stark regime, where |δ|/Ω cw 1, Ω cw ≈ 2 √ ∆ ac δ, and one also satisfies δ/∆ ac 1. Thus, we shall use both criteria interchangeably to denote the ac Stark regime. Also in this limit, ∆ ac ≈ Ω 2 cw /(4δ), which is the usual ac Stark shift encountered in perturbation theory.
In Fig. 1(c), we show a schematic of the QD model operating in the ac Stark regime and the associated frequency shift of the |X state ∆ ac .
For the sake of this work, we do not explicitly define for what values of δ/∆ ac or δ/Ω cw the system enters either regime, but rather seek to understand the two regimes as limiting cases associated with these parameters.
D. Exciton-phonon coupling and the polaron master equation
It is well known that the coupling of excitons in semiconductor QDs to longitudinal acoustic (LA) phonon modes has important effects on their dynamics under optical driving, including excitation-induced dephasing, offresonant feeding effects, Rabi frequency renormalization, and non-Markovian real phonon transitions which lead to the formation of a broad phonon sideband [43][44][45][46][47][48][49][50][51][52]. Using a spherical QD wavefunction model, the phonon coupling can be characterized by a super-Ohmic spectral function where α is the phonon coupling constant and ω b is a cutoff frequency which scales inversely with the size of the QD [53]. The Hamiltonian that couples excitons with phonons takes the form of the independent Boson model, which is exactly diagonalizable [54,55]. Employing a unitary "polaron" transform to a frame in which this interaction is diagonalized thus allows one to construct a perturbative expansion in the optical drive strength; this approach, under the Born-Markov approximation, yields the PME [47].
Assuming the different transitions in the cascade have equal dipole moments [56], the result is that to incorporate phonon coupling, we add to the ME in Eq. (5) the following term where , and the timedependent complex phase term is defined through and B = e −φ(0)/2 . We have absorbed a coherent attenuation factor B from the PME into our definition of Ω cw for easy comparison with the no-phonon case, as well as a polaron shift in exciton resonance frequencies.
Except for when we model the cw error rate, we can neglect the σ X m term in X m , which is consistent with using the approximate Hamiltonian in Eq. (3). The operators X m (−τ ) = U (τ )X m U † (τ ) are calculated using U (τ ) = exp [−iH S τ ]. Note this unitary transform can be simplified analytically when using Eq. (3) as H S [47,57], and we do this in Appendix B in the dressed state frame.
It is worth noting that the X m terms in Eq. (10) lead to an overall scaling factor of ∼ Ω 2 cw in L PME ρ, which dominates for small effective drive η relative to ω b and k B T , although the full functional dependence of the phonon scattering on the drive strength will also depend on the interplay between the phonon function φ(τ ) and coherent dynamics induced by H S in the X m (−τ ) functions; this leads to rich and highly nonlinear features in the phonon decoherence rates as a function of Ω cw [57]. In Appendix C, we derive simplified expressions for the phonon coupling rates valid for strong driving and weak phonon coupling strengths.
Throughout our calculations, we shall use for our calculations two different sets of phonon parameters, denoted I and II, such that α I = 0.04 ps 2 , and ω b,I = 0.9 meV, while α II = 0.006 ps 2 , and ω b,II = 5.5 meV. For the most part (a notable exception being the calculation of the cw error rate), set I corresponds to a "stronger" phonon coupling strength, and is similar to what has been extracted from measurements with InAs/GaAs QDs [49][50][51][58][59][60], while set II gives a (in most cases) weaker phonon coupling strength, and is more similar to numbers consistent with experimental results in some waveguide structures [61], including our own [27]. For our study of the source purity, we also use a set III with intermediate values α III = 0.025 ps 2 and ω b,III = 2.5 meV. In all cases, we use a phonon bath temperature of T = 4 K.
One of the main consequences of phonon coupling is the formation of a broad phonon sideband, which arises from non-Markovian real phonon transitions concurrent with photon emission. Photons emitted into this sideband have poor indistinguishability [39,62], and as such this sideband is usually filtered out for HOM interference measurements, leaving only the zero-phonon-line (ZPL), which has much better coherence properties due to the fact that phonon dephasing of the ZPL for bulk phonons vanishes very rapidly at low temperatures [63] (and this is a higher-order process, not captured by our PME). The PME is in fact capable of capturing this non-Markovian effect by means of the exponential factor which arises upon transformation back to the lab frame from the polaron frame [39,54]. We can, for the sake of this work, approximate the filtering process that we assume to occur to remove this phonon sideband by simply neglecting this factor, and calculating all observable quantities directly in the polaron frame. In doing so, we miss an efficiency cut that arises from neglecting the sideband contribution to the emission. However, we can analytically approximate this contribution using the factor B , and we quantify this efficiency reduction in Sec. IV.
In addition to this phonon sideband in the emission spectrum, it is also important in this work to consider the phonon sideband in the absorption spectrumspecifically, the potential for phonon-assisted excitation of energy levels under detuned driving. An example of this process is shown in Fig. 1(d): in this example, we show that the far detuned excitation of the |X state by the cw laser (due to the σ X x term in the full Hamiltonian of Eq. (2)) can be assisted by the absorption of a phonon in the phonon bath with energy E B + δ-if the phonon spectral function J(ω) is appreciable over this frequency range, this process may become significant. In the case of Fig. 1(d), the process is suppressed at low temperatures due to the small thermal occupation of phonons in the bath (although it still plays a potentially significant role as we show in Sec. IV), but the corresponding process for a cw drive with frequency exceeding the energy transition of interest is highly significant even at low temperatures, as it involves phonon creation. The dynamics of the driven |B -|X transition are also subject to similar considerations, and Appendix C gives simplified analytical rates and a schematic picture of these phonon processes.
Finally, we note that the coherent (unitary) part of the phonon effects also leads to small frequency shifts in the emission spectrum; for the sake of this work we neglect these and focus on the nominal frequency splitting given by ∆ ac ; if desired, the analytical simplifications in Appendices B and C can be used to calculate these small shifts explicitly.
III. SPS FIGURES OF MERIT AND TWO-PHOTON INTERFERENCE EXPERIMENTS
In experiment, the two-photon interference (TPI) visibility of the source is typically measured by simulating an HOM interferometery setup using an unbalanced MZ interferometer, excited with two photon pulses separated in time by T 0 , and with overall repetition time of the laser T rep . In contrast to this setup, theoretical analyses of the single-photon indistinguishability (which is often conflated with-or defined to be equal to-the TPI visi-bility) often derive this parameter by assuming an HOM setup with two identical but distinct SPSs described by the same density operator [37,64,65]. While both approaches should lead to perfect TPI for perfectly indistinguishable single photons with unity purity (no multiphoton probability from a source excitation), the photon statistics of the two scenarios are different, leading to different normalizations. This can make direct comparison of experiment and theory difficult, particularly in the case of non-unity single-photon purity. In the work of Kiraz et al. [64], a HOM-type experiment was analyzed, but the authors omitted terms corresponding to the second order correlation function of the source field; as this correlation function separates into a product of photon flux expectation values at large delay times, this omission erroneously led to a definition of the TPI visibility which in fact corresponds to an MZ-type experiment-although only in the limit of an ideal interferometer and zero multiphoton emission probability per excitation.
Furthermore, some authors use the single-photon indistinguishability interchangeable with the "corrected" TPI visibility, after accounting for the finite multiphoton probability of the source, imbalance of the beam splitters, and deviation from perfect interference fringe contrast of the interferometer [66]. This is, however, potentially ambiguous, as the latter two effects are considerations arising from the experimental detection process, whereas the multiphoton probability of the source is a fundamental and physical limitation on the degree of TPI achievable with the source. Additionally, there exists an alternative definition of the TPI visibility which involves normalizing by a cross-polarized cross-coincidence histogram peak at zero delay, which gives a different value for the observed visibility for nonzero two-photon emission probability. The difference in photon statistics from HOM vs. MZ inteferometry experiments was correctly pointed out by Fischer et al. [67], although the metric they propose to quantify the TPI visibility differs from those typically used in experimental works.
To help clarify the matter, and for the sake of one-toone comparison of experiment and theory, we present in Sec. III A a quantum mechanical derivation based on field correlation functions of the TPI visibility for a real MZ interferometer (the expression for which is already wellknown from photon counting arguments [41,67,68]) in the spirit of previous studies of the corresponding quantity for an HOM interferometer [64,65,69]. We explicitly define the indistinguishability to a measure of the first-order degree of coherence of the source, and g (2) [0] to be a measure of the second-order degree of coherence of the source (purity, or lack of multiphoton emission events). We define the raw TPI visibility to be what is measured in experiment, and the corrected TPI visibility what would be measured in an idealized experiment with perfect fringe contrast and balanced beam splitters; this latter metric is the most important single parameter to characterize the fidelity of the SPS as it encompasses both first and second order coherence of the source.
In Sec. III B, we then relate these experimentally observable quantities to the theoretical figures of merit for our frequency-tunable SPS, by means of conventional quantum optics input-output theory [70]. In particular, we show how the dressed state basis of Eq. (4) can be used to derive figures of merit for each sidepeak of the spectrum separately.
A. Derivation of TPI visibility
A simplified schematic of the experimental procedure for extracting the TPI visibility of photons emitted sequentially from a SPS using a MZ interferometer is shown in Fig. 2(a). The SPS, excited every T rep with two pulse excitations separated in time by T 0 (assumed to be much greater than the relaxation time of the SPS, to ensure independent excitation events), emits photons into a decay channel mode. These photons then pass through two sequential beam splitters, where one of the transmission channels between the beam splitters is subject to a time delay T 0 . The outputs of the second beam splitter then propagate to photodetectors from which a crosscorrelation HOM coincidence signal can be constructed as a histogram of detection events; for perfect quantum single-photon interference, this signal vanishes at zero time delay [71]. We also show in Fig. 2(c) an experimental example of this cross-coincidence function taken from the data in Ref. [27]. Note that in this analysis we assume a purely pulsed SPS; any residual cw contribution which arises due to (for example) the small excitation of the ground-excited state transition from the far off-resonant cw laser is assumed to be filtered out of the emission spectrum, although it is possible to extend the analysis to also account for the cw background [42,72].
In experiment, the raw visibility of TPI is often defined as (sometimes without the factor of 2) [4,15,41] where A 0 denotes the area of the peak in the crosscorrelation coincidence histogram at τ = 0, and A ± denote the area of the neighbouring peaks. To theoretically calculate this value, we include here a quantum optics derivation of the cross-correlation signal that is detected by an unbalanced MZ interferometer setup with delay T 0 as show schematically in Fig. 2(a). We assume a signal mode with annihilation operator s, which we will relate to the system modes of the QD via standard input-output theory [73], as well as a vacuum mode with annihilation operator v. We assume that the timescale of decay for the system dynamics T lifetime ∼ γ −1 X is much smaller than T 0 .
Assuming for simplicity the two beam splitters to be identical and lossless, we can express the modes detected by the photodetectors as (13) and (14) where the reflectivity and transmissivity of the beam splitters satisfy |T | 2 + |R| 2 = 1 and RT * + R * T = 0, enforcing unitarity. The vacuum terms do not contribute to any normal-ordered expectation values and are dropped henceforth.
For convenience, we define quantities associated with the source excited only with a single pulse excitation. These constitute the main single-photon figures of merit for the total spectrum emitted from the source (i.e., including both peaks of the split spectrum in the AT regime). These are (i), the number of photons emitted by the source: which we shall sometimes refer to as the "efficiency" (or "brightness", although note there are other sources of end-to-end efficiency degradation not captured by this metric); (ii) the normalized Hanbury-Brown-Twiss (HBT) g (2) [0]-a measure related to the two-photon emission probability of the source, which can be measured by blocking one arm of the MZ interferometer and normalizing the t-integrated peak (time-averaged) in the cross-correlation signal around τ ∼ 0 to any other peak: (16) and (iii), the single-photon indistinguishability: which is a measure of the first order degree of coherence of the SPS. The cross-correlation signal from the 'a' and 'b' photodetectors is proportional to the probability that if one detector detects a photon at time t, the other will detect a photon at time t + τ , and is proportional to (assuming identical detectors) The vanishing of G MZ (t, τ ) around τ = 0 for times around t where one might expect classically a crosscorrelation photon detection event due to two wavepackets hitting the (second) beam splitter at the same time and travelling down different channels is, similar to the HOM setup, a hallmark of TPI and a signature of single-photon indistinguishability. However, the photon statistics of the MZ interferometer differ from that of an HOM interferometer, and the calculation of the TPI visibility is modified accordingly. Note that we could instead define G (2) MZ (t, τ ) using only one of the two crosscorrelation functions, which will, in the presence of an unbalanced beam splitter change the various peak weights of the time-integrated cross correlation function [41], but our metric of two-photon visibility is independent of this choice. Using Eq.'s (13), (14), and (18), we find: In the above, we have made note that since we have assumed that T 0 T lifetime , the dynamics of operators separated in time by T 0 are uncorrelated. Thus we have dropped terms where for any values of t, τ , using this property, we can decompose the correlation function as a product of multiple expectation values involving a phase oscillation (i.e., unequal numbers of annihilation and creation operators within a correlation function). These terms are small but nonzero for finite g (2) [0]-however, they are phase-sensitive and thus will average out to zero in an experimental setting where data is collected over a timescale longer than the coherence of the system [41,67].
In the TPI experiment, the MZ interferometer is fed two photons from the same source separated in time by T 0 , and this process is repeated every T rep . For the following derivation, we shall assume that T rep − 4T 0 T lifetime , such that each laser repetition is an independent event containing only two QD excitation events. However, as the figures of merit we derive only involve peaks at τ delays of zero and ∼ ±T 0 , the final results are valid for the weaker condition T rep − 3T 0 T lifetime . Under the assumption that T rep 4T 0 , we only need to consider from the perspective of a theoretical analysis the function G MZ (t, τ ) for a source excitation s(t) that is excited only at times t = 0 and t = T 0 (i.e., one laser pulse cycle). Thus, any correlation functions involving operators with time arguments that are not within ∼ T lifetime of 0 or T 0 vanish. Furthermore, G MZ (t, τ ) is nonzero around times t and delays τ of ∼ 0, ∼ T 0 , and ∼ 2T 0 only.
In the TPI experiment, photon detection events are integrated over time, and thus we will integrate G Schematics of (a) an unbalanced MZ simulating a HOM two-photon interferometry experiment with (c) sample cross-coincidence count data taken for an undressed QD in Ref. [27], and (b) an idealized HOM experiment with distinct but identical sources.
ation time of the system, the system density operator is prepared in the same excited state from both QD pulse excitations, and thus certain correlation functions will be equal around time arguments ∼ 0 and ∼ T 0 . Using this knowledge of the MZ cross-coincidence correlation function, we can simplify the time integration: The function ∞ 0 dtG (2) MZ (t, τ ) corresponds to the coincidence count histogram as shown schematically in Fig. 2(c) and has a characteristic 5-peak structure (experimentally, peaks at larger delays also occur due to the laser repetition rate). With this function in mind, we define the TPI visibility by normalizing the peak around τ = 0 to its neighbouring peaks: where the integration bounds of τ are chosen to capture the peak of interest alone (much larger than T lifetime to capture the entire peak, but not so large as to integrate over neighbouring peaks).
Using Eq.'s (20) and (21), we localize the integration bounds for t and τ to occur to around t, τ ∼ 0. The ex-pression for V raw can then be highly simplified by noting that any correlation functions containing operators s(t), s † (t) where t is not in the vicinity of ∼ 0 or ∼ T 0 vanish, recalling that operators separated in time by ∼ T 0 become uncorrelated, and finally noting that correlation functions evaluated around T 0 are equivalent to to those evaluated around 0.
As an example of how this works, consider the integration around τ ≈ 0: MZ (t, τ ). We have three terms coming from Eq. (20), all given by Eq. (19) at different time arguments. Considering just the third term G (2) (t + 2T 0 , τ ) as an example, this term only has one nonzero correlation function, physically corresponding to a multiphoton detection event from the output given from the second pulse excitation of the source, having travelled down the longer arm of the MZ interferometer, and is given by the second term in Eq. (19), where R = |R| 2 and T = |T | 2 .
Applying this procedure to all terms, we ultimately where N 0 is the number of laser pulse cycles over which data is collected, multiplied by an overall efficiency factor which contains, for example, both extraction and detector efficiencies and is assumed equal for both detectors. From this, we find the visibility, recovering known results [41,67], where is a correction factor related to imperfections associated with the interferometry setup; we have added a factor of the interferometer fringe contrast (1 − ) wherever first order degree of coherence correlation functions appear to account for optical surface imperfections reducing interference visibility. Clearly, in the limit of no multiphotons (g (2) [0] = 0), the visibility of TPI in a perfect MZ interferometer (1 − = 1) with balanced beam splitters (R = T = 1/2) and the single-photon indistinguishability are equivalent. We can find a corrected TPI visibility (which would occur in a perfect MZ interferometer with balanced beam splitters) using where the second line is appropriate in the usual high purity case that g (2) [0] 1. We can also solve for the single-photon indistinguishability: It is useful to contrast this result to that obtained from a HOM interferometry experiment using two distinct SPSs with identical expectation values, as shown in Fig. 2(b). In this case, we can consider two sources with bosonic operators s 1 (t) and s 2 (t) incident upon a single beam splitter, and compute the cross-correlation function of the detectors. Following a completely analogous derivation as in the MZ case, we can find the area of the peaks that appear around τ ∼ 0: which leads to Equation (29) is a direct generalization of a result initially found by Hong, Ou, and Mandel [71] to allow for nonzero g (2) [0]. In the case of an ideal inteferometer, Eq. (31) reduces to V (HOM) raw . This definition has appeared in some theoretical works [37,57,[74][75][76], sometimes referred to therein as the indistinguishability. Most notably, this definition leads to a TPI visibility of ∼ 1/2 in the limit of distinguishable single photons, in contrast to the MZ setup. It is worth noting however, that there also exist definitions of the visibility which differ from Eq. (12) by a factor of two, such that the visibility in the MZ setup also goes to ∼ 1/2 for distinguishable photons [41]. Such a definition is still slightly different than the HOM setup in the case of an imperfect interferometer or nonzero g (2) [0].
One should also be aware that a different convention for the visibility is sometimes encountered, where the raw visiblity is instead defined as 1 − A 0 /A 0,cross , where A 0,cross is the zero delay peak area obtained after rotating the polarization of one of the MZ arms prior to the second beam splitter. A 0,cross thus differs from A 0 by the indistinguishability term, which vanishes for cross-polarized photons. From Eq. (36), it can be seen that the resultant expression for the raw visibility in this case differs slightly from the definition used in this work for nonzero g (2) [0], and this is important to keep in mind when comparing visibilities calculated using different methods.
B. Input-output relations and SPS figures of merit
In the previous subsection, we derived the brightness N , HBT visibility g (2) [0], indistinguishability I, and TPI visibility V for photons emitted from a SPS in terms of the output channel operators s, s † . Using input-output theory, these can be related to QD exciton operators for the desired |X -|G transition in the Heisenberg picture as s(t) = √ γ X σ − X (t) (neglecting vacuum input noise terms which do not contribute to any expectation values) for the total emission spectrum, neglecting any extraction efficiency loss from photons emitted into undesired modes [70]. However, in the AT regime (and also in the ac Stark regime to some extent), there exist two spectral components emitted from the exciton transition due to the cw laser dressing inducing energy splittings. When the difference in frequency between these peaks is greater than their spectral widths, input-output theory can be applied to each of the transitions separately [70], and figures of merit can be expressed for these peaks separately [27]. This is done using the eigenstates of Eq. (4).
For example, consider the total emitted photon number (brightness): where ρ ± (t) = ±| ρ(t) |± , and ρ +− (t) = +| ρ(t) |− . In the last line, we dropped an integration over a coherence term, as in the limit of well-separated peaks (i.e., with center-frequencies separated by γ X ), the integrand is highly oscillatory and contributes negligibly; such is in the same spirit as the secular approximation made in Appendix B.
One can also define indistinguishability for the sidepeaks using the dressed operators: where g (1) , σ − ± = |G ±|, σ + ± = |± G|, and ρ ± = ±| ρ |± . While it is possible to define an HBT g (2) ± [0] for the sidepeaks, this quantity is likely strongly dependent on the filter width used to isolate the peak of interest (and thus can not be unambiguously defined without reference to the filter width), as the short excitation pulse leads to broad two-photon emission tails in the spectrum. As for a pulse shorter than the cw dressing timescale, the first emitted photon during the pulse is centered around the undressed ω X energy, we can expect most of this to be filtered out of the isolated sidepeak, and generally we expect the g ± [0] to be much smaller than the total g (2) [0]. In fact, this purity-enhancing effect has been predicted even with unshifted emission frequencies in the context of pulse excitation of QDs in cavities, which play the role of spectral filtering [37].
In summary, the figures of merit we use to quantify the SPS are the emitted photon number (or efficiency/brightness) N , the HBT g (2) [0], and the singlephoton indistinguishability I, given by Eq.'s (15), (16), and (17), respectively (all with s(t) = √ γ X σ X (t)). We also have the corresponding quantities defined for the sidepeaks of the dressed system, N ± and I ± , given by Eq.'s (32) and (33), respectively. In Appendix E, we define and study an additional figure-of-merit, the cw er-ror rate E cw , which quantifies the proportion of photons emitted due to the weak off-resonant excitation of the ground-exciton transition by the dressing laser. This effect has not been taken into account in the calculation of the figures of merit in the main text, as we assume this contribution can be removed from the emitted spectrum (and also is usually very small) by spectral filtering, in typical cases.
IV. SPS OPERATION AND FIGURES OF MERIT FOR AUTLER-TOWNES AND AC STARK REGIMES
In this section, we analyze the operation and figures of merit of our SPS source in both AT and ac Stark regimes. In Sec. IV A, we derive approximate formulae for the emitted photon number and indistinguishability in the ac Stark regime by adiabatic elimination of the |B state, assuming the source to be excited in the |X excited state by a short pulse at time t = 0. In Sec. IV B, we assume the same initial condition, and calculate the evolution of the reduced density operator using the full ME Eq. (5) with the Hamiltonian (and associated rotating frame) of Eq. (3).
Throughout this section, unless otherwise stated, we let γ X = 1.32 µeV as in Ref. [27]. For the case relevant to QDs where all radiative transitions have the same rate, such that the |B state has half the lifetime of the |X state, we have γ B = 2γ X , and the solution to the ME with initial condition |X is analytically calculable for the AT regime δ = 0 (neglecting any phonon effects). This situation corresponds closely to the biexciton cascade realization of our SPS source, where for example in Ref. [27], γ B /γ X = 1.92. The solution is given in full in Appendix A, but in the well-dressed limit Ω cw /γ X 1, the emitted photon number is N = 1/2, and the indistinguishability is a very poor I = 11/21, whereas for each sidepeak the emitted photon number is N ± = 1/4 and I ± = 2/3.
In Sec. IV C, we show how a cavity mode can be used to increase the spontaneous emission rate of the |X -|G transition, improving the SPS figures of merit at the cost of larger emission linewidths relative to the frequency shifts. We assume that the only significant effect of the cavity in the regimes studied is to change the ratio γ X /γ B , so the results of this section are also applicable to non-QD systems where the decay rates of each transition may be quite different.
In Sec. IV D, we discuss the efficiency loss that arises from the filtering of the phonon sideband, which can be captured using the polaron transform, and in Sec. IV E, we discuss the role of the pump pulse to initialize the system in the |X state at time t = 0, and what role a pulse with a nonzero duration plays in the HBT g (2) [0].
Well into the ac Stark and/or AT regime, we have a Hamiltonian which oscillates rapidly compared to the dissipation rates of the system (spontaneous emission and phonon decoherence rates). To simplify our numerical calculations by avoiding having to resolve these rapid oscillations, in Appendix B, we perform a secular approximation by moving into an interaction frame defined by the system Hamiltonian Eq. (3) and dropping these rapidly-oscillating terms. For all numerical calculations presented in this work, we have checked that the secular approximation gives the same results (i.e., visually indistinguishable on any plots) as the full ME presented in the main text as the driving rates are increased into regimes where the secular approximation is expected to asymptotically recover the full solution, thus ensuring the accuracy of our simulations.
Finally, we note that if the effective cw drive Rabi oscillation period ∼ η −1 is not much larger than the excitation pulse width, the QD will experience Rabi oscillations between the |B and |X state during the process of the pulse excitation, if in the AT regime, or become offresonant with the Stark shifted |X state during the pulse excitation, if in the ac Stark regime. Such a process will highly degrade the inversion efficiency of the pulse, and an initial condition of |X will no longer be applicable. Better inversion efficiency could perhaps be achieved using different excitation techniques, such as a non-π pulse, off-resonant phonon-assisted excitation [38,49,51,77], or adiabatic rapid passage [66,78]; however, we leave a full study of this to future work. For reference, assuming a Gaussian pulse with full width at half maximum in intensity of 2 ps, proper inversion is achieved for η/γ X 249.
A. Adiabatic elimination under far off-resonant driving
For |δ| Ω cw , the biexciton state remains largely unpopulated as it is driven far off resonance. In this regime, for the case of no phonon coupling, we can approximate the system as a two-level system under radiative decay via adiabatic elimination of the biexciton state. We consider the dynamics of the σ − B operator under the Heisenberg-Langevin equation (neglecting noise fluctuation terms): As the dynamics of σ − B are fast oscillating, we approximateσ − B ≈ 0, and solve for σ − B : where and the approximation in the second line is justified on the grounds that for the adiabatic elimination procedure to be valid the detuning should greatly exceed the linewidths. We note that in the ac Stark regime, A 1, and thus from Eq. (36) we can also find σ . Substituting this result into the ME and expanding to second order in A, we find that the dynamics for a QD initialized in the |X state can be described with a simple ME for an effective two-level system, where here we have used that to second order in A, ∆ ac ≈ δA 2 . Equation (37), when considering the radiative decay of the X exciton, is an ME for spontaneous emission with effective decay rate γ X + A 2 γ B /2 and pure dephasing with rate γ eff = A 2 γ B /2 ≈ γ B 2 ∆ac δ . Note that this result can be derived as well in a more rigorous manner by using the effective operator formalism, which utilizes a Feschbach projection and perturbation theory to separate slow and fast subspaces [79]. The single-photon indistinguishability can be easily calculated for this system: again with accuracy up to order A 2 , Also to order A 2 , we have N ≈ I. For γ B = 2γ X , we have I = 4δ 2 /(4δ 2 + Ω 2 cw ). We can express this in terms of the ac Stark shift ∆ ac , so that In the absence of sources of additional dephasing or decoherence, Eq. (39) gives an estimate of the highest achievable indistinguishability for a given ∆ ac in terms of the maximum detuning δ that can be introduced without exciting other unwanted energy levels, with Ω cw ≈ 2 √ δ∆ ac , or in terms of the maximum drive strength without introducing additional decoherence.
B. Numerical results for SPS efficiency and indistinguishability
In this subsection, we present the results of our numerical (and analytical) solutions of the ME for the singlephoton emitted photon number and indistinguishability, without yet considering any cavity coupling (for γ X = γ B /2). For all plots here and in subsequent subsections, unless otherwise stated, we show results without any phonon coupling as solid lines, phonon parameter set I as dashed lines, and phonon parameter set II as dashed- dotted lines. Red (lower energy) sidepeaks are shown in red, blue (higher energy) sidepeaks are shown in blue, and the total spectrum results are shown in black. To calculate the two-time correlation functions that appear in the definition of the indistinguishability and HBT g (2) [0], we use the quantum regression theorem [80]. In Fig. 3(a), we show the analytical solution of Appendix A, as well as the numerical solution with phonon coupling in the AT regime with δ = 0 for the SPS indistinguishability and emitted photon number. As the AT regime requires weaker drive strengths to achieve equivalent energy splittings as compared with the ac Stark regime, the influence of phonon scattering is quite weak here, only being perceptible for the phonon parameter set I. In Fig. 3(b), we show the numerical solutions to the ME for the indistinguishability and emitted phonon number for a device operating the ac Stark regime with a fixed frequency shift of ∆ ac = 5γ X , as well as the approximate solution derived under adiabatic elimination of the higher energy state |B in Sec. IV A. Note that we only present results here for the red (lower-energy) sidepeak, but well into the ac Stark regime as δ/∆ ac 1, the spectral weight of the other sidepeak rapidly goes to zero, and the red sidepeak becomes nearly equal to the total spectrum, as we show explicitly later.
As δ/∆ ac 1, the full numerical solution without phonon coupling asymptotically approaches the approximation solution under adiabatic elimination. For phonon parameter set I, the effect of phonon coupling initially increases with increasing detuning, before ultimately decreasing again at very high detunings.
To understand this observation, it is useful to refer to the approximate simplification of the PME which is presented in Appendices B and C as the weak phonon coupling ME under the secular approximation. While we use the full PME (with the secular approximation) for all numerical calculations, the expressions derived in Appendix C allow for insight into the physics of the phonon interaction and its effect on the source figures of merit. We show that for the phonon parameters studied in this work, the dominant effect of the phonon interaction is to induce transitions from the |+ to the |− state with rateΓ 0 [n ph (η, T ) + 1], which corresponds to a phonon creation process, as well as transitions from the |− to the |+ state with rateΓ 0 n ph (η, T ), which is a phonon absorption process, whereΓ 0 is given by Eq. (C3), and n ph (ω, T ) = [e ω/(k B T ) − 1] −1 is the thermal phonon occupation number. A schematic of these processes in the dressed state frame is shown in Fig. 11. For η k B T 1 , the phonon creation and absorption processes occur at similar rates, and the phonon coupling leads to an incoherent dephasing-like effect which scales with ∼ η 2 . At higher effective drive strengths η k B T , only phonon creation becomes probable, and the |+ to |− transition becomes driven with rate proportional to ∼ η 3 .
In light of this, the initial increase in the role of phonons can be understood as a consequence of the concurrent increase in the drive strength Ω cw ≈ 2 √ ∆ ac δ, which, for small η relative to ω b and k B T (for T = 4 K, k B T = 345 µeV = 178γ X ), leads to a roughly linear increase in the phonon decoherence rates as given by Eq. (10) as a function of δ (see Appendix C) when holding ∆ ac fixed. This manifests in an increased population of the higher lying biexciton state, which reduces the efficiency (as seen in the inset), and the indistinguishability via timing jitter. In fact, for phonon parameter set I, this increased decoherence is sufficient to outweigh the increased indistinguishability afforded by moving further into the ac Stark regime by increasing the detuning, leading to non-monotonic behavior of the indistinguishability as a function of δ/∆ ac , and an initial local maximum of the indistinguishability as the detuning is increased. In addition, this behavior allows us to deduce that excitation-induced-dephasing also reduces the coherence of the emitted photons, as over the range of detunings where the indistinguishability decreases, the emitted photon number continues to increase, suggesting that the reduction of coherence can not be entirely attributed to timing jitter.
Moving into even higher detuning regimes, we see the indistinguishability and efficiency increase again for phonon parameter set I, which is due to the phonon absorption process which takes states from |− to |+ becoming improbable as η k B T , as the number of phonons in the bath with the required energy becomes small. It is worth noting that this effect is only present for rather large detunings (of order ∼meV) and drive strengths (∼hundreds of µeV), where the π-pulse inversion is likely very inefficient. In Fig. 4, we show the formation of the local maximum for sufficiently large phonon coupling strengths by plotting the indistinguishability of the dominant red sidepeak as a function of detuning for a fixed frequency shift ∆ ac .
While holding the ac Stark shift ∆ ac fixed and vary-ing the detuning and drive strengths to achieve this shift (as we have done in Fig. 3(b) and Fig 4) is useful from a theoretical perspective to reveal the achievable figures of merit for a given Stark shift, as well as to show the transition from the AT regime at δ = 0 to the ac Stark regime as δ/∆ ac 1, it is less suited to what would be directly observed in an experiment, as the drive strength Ω cw must also be varied simultaneously with the detuning to achieve a constant Stark shift. Thus, in Fig. 5 we plot the indistinguishability and emitted photon numbers for a fixed detuning, and instead vary the drive strength Ω cw . Here, we can see that for a fixed detuning, the figures of merit vary inversely to the effective frequency shift; as we increase the drive strength, we move further away from the ac Stark regime with weak frequency shifts and good figures of merit (due to minimal excitation of higher energy states), towards the AT regime where δ/∆ ac is small, and the energy splittings are larger, but with increased decoherence mostly due to increased timing jitter.
In Fig. 6, we show more details on some aspects associated with the phonon bath interaction for a QD SPS. As visible in Fig. 3(b), for phonon parameter set I, there occurs a local maximum of the indistinguishability as a function of the detuning in the ac Stark regime, for a given Stark shift ∆ ac . This local maximum occurs for modest drive strengths and detunings, and as such is important to consider in light of practical experimental considerations. In Fig. 6(a), we plot the indistinguishability of the red sidepeak as a function of shift ∆ ac , where for each value of ∆ ac we sweep the detuning δ to find the value δ opt /∆ ac where this local maximum occurs, and plot this as well as the corresponding indistinguishability at this detuning value. We can clearly see here that for phonon parameter set I, the maximum achievable indistinguishablity drops off rapidly as the splitting is increased. By frequency shifts of ∆ ac ∼ 20γ X , the optimal indistinguishability occurs rather close to the AT regime, with small detunings δ/∆ ac and indistinguishabilities not much higher than the AT regime value of (without phonons) I − = 2/3. In Fig. 6(b,c), we compare the performance of the device for positive (red) and negative (blue) dominant peak frequency shifts by plotting the source figures of merit as a function of detuning δ/|∆ ac | for both positive and negative detunings. In the absence of phonon coupling (and, as with all of the calculations in this subsection, neglecting the far off resonant |X -|G drive term in the Hamiltonian), as expected, the device performance is perfectly symmetric with respect to the sign of the detuning.
Upon introducing phonon coupling, however, the device operation becomes highly asymmetric with respect to the detuning (and thus the sign of the dominant peak frequency shift ∆ ac ). In all cases, the figures of merit for the SPS are better for positive (red) frequency shifts ∆ ac in the ac Stark regime. This is because at low temperatures, there are few phonons present in the phonon bath (i.e., in the sense of a thermal distribution of bosons); for negative detunings, the laser frequency is larger than the transition frequency between the X exciton and biexciton states, and a resonant process can occur where the difference in energy between the laser and transition can be absorbed by the creation of a real phonon in the bath with this energy, leading to phonon-assisted absorption. The corresponding process, where for positive detunings the energy difference is made up for by annihilation of a phonon with energy near equal to the energy gap ( Fig. 1(d)), is more strongly suppressed due to the small number of phonons present in the equilibrium bath with this energy range at T = 4 K. Thus, for positive detunings (red energy shifts), the population of the undesired higher energy biexciton state is less, and as such, the efficiency and timing jitter is reduced.
In terms of the simplified model presented in Appendix C, the process associated with phonon emission becomes more dominant as η is increased, which leads to increased transitions from the |+ state to the |− state; for positive δ, |− is more |X like, whereas for negative detunings, it is more |B like. For this reason we mostly focus on positive detunings and frequency shifts throughout this work. As we show later in Appendix E, the usual case of positive biexciton binding energies can also lead one to favor positive detunings and frequency shifts.
Note we also see in Fig. 6(c), that the non-dominant peak weight (emitted photon number) goes very rapidly to zero as the detuning is increased, indicating that the device is operating in the ac Stark regime.
In Fig. 7, we plot the indistinguishability and emitted photon for the dominant red sidepeak as a function of detuning, as in Fig. 3(b), but for larger frequency shifts of ∆ ac = 20γ X and ∆ ac = 40γ X . Here the plots span a similar range of drive strengths Ω cw , showing how larger drive strengths are required to reach the ac Stark regime for larger frequency shifts, which thus increases phonon related decoherence.
C. Use of a cavity to improve device performance via the Purcell effect In our results thus far, we have assumed that the higher lying state |B has a spontaneous emission rate γ B which is twice that of the desired state |X such that γ B = 2γ X , which is close to the case in QDs where |B corresponds to the biexciton. However, it is the simultaneous dipole radiation of the higher energy state |B and the desired state |X which leads to timing jitter and reduced coherence of the spontaneously emitted (and frequency shifted) photons. Thus, it is intuitively reasonable that should we increase the radiation rate γ X relative to the decay rate γ B , we should expect to see improved figures of merit for both the indistinguishability (for reasons of timing jitter mentioned above) and emitted photon number (as emission into the X-polarized decay channel becomes accelerated relative to the Y -polarized one).
One way to increase the effective radiation rate γ X relative to γ B is to use the Purcell effect afforded by a cavity with an enhanced density of optical states (near)resonant with the |X -|G transition. To be concrete, we can consider a single-mode cavity with bosonic operators [a, a † ] = 1, QD-cavity coupling rate g, which couples to the |X exciton with a Hamiltonian term and has photon decay rate (spectral full width at half maximum) κ, which should satisfy κ E B to ensure the biexciton transition is not also broadened. Then, in the bad cavity (weak coupling) limit that g/κ 1, the cavity mode can be adiabatically eliminated, and the result is that the spontaneous emission rate γ X is increased by a factor γ X → (1 + F P )γ X , where and γ 0 is the bare γ X before any cavity enhancement.
In principle, to determine the quantitative influence of incorporating a cavity mode on the SPS figures of merit, the Hamiltonian H cav in Eq. (40) should be included in the system Hamiltonian H S , and the PME should be modified to reflect this change (as in, e.g., Ref.'s [37,38,48,56]). Output observables of the system should then be calculated in terms of the cavity operators a and a † , as input-output theory tells one that the scattered fields of the reservoir in the Heisenberg picture (which are ultimately detected) differ from their input by √ κa(t) [70,81]. This approach leads to a correct description of some of the subtleties involved with cavity coupling, including filtering of the output spectrum which occurs due to finite cavity width κ, the emitted photon numbers in each respective channel (i.e., the cavity mode collects a factor F P greater photons than background emission channels, in the weak-coupling limit), and cavity-induced dephasing.
For simplicity, however, we shall for the results in this section assume that the influence of the cavity is solely to increase the exciton decay rate γ X . This approach has, for one, the advantage of generality, as it can apply to any (e.g., atomic) system with decay rates which do not satisfy γ B = 2γ X , with or without any cavity coupling. Furthermore, in the weak coupling limit g κ, which is the ideal regime of operation for SPSs [37], the phonon decoherence rates associated with the cavity-QD interaction become insignificant [37,39], and the dominant effect of the cavity on the QD dynamics is enhanced spontaneous emission. Thus, we simply use a variable γ X in our simulations with the model of Sec. II, with the understanding that Eq. (41) can be used to get an estimate of the cavity parameters required to observe the corresponding figures of merit as a function of γ X . Neglecting the background spontaneous emission channels, we can thus let F P ≈ γ X /γ 0 , where γ 0 = 1.32 µeV, and F P is given by Eq. (41).
In Fig. 8, we plot the figures of merit of the SPS operating in the AT regime with δ = 0 as a function of the variable decay rate γ X . We see in the case of QD SPSs that, provided the enhancement is given by a cavity with a sufficiently broad linewidth to capture the frequency splittings and operate in the weak-coupling regime, that the source can emit with near 1/2 efficiency (the best case scenario in the AT regime) and high (> 90 − 95%) indistinguishability for AT splittings on the order of hundreds of µeV (but with highly broadened linewidths). In Fig. 9, we explore the ac Stark regime of operation with a variable decay rate γ X /γ 0 . Here, we see that as the decay rate is increased, the emitted photon number increases as losses due to Y -polarized emission channels are decreased, up to a point at which the emitted photon number begins to decrease again; this decline can be attributed to the laser detuning becoming smaller relative to the effective decay rate, which moves the operation regime towards the AT regime, and thus slightly increases the population of the non-dominant peak at the cost of the dominant peak. Nonetheless, for large emission rates γ X /γ 0 , the emission exceeds the phonon scattering rates and the figures of merit remain quite high; for the case of QDs, we see that near-unity indistinguishability (> 97% for both phonon cases a ∆ ac = 25γ 0 ) for frequency shifts on the order of tens of µeV.
Interestingly, in Fig. 9(d), the emitted photon number N − actually becomes larger with phonon coupling. This can be understood in the context of the weak phonon coupling approximation of Appendix C. In particular, the ratio of the rate that takes states from |+ to |− to the rate that takes states from |− to |+ is simply exp [η/k B T ]-a direct consequence of the thermal occupation distribution of phonons in the bath. Thus, for η k B T (in Fig. 9(d), and positive detunings δ > 0, we expect phonons to increase the proportion of photons emitted into the dominant peak. Indeed, for Fig. 9(d), we have η/k B T ≈ 5. In contrast, Fig. 9(c) has η/k B T ≈ 1, and this effect is not seen. The fact that the indistinguishability in Fig. 9(b) is not also improved relative to the no-phonon case is indicative that the excitationinduced dephasing of the |X -|G transition dominates over the reduced timing jitter afforded by this phononinduced transition. We stress that the numbers presented here can be achieved with cavity and QD parameters which have been demonstrated in the literature with semiconductor microcavities [3,4,[82][83][84][85][86][87][88][89][90][91]; the range of Purcell factors shown here (less than 40; cf. a value achieved with a photonic crystal cavity of 43 [87]) can be achieved with (e.g., using dielectric micropillar resonators [3,4,[82][83][84]92]) a linewidth κ of a few hundred µeV (corresponding to Q factors ∼ 10 3 −10 4 ), and a coupling g on the order of (at most) tens of µeV, and the phonon parameter sets we use reflect measured values as discussed in Sec. II.
D. Efficiency loss due to phonon sideband coupling
As mentioned in Sec. II, the presence of the non-Markovian broad phonon sideband due to scattering with LA phonons leads to much lower photon indistinguishability if it is not filtered out of the detected spectrum (retaining only the ZPL-in this case the frequency shifted peak(s)). Since we are assuming in this work that the sideband is removed by frequency filtering after emission, we would like to quantify this efficiency cut in terms of the phonon coupling parameter sets I and II we use for the presented simulations.
Without any cavity coupling (i.e., assuming a postemission filtered ZPL), the fraction of total photons emitted that remain unfiltered is η eff = B 2 , whereas with efficient cavity filtering, it is η eff,cav = B 2 F P /(1 + B 2 F P ) [39]. In Table I, we show these filter efficiencies for phonon parameter sets I, II, and III at T = 4 K. We can also find low-temperature analytical expressions for these efficiencies by noting that whereT = πk B T /ω b , and in the second line we have, for the term containing n ph (ω, T ), expanded the exponential cutoff as a power series and evaluated the resulting Bose-Einstein integrals 2 ; at T = 4 K, retaining only terms up to orderT 4 is an excellent approximation for ω b 1.5 meV, and qualitatively accurate for ω b,I = 0.9 meV as well.
Note that the efficiency cut has not already been taken into account in our simulations for emitted photon numbers N (±) , and the efficiency given here must also ultimately be considered to yield the total SPS efficiency (in addition to other experimental considerations, e.g., output fiber coupling efficiencies, etc.) E. Single photon purity g (2) [0] associated with pulse excitation In our simulations presented in Sec.'s IV B and IV C, we have simply assumed the SPS to be initialized in the |X state, and we have not explicitly modelled the pulse excitation. As we have also neglected the σ X x term in H S that can give rise to (far off-resonant) excitation of the |X -|G transition, we have not included in our model any mechanism for more than one photon to be emitted from the transition of interest to our SPS. Formally, then, all of the above simulations are for g (2) [0] = 0.
To improve on this approximation, we include the pulse excitation directly in the system Hamiltonian, and use a time-dependent PME which incorporates the pulseinduced phonon scattering [37,38,57], described in Appendix D. However, the re-excitation probability (and α (ps 2 ) ω b (meV) B η eff (%) η eff,cav (%) phonon parameter set I 0.04 0.9 0. corresponding two-photon emission probability) is not strongly affected by the dressing field so long as η −1 is much larger than the pulse width in time, as the pulse occurs much quicker than the period of Rabi oscillations between the |X and |B states. Thus, the statistics of two-photon emission are very closely related to results known for the simple two-level system pulse excitation [5,27,37,93]. The primary modification to the g (2) [0] comes from the fact that in a two-photon emission event, the first photon will, for η −1 much larger than the pulse duration, be emitted in the desired |X -|G transition. The sequential photon, however, is emitted with probability N (i.e., the efficiency). Specifically, consider a Gaussian pulse with full width at half maximum in intensity τ p , and area in amplitude π (i.e., after the coherent attenuation factor from phonons B is applied, as with the rest of this paper). Then, for a short pulse that satisfies ητ p 1 and γ X τ p 1, the results of Fischer et al. [93] give g (2) [0] ≈ η G γ X τ p /N 2 for the case of no cw dressing laser, 4376 is a factor associated with the Gaussian pulse. For the reasons outlined previously, with cw dressing, this value is reduced by a factor of N such that Next, using Eq. (25), we find where we neglect small terms of order (γ X τ p ) 2 . Note if we multiply the second term on the right hand side of Eq. (44) by a factor of 1/N , this equation also applies to the usual scheme of QD SPSs with resonant pulse excitation and no dressing, and is thus broadly useful in characterizing the relationship between indistinguishability and TPI visibility as measured in an MZ interferometry experiment. It is important to keep in mind, however, that the indistinguishability (first-order coherence) is also degraded due to pulse excitation in a manner which scales similarly to the g (2) [0] behavior [37,76]. In Fig. 10, we study the full g (2) [0] of the source, using the PME model of the excitation pulse described in Appendix D, as well as the approximate solution in Eq. (43). For ps-duration pulses, the approximate formula is very accurate, agreeing almost exactly with the full calculation (without phonons) for τ p ≈ 2 ps, and de- HBT g (2) [0] as a function of pulse width for AT (green lines) and ac Stark (blue lines) regimes, without phonons (solid lines) and with phonons using parameter set III (dashed lines). Also plotted as stars is the semi-analytical approximate solution given in Eq. (43) with (red) and without (blue) phonons.
viating only slightly in the case of phonons. With phonon coupling, the g (2) [0] is uniformly larger than without, which is likely due to the reduction in inversion efficiency from pulsed-excitation-induced dephasing [37]. For long pulses in the ac Stark regime, the approximate solution overestimates the g (2) [0]. This may be because in this scenario ητ p 1 is no longer satisfied even for a short pulse, and the Rabi oscillations between the |X and |B states lead to a small reduction in the re-excitation probability and thus g (2) [0].
As such, we can conclude that to maximize the purity of this source and minimize g (2) [0], very short pulse excitation should be used to minimize re-excitation probability (but not so short as to excite unwanted energy levels). Additionally, the use of a high-Q factor cavity, which already is beneficial to the SPS figures of merit for reasons of collection efficiency and minimization of timing jitter (see discussion in Sec IV C), also suppresses g 2 [0] strongly by means of a dynamical decoupling effect leading to a pulse-induced time-dependent Purcell factor, as found in Ref. [37]; this effect is not captured by our simple model which involves merely changing the relative decay rates.
There is, in addition to the contribution to g (2) [0] from the pulse excitation, a potential contribution which arises from the cw drive directly via the σ X x term in Eq. (1). Generally we assume these photons can be filtered out of the collected spectrum, however one can expect this contribution to the g (2) [0] to remain small relative to the pulse so long as E cw (discussed in Appendix E) is sufficiently small, and Ref.'s [42,72] contain information on how to incorporate a cw contribution to the g (2) [0] into the analysis if desired.
V. CONCLUSIONS
In conclusion, we have theoretically analyzed the important factors that affect the figures of merit of our experimentally realized SPS [27], which allows for optical frequency tuning of emitted single photons, using a polaron transform ME method to incorporate rigorously the effects of electron-phonon scattering. Our detailed study is also relevant to wide range of semiconductor QD platforms.
In the AT regime, we have shown that (without any cavity coupling and equal dipole transition rates) large frequency shifts can be achieved, but at the cost of poor indistinguishability (∼66%) and efficiency (∼25% at best). In the ac Stark regime, with a detuned laser drive, we have shown that much higher indistinguishabilities can be achieved (>90% for frequency shifts of tens of µeV), but with the need for much higher cw drive strengths leading to increased phonon-related decoherence, including increased population of the higher energy state and thus increased timing jitter and reduced efficiency, as well as excitation-induced dephasing of the photon emission channel. This phonon-related degradation of the source figures of merit increases rapidly as the frequency shift ∆ ac is increased. For large enough phonon coupling rates, phonon scattering also leads to, for a fixed frequency shift ∆ ac , a local maximum in the indistinguishability as a function of detuning, whereas without phonon coupling the indistinguishability continues to increase as the detuning increases further according to the approximate relation derived by adiabatic elimination of the higher energy state I ≈ N ≈ δ/(δ + ∆ ac ). In this case, the maximum achievable indistinguishability is set by the presence of other energy levels neglected in this analysis, other sources of excitation-induced decoherence, or other experimental limitations.
In addition, we have shown how the low-temperature asymmetry associated with the phonon bath due to low phonon occupation probabilities leads to preferential operation of the SPS in the ac Stark regime with positive (red detuned) detunings (and thus red Stark shifts of the emitted photons), as this leads to reduced phononinduced excitation of the higher energy state and thus reduced timing jitter and efficiency loss.
We have also elucidated how cavity coupling (or more generally, application of the model to systems with different transition rates between levels) can be used to improve the source figures of merit via the Purcell effect, at the cost of broader emission linewidths relative to the frequency shift. In this case, using the AT regime to generate indistinguishable photons becomes more practical, with >90% indistinguishability achievable with N ≈ 1/2 for frequency shifts up to hundreds of µeV for realistic QD and cavity parameters which have been widely achieved in the literature for dielectric resonators. The presence of selectively enhanced spontaneous emission rates also was shown to benefit the ac Stark regime, enabling near unity indistinguishabilities with high efficiency for Stark shifts up to tens of µeV.
Finally, we analyzed the multiphoton statistics of the source, including the g (2) [0] parameter, which was shown to follow closely related trends as previous studies on simple undressed two-level system models have predicted [37,93].
We reiterate that while the results in this paper have been presented for the specific realization of the source with the semiconductor QD biexciton cascade four-level system model (which necessitates an analysis of LA phonon coupling), the principles of operation only require a quantum 3 (or more) level ladder system, and many of the results of our analysis for the case of no phonons should apply qualitatively to these systems as well.
Furthermore, we expect that the specific implementation of the frequency tuning mechanism we have illustrated in this work could be modified, expanded upon, or optimized further by the engineering or implementation of different energy levels or radiative decay rates. For example, the QD biexciton cascade also involves a Ypolarized exciton, which could be used to create Stark shifted photons of the X-cascade by instead dressing the |Y -|G transition with an orthogonally polarized cw laser. This would remove the issue of the cw background by employing polarization filtering. In this case, the indistinguishability follows similar trends as in the case of biexciton-exciton dressing. For the sake of this work, however, we have restricted our detailed analysis to a cascade type dressing involving the biexciton state, as this is what we have reported in experiment.
Overall, our results indicate that the use of the AT and ac Stark effects to produce optical frequency shifts in a quantum ladder system can be effective in generating indistinguishable single photons with high efficiency. While the achievable figures of merit are ultimately limited by phonon effects in the case of the QD cascade system studied here, frequency shifts of up to tens-hundreds of µeV are achievable with realistic cavity and QD parameters which have regularly appeared in the literature, while maintaining high indistinguishabilities, efficiencies, and purities, and this analysis is consistent with experimental results we have reported in Ref. [27]. ACKNOWLEDGMENTS L.D. and Stephen H. acknowledge financial support from the Alexander von Humboldt Foundation. We are also grateful for the support by the State of Bavaria, the Natural Sciences and Engineering Research Council of Canada, and the Canadian Foundation for Innovation. We would like to thank Christian Schneider for useful discussions and work in fabrication of the sample used to obtain the data in Fig. 2(c). For γ B = 2γ X , and δ = 0 (AT regime), neglecting phonons, the solution to the ME of Eq. (5) with the Hamiltonian (and rotating frame) of Eq. (3) can be expressed analytically. Considering the system to be in the |X state at t = 0, the single-photon indistinguishability is, up to an integral, where and with and where cos φ = Ω/Ω cw , sin φ = γ X /(2Ω cw ), and Ω = . For the density matrix elements, we obtain and where a ± (t) = with Ω = Ω 2 cw − γ 2 X 16 . In the dressed state basis (using Eq. (4)), we have simply ρ + (t) = ρ − (t) = e −γ X t /2, and so N ± = 1/4. The first-order two time correlation function is (A10) Notably, in the well-dressed limit (γ X /Ω cw → 0), we have N = 1/2 (as the dressed system has equal decay rates to both polarization channels), and I = 11/21.
We also consider the indistinguishability of a photon emitted from one of the sidepeaks, I ± . By symmetry, for δ = 0 both sidepeaks give the same indistinguishability such that I + = I − . The result for this I ± is (assuming Ω is real, as these observables are only well-defined for dressing exceeding the damping) where with In this case, the indistinguishability I ± tends to 2/3 in the well-dressed limit, which is the same as what one would find for an undressed system initialized in the |B state.
Appendix B: Interaction frame secular approximation In this appendix, we discuss the secular approximation which can be made when the emission spectrum peaks are well-separated, which removes the fast oscillations from the equations of motion and makes the numerical solution of the ME vastly more efficient. This approximation also produces an ME in Lindblad form.
We first consider the model given by the system Hamiltonian H S of Eq. (3) and the ME of Eq. (5) with the additional phonon term of (10). We then move into an interaction frame defined byρ(t) = U † (t)ρ(t)U (t), where for our four-level system model, The ME in this interaction frame then takes the forṁ where L(t)ρ(t) = U † (t)Lρ(t)U (t), in terms of the dressed state basis of Eq. (4) with dressed energies E ± = ±η/2. Next, we note that applying the unitary transformation of Eq. (B1) to the radiative and phonon terms L rad ρ and L PME ρ will yield a sum over time-independent terms, as well as time-dependent terms which oscillate at frequencies given by E ± and E + − E − . If η is much larger than the characteristic rates atρ(t) evolves, then these time-dependent terms can be dropped, making a secular approximation (or post-trace rotating wave approximation), as they average out to give a negligible contribution to the interaction frame density operator evolution. Intuitively, we expect this situation to occur when the peaks of the system are well-separated by much more than a linewidth or any of the phonon rates. The characteristic rates at whichρ(t) evolves are given by the coefficients of Eq. (B2) which we give below.
Making the secular approximation, we thus drop all of these rotating terms such that Eq. (B2) now has no explicit time dependence (i.e., except that coming from ρ(t)). With some work, it can be shown that the ME in the interaction frame with the secular approximation can then be written in Lindblad form aṡ where the radiative contribution is where we have let σ +− = |+ −| to simplify the notation, and the non-unitary phonon contribution is We also have a unitary part of the phonon interaction, which is given by the Hamiltonian The complex phonon scattering rates are given bỹ From Eq.'s (B5) and (B6), we see that the influence of the exciton-phonon interaction can be seen in the dressed state basis to take the form of a pure dephasing-type term with rate Re{Γ 0 x }, and phonon-driven transitions from |+ (|− ) to |− (|+ ) with rate Re{Γ + m } (Re{Γ − m }). Additionally, the Hamiltonian term H PME gives a renormalization of the dressed state energies, which in the bare-state basis is equivalent to a small shift in the |X and |B state energies, as well as the drive term between them; our simulations (not shown) performed without this Hamiltonian term look very similar to the full calculations, indicating that L S PMEρ (t) has the dominant influence on the SPS figures of merit.
It is easy to see that in the dressed-state frame, the observable figures of merit for the ± peaks are calculated the same as in the bare state frame, but with ρ →ρ, and for the total emitted photon number, N = N + + N − .
Appendix C: Weak phonon coupling PME For common values of the phonon parameters α, ω b , and the temperature T , including the parameter sets we study in this work, the phonon coupling function satisfies |φ(τ )| 1. In this case, we can to a good approximation expand the phonon functions G m (τ ) that appear in to leading order in φ, to find G Under this weak phonon coupling approximation, we can simplify the PME in the dressed state basis under the secular approximation of Appendix B. The Hamiltonian part of the PME then becomes (using primes to indicate this weak phonon coupling approximation) whereΓ ± y are given by Eq. (B7c), and the incoherent part of the PME becomes where n ph (ω, T ) = e ω/(k B T ) − 1 −1 is the thermal phonon occupation number, and While we do not use the weak phonon coupling approximation for our numerical simulations (although for the phonon parameter sets we use, it is expected to give good quantitative agreement with the full PME), it is nonetheless useful to gain analytical insight into the underlying physical processes and scaling behavior of the phonon decoherence rates. The Hamiltonian term H PME gives a small renormalization of the dressed state energies, such that E ± → E ± + Im{Γ ± y }/2, which gives a perturbation to the nominal splitting ∆ ac . As mentioned in the main text, we neglect this phonon drive-dependent frequency shift renormalization when we refer to the frequency shift ∆ ac , and we have checked in our simulations that its effect is small (typically 10% of ∆ ac ).
FIG. 11. Schematic of phonon-assisted transitions between dressed states as given by the weak phonon coupling PME. In the AT regime (a), the phonon absorption processes with rateΓ 0 n ph (Ωcw, T ) and the phonon emission processes with rateΓ 0 [n ph (Ωcw, T ) + 1] take the system to both |X and |B states, as the dressed states |± are symmetric and antisymmetric combinations of these states, whereas in the ac Stark regime (b) the phonon absorption processes with ratẽ Γ 0 n ph (η, T ) and the phonon emission processes with ratẽ Γ 0 [n ph (η, T ) + 1] take the system to higher and lower energy, respectively. In this schematic we let n ph (ω) ≡ n ph (ω, T ).
The non-unitary part of the PME (which leads to phonon-related decoherence) under the weak phonon and secular approximations is given by L S PMEρ and is shown schematically in Fig. 11.
In the AT regime, with δ = 0, the phonon rate is simplyΓ 0 = π 2 J(Ω cw ), and we can furthermore neglect the exponential cutoff term in the phonon drive, as the regime where it becomes significant requires very high drive strengths and is difficult to realize in experiments. Then, if Ω cw k B T (very strong driving), the phonon dissipator simply takes the form of spontaneous emission from state |+ to state |− with rate α π 2 Ω 3 cw . At lower drive strengths Ω cw k B T , the phonon-induced transitions between |+ and |− states occur with the same rate α π 2 k B T Ω 2 cw , giving the expected Ω 2 cw scaling. In the ac Stark regime with |δ| Ω cw , we can find the leading order behavior (i.e., in δ/∆ ac ) of the incoherent phonon rates by approximating η ≈ |δ|, and Ω cw ≈ 2 √ ∆ ac δ. Then, for δ k B T , the phonon dissipator again takes the form of spontaneous emission from |+ to |− with approximate rate α2π∆ 3 ac δ ∆ac 2 e −δ 2 /(2ω 2 b ) . At lower detunings δ k B T , again the phonon-induced transitions between |+ and |− occur at the same approximate rate α2πk B T ∆ 2 ac |δ| ∆ac e −δ 2 /(2ω 2 b ) .
Appendix D: Polaron master equation with pulse drive
To extend the PME to include the excitation pulse, we add a term Ω p (t) cos (ω X t)σ X x to the Hamiltonian in Eq. (1), where Ω p (t) is the pulse amplitude satisfying ∞ −∞ dtΩ p (t) = π, neglecting the coupling of the pulse to the |B -|X transition which is appropriate provided E −1 B is much smaller than the duration of the pulse. We also neglect the coupling of the cw drive to the |X -|G transition. We then choose another interaction frame given by The PME superoperator is then , X m (t)] + H.c., (D2) and we have again absorbed a coherent attenuation factor B from the PME into our definition of Ω cw and Ω p (t) for easy comparison with the no-phonon case. The operators X m (t−τ, t) = U (t, t−τ )X m (t−τ )U † (t, t−τ ) are calculated using the "additional Markov" [37] approximation: U (t, t−τ ) ≈ exp [−iH S (t)τ ], and X m (t−τ ) ≈ X m (t) within the integrand, and this approximation is valid for pulse widths much greater than ω −1 b , and Appendix E: Continuous wave excitation error rate As mentioned in the main text, the presence of the σ X x term in Eq. (2) leads to weak cw excitation of the |X state, by means of the far off-resonant drive. As a result, in addition to the (pulse-wise) emitted photon number N , which is calculated in absence of this term, using instead Eq. (3) for the system Hamiltonian, there is a small, constant in time, photon emission flux. However, this contribution produces photons which are (for E B + δ > 0) blue Stark shifted by ≈ Ω 2 cw /4(E B + δ), which in many instances should be far-off from the desired peak of interest, and as such can be filtered out of the collected spectrum. Nonetheless, in this section, we consider the cw error rate assuming no emitted photons of the |X -|G transition are filtered.
To quantify this cw error rate, we define a quantity E cw to be the ratio of the average number of photons emitted in the absence of any pulse excitation over a duration equal to the laser repetition rate T rep , divided by the number of photons emitted by the source with a pulse excitation.
To calculate E cw , we assume as a first approximation that the pulse initializes the system in the |X state. We then can simulate, using the Hamiltonian of Eq. (2), the photons emitted N 0 over a duration T rep starting from the initial condition ρ 0 , which we choose to be the steadystate condition of the ME, and then divide this quantity by the number of photons emitted using the same Hamiltonian with instead initial condition |X , which we denote N X . Then, For the case where we do not consider phonon interactions, as an additional approximation to this quantity, we can also note that so long as Ω cw |E B + δ|, the cw excitation of the |X -|G transition is very weakly driven (i.e., see Fig. 1(d)), and as such the steady-state population of the |X state, and thus the photon flux due to the cw drive will be very small. In this case, we can approximate the biexciton population (and associated coherences) to be negligible, and solve the ME in the absence of phonons analytically to find: where ρ 0 X = Ω 2 cw /(4(E B + δ) 2 ) is the steady-state population of the |X state under this approximation, and in the second line we have noted that the difference between N X , the emitted photon number calculated over a time duration T rep with the Hamiltonian of Eq. (2) and the initial condition ρ(t = 0) = |X X|, and the total emitted photon number N calculated using the Hamiltonian of Eq. (3) and the same initial condition (which does not contain any cw excitation contributions) scales with ρ 0 X . Specifically, the final line of Eq. (E2) allows one to approximate the cw error rate in the absence of phonon couplings using only the emitted photon number N calculated without considering the far off-resonant driving of the |X -|G transition. When phonons are considered at a nonzero temperature, phonon-assisted transitions as shown schematically in Fig. 1(d) render this approximation inappropriate.
In Fig. 12, we plot the cw error rate E cw using both the full calculation N 0 /N X , as well as the approximate solution in the final line of Eq. (E2) for the case of no phonons, for both AT and ac Stark regimes. We use biexciton binding energy E B = 3.24 meV, as in our experiment in Ref. [27]. For both regimes, the approximate formula is an excellent approximation to the full solution. Also in both regimes, we find in contrast to the figures of merit studied in the main sections of this paper, the error rate is much worse for phonon parameter set II compared to parameter set I (which is close to the no-phonon case). This is due to the larger value of ω b,II giving a more appreciable value of the phonon spectral function at the detuning from the relevant transition J(E B + δ), meaning that the resonant process of absorption simultaneous with phonon absorption becomes more prominent, despite this process being mostly suppressed at low temperatures. We can see this clearly in Fig. 12(b), where for negative detunings, the error rate increases drastically for both phonon parameter sets, as the detuning E B +δ of the |X -|G transition becomes smaller and the spectral function becomes more and more appreciable even for the smaller phonon cutoff frequency of ω b,I . This is an additional reason to prefer positive detunings (and thus red frequency shifts) when operating in the ac Stark regime, for the typical case of positive binding energy E B .
We note, of course, that the cw error rate can be improved by using a smaller repetition rate T rep , however one has to keep in mind the relaxation rate of the SPS system, and if the repetition time were made too small the derivation and equations presented in Sec. II on the HOM and HBT experimental procedures may need to be revised; in this manner, the introduction of a cavity provides another potential advantage through enhancement of the relaxation rate. | 22,261.4 | 2022-01-09T00:00:00.000 | [
"Physics"
] |
Boron Modified Bifunctional Cu/SiO2 Catalysts with Enhanced Metal Dispersion and Surface Acid Sites for Selective Hydrogenation of Dimethyl Oxalate to Ethylene Glycol and Ethanol
Boron (B) promoter modified Cu/SiO2 bifunctional catalysts were synthesized by sol-gel method and used to produce ethylene glycol (EG) and ethanol (EtOH) through efficient hydrogenation of dimethyl oxalate (DMO). Experimental results showed that boron promoter could significantly improve the catalytic performance by improving the structural characteristics of the Cu/SiO2 catalysts. The optimized 2B-Cu/SiO2 catalyst exhibited excellent low temperature catalytic activity and long-term stability, maintaining the average EG selectivity (Sel.EG) of 95% at 190 °C, and maintaining the average EtOH selectivity (Sel.EtOH) of 88% at 260 °C, with no decrease even after reaction of 150 h, respectively. Characterization results revealed that doping with boron promoter could significantly increase the copper dispersion, enhance the metal-support interaction, maintain suitable Cu+/(Cu+ + Cu0) ratio, and diminish metallic copper particles during the hydrogenation of DMO. Thus, this work introduced a bifunctional boron promoter, which could not only improve the copper dispersion, reduce the formation of bulk copper oxide, but also properly enhance the acidity of the sample surface, so that the Cu/SiO2 sample could exhibit superior EG selectivity at low temperature, as well as improving the EtOH selectivity at high temperature.
Introduction
Ethylene glycol (EG) and ethanol (EtOH) are both versatile chemical raw materials and intermediates, which can be used for the synthesis of plentiful fine chemicals and applying in all walks of life [1][2][3]. The traditional preparation routes are derived from petroleum or biological routes. However, with the shrinkage of the crude oil resources and increasing environmental pollution, the coal chemical route based on C1 chemical technology has become a research hotspot in recent years [1][2][3]. Among them, the hydrogenation of DMO (dimethyl oxalate) is increasing important because its ability which can build a bridge between syngas and methyl glycolate (MG), EG and EtOH, etc. (Scheme 1). However, challenges still remain in how to obtain the corresponding hydrogenation target products according to market demand. sented the highest Sel. EG of 95.0% at 210 °C [22]. Chen et al. have synthesized mannitol modified Cu/SiO2 catalysts by ammonia evaporation method, which showed excellent catalytic activity (average Sel. EG of 95.0%) at 200 °C [23]. Ren et al. have introduced La promoter into Cu/SiO2 catalysts by hydrolysis precipitation method, which showed the outstanding catalytic performance with the Sel. EtOH of 95.8% at 250 °C [24]. However, there are still huge challenges need to be overcome, especially in terms of their poor low-temperature catalytic activity and stability. Scheme 1. Hydrogenation reaction process of the DMO to MG, EG, and EtOH.
As a promoter additive, B2O3 dopant can facilitate the formation of more surface defects and adjust acid-base properties of Cu/SiO2 catalysts. Yin et al. concluded that the content of boron promoter was crucially important to improve the Con. DMO and Sel. EG in the DMO hydrogenation. Appropriate addition of B2O3 by the ammonia-evaporation method could increase the dispersity and specific surface area of Cu species, thus improving the catalytic performance of Cu/SiO2 catalysts [9]. Zhu et al. found that the introduction of boron promoter to Cu/SiO2 catalysts by precipitation-gel method could effectively suppress the growth of metallic copper particles and increase the dispersion of Cu species, which could enhance catalytic activity and stability [25]. Zhao et al. found that the doping of boric acid on Cu/SiO2 catalyst by ammonia evaporation hydrothermal method had a crucial influence on its catalytic performance for the deep-hydrogenation of DMO to EtOH [26]. As we know, compared with other preparation methods, the Cu/SiO2 catalysts developed by the sol-gel method had the advantages of smaller copper particles, higher dispersion, and larger surface area. However, there were almost no reports on doping boron promoter to improve the catalytic performance of Cu/SiO2 catalysts by sol-gel Scheme 1. Hydrogenation reaction process of the DMO to MG, EG, and EtOH.
According to the literature, Cu-based catalysts are extensively investigated owing to the outstanding activity in C-O and C=O bonds and relative inactivity for C-C bonds in DMO hydrogenation [4][5][6]. Among them, the Cu/SiO 2 catalysts presented good catalytic performance, low cost and environmentally friendly in DMO vapor-phase hydrogenation reaction [7][8][9]. However, pure Cu/SiO 2 catalysts are difficult to meet the requirements of large-scale industrial applications owing to their poor activity and stability. Generally, the suitable Cu + /Cu 0 ratio and the strong interactions between active ingredient copper and SiO 2 support or promoter additives show a crucial influence over the catalytic performance of Cu/SiO 2 catalysts. The deactivation of Cu/SiO 2 catalysts is due to Cu coagulation and Cu + species loss by valence transition during the reaction [10]. To solve this problem, numerous studies have been conducted on preparation methods and promoter additives [11][12][13][14][15][16][17][18][19][20][21][22]. In previous studies, different preparation methods, including sol-gel, ion-exchange, deposition-precipitation, ammonia evaporation hydrothermal, and ammonia evaporation method have been applied in preparing Cu/SiO 2 catalysts with high metallic dispersion and moderate interaction between copper and silica [10,11]. Moreover, different kinds of species (e.g., Zn, Ni, B, In, Ag, or Mg, etc.) were doped to increase the Cu species dispersion and maintain a suitable Cu + /(Cu + + Cu 0 ) ratio, thereby improving the catalytic performance of the Cu/SiO 2 catalyst [8,[18][19][20][21]. Ye et al. have successfully synthesized β-cyclodextrin (β-CD) modified Cu-SiO 2 catalysts by the sol-gel method, which presented the highest Sel. EG of 95.0% at 210 • C [22]. Chen et al. have synthesized mannitol modified Cu/SiO 2 catalysts by ammonia evaporation method, which showed excellent catalytic activity (average Sel. EG of 95.0%) at 200 • C [23]. Ren et al. have introduced La promoter into Cu/SiO 2 catalysts by hydrolysis precipitation method, which showed the outstanding catalytic performance with the Sel. EtOH of 95.8% at 250 • C [24]. However, there are still huge challenges need to be overcome, especially in terms of their poor low-temperature catalytic activity and stability.
As a promoter additive, B 2 O 3 dopant can facilitate the formation of more surface defects and adjust acid-base properties of Cu/SiO 2 catalysts. Yin et al. concluded that the content of boron promoter was crucially important to improve the Con. DMO and Sel. EG in the DMO hydrogenation. Appropriate addition of B 2 O 3 by the ammonia-evaporation method could increase the dispersity and specific surface area of Cu species, thus improving the catalytic performance of Cu/SiO 2 catalysts [9]. Zhu et al. found that the introduction of boron promoter to Cu/SiO 2 catalysts by precipitation-gel method could effectively suppress the growth of metallic copper particles and increase the dispersion of Cu species, which could enhance catalytic activity and stability [25]. Zhao et al. found that the doping of boric acid on Cu/SiO 2 catalyst by ammonia evaporation hydrothermal method had a crucial influence on its catalytic performance for the deep-hydrogenation of DMO to EtOH [26]. As we know, compared with other preparation methods, the Cu/SiO 2 catalysts developed by the sol-gel method had the advantages of smaller copper particles, higher dispersion, and larger surface area. However, there were almost no reports on doping boron promoter to improve the catalytic performance of Cu/SiO 2 catalysts by sol-gel method. Moreover, there were also relatively few reports on the effect of the boron promoter on the Cu/SiO 2 catalysts for simultaneously catalyzing DMO hydrogenation reaction to produce EG and EtOH.
In our previous work, we had introduced cyclodextrin to modify the Cu/SiO 2 catalysts by sol-gel method, but the cyclodextrin was neutral and did not exist in the sample after calcination, so it was difficult to change the surface acidity of the sample [22]. In this work, boron promoter modified Cu/SiO 2 bifunctional catalysts for the efficient hydrogenation of DMO to EG and EtOH were synthesized by sol-gel method. By adjusting the content of B species and reaction temperature, the catalytic performance for the selective hydrogenation of DMO to EG and EtOH could be significantly improved. Variety of characterization methods were utilized to deeply investigate their structures, surface chemistry and structure-activity relationship. Hence, the optimized 2B-Cu/SiO 2 catalyst is highly efficient, low-cost, environmentally friendly, as well as showing an industrial application prospect.
Catalyst Preparation
The 15 wt% Cu/SiO 2 catalysts with different B/Cu molar ratio were developed by using sol-gel method. The specific details of the synthesis method were described as follows: (1) Under the circumstance of constant stirring, 13 g Cu(NO 3 ) 2 ·3H 2 O was added to a 300 mL beaker with 45 g deionized water.
(2) A certain amount of H 3 BO 3 (B/Cu molar ratio of 0.25, 1, 2, and 3) were dissolved in the above blue transparent solution. And then 100 g of ethanol and 70 g of tetraethylorthosilicate (TEOS) were added and stirred vigorously in a water bath for 1-2 h at 70 • C, forming a blue jelly-like sol-gel. (3) The blue jelly-like sol-gel was changed into the blocky B-Cu/SiO 2 catalyst precursors by aging for 24 h at room temperature and drying for 61 h (17 h at 70 • C, 40 h at 90 • C, and then 4 h at 120 • C). (4) Finally, after calcining (5 h at 300 • C), crushing and sieving to 20-40 meshes, the B-Cu/SiO 2 catalyst precursors was then denoted as xB-Cu/SiO 2 catalysts (x representing B/Cu molar ratio).
Catalyst Characterization
To deeply investigate their structures, surface chemistry and structure-activity relationship of boron modified Cu/SiO 2 catalysts, variety of characterization (such as BET, N 2 O titration, XRD, H 2 -TPR, NH 3 -TPD, TEM, STEM-EDX mapping, XPS, XAES, etc.) methods were utilized in this work. More operational details about the characterization methods were presented in the supporting information.
Catalytic Reaction
As presented in Figure S1, all the xB-Cu/SiO 2 catalysts for the hydrogenation of DMO to EG and EtOH were evaluated in a fixed-bed reactor with a 120 mm stainless steel singletube (10 mm internal diameter). Briefly, 10 mL of catalyst (20-40 meshes) was located in the constant temperature zone of the reaction tube, and the remaining space of the tube was filled with the silica sand. All the xB-Cu/SiO 2 catalysts were reduced under 99.99% hydrogen for 8 h at 350 • C and 1.0 MPa pressure prior to the evaluation. The reactants (20 wt% DMO methanol solution) were injected into the gasification chamber and mixed with hydrogen (99.99%) by using high-pressure pump, after naturally cooling down to the reaction temperature. The reaction conditions were as follow: the DMO weight liquid hourly space velocity (LHSV) = 0.2 h −1 , H 2 /DMO molar ratio = 50, P = 2.0 MPa. After the reaction ran smoothly for 6 h, the reactants and products were separated and analyzed by a gas chromatograph (GC-900C) equipped with FID detector. The conversion rate of DMO and the selectivity to various products (MG, EG and EtOH) are calculated as follows: The physicochemical properties of the xB-Cu/SiO 2 catalysts were summarized in Table 1. Notably, all the actual loading of the copper and boron measured by ICP-OES were slightly lower than the theoretical values, but the measured B/Cu ratios were basically consistent with the theoretical values. As presented in Figure S2, all the xB-Cu/SiO 2 catalysts showed typical IV type isotherms with H 2 -type hysteresis loops, indicating that all the catalysts possessed mesoporous structure [27]. Besides, Table 1 also showed the, pore volume (V p ) and pore diameter (D p ) and BET surface area (S BET ) of the xB-Cu/SiO 2 catalysts. Obviously, the introduction of B species slightly influenced the V p and D p of the xB-Cu/SiO 2 catalysts. In addition, the S BET of the xB-Cu/SiO 2 catalysts displayed a volcanic change with the increase of boron content, and reached its summit in case of B/Cu molar ratio at 2, with the maximum value of 449.3 m 2 ·g −1 . However, further increasing the content of B species would lead to a gradual decline in SBET, which could be due to the addition of excessive boron promoter covered the surface of the xB-Cu/SiO 2 catalysts [25].
As we know, the Cu dispersion (D Cu ) and Cu surface area (S Cu ) are considered to be two key elements that determine the catalytic activity of the Cu/SiO 2 catalysts in the ester hydrogenation reactions [28]. As shown in Table 1, after introducing B element to Cu/SiO 2 catalyst, D Cu and S Cu have been improved to a certain extent. It was worth noting that 2B-Cu/SiO 2 possessed the highest D Cu (21.6%) and S Cu (23.8 m 2 ·g −1 ), however further increasing the content of B species would lead to a gradual decline in both D Cu and S Cu . This might be due to low boron loading acting as an isolating agent, inhibited thermal transmigration and agglomeration of Cu particles during the H 2 reduction process. Excessive boron loading formed a thin film on the copper particles at the surface of the catalyst, which hindered the contact between the copper particles and N 2 O, thereby reducing D Cu and S Cu [25].
XRD and TEM
To study the effect of boron promoter on the phase structure of Cu/SiO 2 catalyst, the xB-Cu/SiO 2 catalysts were characterized by XRD patterns. Figure 1A showed the XRD characterization patterns of calcined xB-Cu/SiO 2 catalysts. Notably, the crystal structure of xB-Cu/SiO 2 catalysts did not changed significantly after doping with boron promoter, and an amorphous SiO 2 diffraction characteristic peak emerged at 2θ = 22.8 • . When the B/Cu molar ratio x was lower than 2, obvious CuO characteristic diffraction peaks could be found at 2θ = 35.5 • , 38.7 • and 48.8 • (JCPDS 45-0937), and the intensity of the CuO diffraction peaks gradually decreased with the increase of boron content. With further increasing boron loading leaded to the disappearance of CuO diffraction peaks. These results confirmed that adding boron promotor could effectively inhibit the agglomeration and growth of CuO particles, and CuO particles were highly dispersed in the SiO 2 support. Figure 1B showed the XRD characterization spectrum of the reduced xB-Cu/SiO 2 catalysts. Upon reduction at 220 • C, CuO diffraction peaks (35.5 • , 38.7 • and 48.8 • ) of the xB-Cu/SiO 2 catalysts disappeared, and three diffraction peaks at 2θ of 43.3 • , 50.4 • , and 74.1 • emerged, which were assigned to the Cu (111), (200) and (220) faces (JCPDS 04-0836), respectively. Overall, the half-peak width of Cu 0 diffraction peaks increased and their intensity gradually diminished with the increase the content of boron promoter. When the B/Cu molar ratio was 2, the intensity of Cu 0 diffraction peaks became unobservable, indicating that the Cu particle size was too small to detect and the Cu particles were more uniformly dispersed. However, further increase boron loading could make the Cu 0 diffraction peaks become sharper. Besides, the average copper particle sizes of the reduced xB-Cu/SiO 2 catalysts calculated by Scherrer equation were also summarized in Figure 1B. Notably, the average copper particle sizes gradually decreased from 10.5 nm for Cu/SiO 2 to 7.4 nm for 3B-Cu/SiO 2 catalyst. These results suggested that doping with boron promoter could decrease the copper particle sizes and promote the D Cu . The 2B-Cu/SiO 2 catalyst exhibited the smallest particle size and maximum dispersion, which could be attributed to the most suitable interaction between Cu and B species at the B/Cu molar of 2. On the contrary, the addition of too much or too little B species could lead to the growth of copper particles and the decrease of dispersion. Moreover, the B 2 O 3 diffraction peaks emerged at 2θ = 15 • and 27.8 • (JCPDS 06-0297) when the B/Cu molar ratio was 3. In addition, the peak at 2θ of 36.5 • was ascribable to the (111) lattice face of Cu 2 O (JCPDS05-0667). The formation of Cu 2 O species originated from the strong interaction between the copper and the silicon support, which made Cu 2+ partially reduce to Cu + [29]. and an amorphous SiO2 diffraction characteristic peak emerged at 2θ = 22.8°. When the B/Cu molar ratio x was lower than 2, obvious CuO characteristic diffraction peaks could be found at 2θ = 35.5°, 38.7° and 48.8° (JCPDS 45-0937), and the intensity of the CuO diffraction peaks gradually decreased with the increase of boron content. With further increasing boron loading leaded to the disappearance of CuO diffraction peaks. These results confirmed that adding boron promotor could effectively inhibit the agglomeration and growth of CuO particles, and CuO particles were highly dispersed in the SiO2 support. Figure 1B showed the XRD characterization spectrum of the reduced xB-Cu/SiO2 catalysts. Upon reduction at 220 °C, CuO diffraction peaks (35.5°, 38.7° and 48.8°) of the xB-Cu/SiO2 catalysts disappeared, and three diffraction peaks at 2θ of 43.3°, 50.4°, and 74.1° emerged, which were assigned to the Cu (111), (200) and (220) faces (JCPDS 04-0836), respectively. Overall, the half-peak width of Cu 0 diffraction peaks increased and their intensity gradually diminished with the increase the content of boron promoter. When the B/Cu molar ratio was 2, the intensity of Cu 0 diffraction peaks became unobservable, indicating that the Cu particle size was too small to detect and the Cu particles were more uniformly dispersed. However, further increase boron loading could make the Cu 0 diffraction peaks become sharper. Besides, the average copper particle sizes of the reduced xB-Cu/SiO2 catalysts calculated by Scherrer equation were also summarized in Figure 1B. Notably, the average copper particle sizes gradually decreased from 10.5 nm for Cu/SiO2 to 7.4 nm for 3B-Cu/SiO2 catalyst. These results suggested that doping with boron promoter could decrease the copper particle sizes and promote the DCu. The 2B-Cu/SiO2 catalyst exhibited the smallest particle size and maximum dispersion, which could be attributed to the most suitable interaction between Cu and B species at the B/Cu molar of 2. On the contrary, the addition of too much or too little B species could lead to the growth of copper particles and the decrease of dispersion. Moreover, the B2O3 diffraction peaks emerged at 2θ = 15° and 27.8° (JCPDS 06-0297) when the B/Cu molar ratio was 3. In addition, the peak at 2θ of 36.5° was ascribable to the (111) lattice face of Cu2O (JCPDS05-0667). The formation of Cu2O species originated from the strong interaction between the copper and the silicon support, which made Cu 2+ partially reduce to Cu + [29]. To further observe the surface morphology of the catalysts, the reduced Cu/SiO2 and 2B-Cu/SiO2 catalysts were characterized by TEM images, and the relevant results were shown in Figure 2. It was notable that the Cu particles on the surface of Cu/SiO2 catalyst were relatively large and agglomerate to a certain extent. The mean size of Cu particles of To further observe the surface morphology of the catalysts, the reduced Cu/SiO 2 and 2B-Cu/SiO 2 catalysts were characterized by TEM images, and the relevant results were shown in Figure 2. It was notable that the Cu particles on the surface of Cu/SiO 2 catalyst were relatively large and agglomerate to a certain extent. The mean size of Cu particles of the Cu/SiO 2 catalyst was about 8.4 nm based on the statistical results of TEM images. However, after adding an appropriate amount of boron promoter, the metallic Cu particles of the 2B-Cu/SiO 2 catalyst were more uniformly dispersed without obvious agglomeration, and the average diameter was about 5.1 nm. Moreover, the enlarged TEM image and the EDS data in Figure S3 indicated that the Cu and B species were homogeneously evenly dispersed on the silica texture, overlapping with each other, implying that there was a strong interaction between active component Cu and boron promoter. The reason might be that Cu and B species were in close proximity, hindering the conversion of Cu 2+ to Cu 0 to a certain extent. The results of TEM images further proved that boron promoter could promote the D Cu of the Cu/SiO 2 catalysts by restraining the increase of Cu particles size, which were consistent with the conclusions of N 2 -adsorption, XRD and H 2 -TPR. and the EDS data in Figure S3 indicated that the Cu and B species were homogeneously evenly dispersed on the silica texture, overlapping with each other, implying that there was a strong interaction between active component Cu and boron promoter. The reason might be that Cu and B species were in close proximity, hindering the conversion of Cu 2+ to Cu 0 to a certain extent. The results of TEM images further proved that boron promoter could promote the DCu of the Cu/SiO2 catalysts by restraining the increase of Cu particles size, which were consistent with the conclusions of N2-adsorption, XRD and H2-TPR.
H2-TPR and NH3-TPD
As presented in Figure 3A, all the calcined xB-Cu/SiO2 catalysts with different B/Cu ratios were systematically characterized by the H2-TPR technology, in order to reveal the influence of the boron loading on the reducibility of the catalysts. It was clearly found that there was a H2 consumption peak at about 232 °C of the unmodified Cu/SiO2 catalyst, which could be ascribed to the reduction of highly dispersed copper species [16]. After doping with boron promoter, the H2 consumption peak of the xB-Cu/SiO2 catalysts gradually shifted to the high temperature direction. These results could be attributed to the fact that B2O3 was an electrophilic substance, gaining electrons more easily than the supporter SiO2, and there was an intense interaction between the support, copper oxide, and boron oxide species [21]. With the increase of boron content, the copper species on the catalyst surface was more difficult to gain electrons. Moreover, all the total H2 consumption values calculated from the TPR results by taking CuO as standard material were lower than the total theoretical H2 consumption values (Table S1). These results suggested that Cu + and Cu 0 coexisted on the surface of the reduced catalysts, which was in concordance with the XRD and XPS results.
H 2 -TPR and NH 3 -TPD
As presented in Figure 3A, all the calcined xB-Cu/SiO 2 catalysts with different B/Cu ratios were systematically characterized by the H 2 -TPR technology, in order to reveal the influence of the boron loading on the reducibility of the catalysts. It was clearly found that there was a H 2 consumption peak at about 232 • C of the unmodified Cu/SiO 2 catalyst, which could be ascribed to the reduction of highly dispersed copper species [16]. After doping with boron promoter, the H 2 consumption peak of the xB-Cu/SiO 2 catalysts gradually shifted to the high temperature direction. These results could be attributed to the fact that B 2 O 3 was an electrophilic substance, gaining electrons more easily than the supporter SiO 2 , and there was an intense interaction between the support, copper oxide, and boron oxide species [21]. With the increase of boron content, the copper species on the catalyst surface was more difficult to gain electrons. Moreover, all the total H 2 consumption values calculated from the TPR results by taking CuO as standard material were lower than the total theoretical H 2 consumption values (Table S1). These results suggested that Cu + and Cu 0 coexisted on the surface of the reduced catalysts, which was in concordance with the XRD and XPS results. In addition, a small shoulder peak appeared near the main reduction peak of the xB-Cu/SiO2 catalysts when the B/Cu molar ratio was higher than 1, which may be due to the formation of new copper species (such as copper borate). The results of H2-TPR further proved that the basic CuO and the acidic B2O3 had a strong interaction.
As illustrated in Figure 3B, the NH3-TPD technique was adopted to analyze the effect of boron promoter on the the strength and quantity of acid sites on the xB-Cu/SiO2 catalysts. It's worth noting that all the xB-Cu/SiO2 catalysts presented two apparent NH3 desorption peaks (at about 105 °C and 500 °C), representing the weakly acidic and strong acidic sites, respectively [30]. The low-temperature and high-temperature peaks could be attributed to ammonia adsorbed on Si−OH and ammonia adsorbed on Cu particles, respectively [31]. As summarized in Table 2, the number of surface acidic sites increased to a certain extent with the increase of B/Cu molar ratio, indicating that the adding B species In addition, a small shoulder peak appeared near the main reduction peak of the xB-Cu/SiO 2 catalysts when the B/Cu molar ratio was higher than 1, which may be due to the formation of new copper species (such as copper borate). The results of H 2 -TPR further proved that the basic CuO and the acidic B 2 O 3 had a strong interaction.
As illustrated in Figure 3B, the NH 3 -TPD technique was adopted to analyze the effect of boron promoter on the the strength and quantity of acid sites on the xB-Cu/SiO 2 catalysts. It's worth noting that all the xB-Cu/SiO 2 catalysts presented two apparent NH 3 desorption peaks (at about 105 • C and 500 • C), representing the weakly acidic and strong acidic sites, respectively [30]. The low-temperature and high-temperature peaks could be attributed to ammonia adsorbed on Si−OH and ammonia adsorbed on Cu particles, respectively [31]. As summarized in Table 2, the number of surface acidic sites increased to a certain extent with the increase of B/Cu molar ratio, indicating that the adding B species to Cu/SiO 2 catalysts could increase the strength and quantity of surface acidic sites, which was consistent with previously reported results [9].
XPS and XAES
The XPS and X-ray induced Auger spectra (XAES) spectra of the reduced xB-Cu/SiO 2 catalysts were shown in Figure 4, which were further measured to identify the surface composition and chemical state of catalysts. Remarkably, only two peaks were observed at binding energies of 932.5 eV and 952.4 eV belonging to Cu2p 3/2 and Cu2p 1/2 , respectively ( Figure 4A). This strongly suggested that the Cu 2+ had been reduced to Cu + or Cu 0 because of the absence of 2p→3d satellite peaks around 933.5 eV or 934.9 eV, which was consistent with the XRD results. As presented in Figure 4C, the binding energy of B 1s peak center at 193.5 eV was attributed to trivalent boron (B 3+ ), but no B 0 peak was observed at 188.7 eV, which was corresponding to the result by Zhu et al. [25]. Notably, with the increase of boron content, the surface amount of B 3+ gradually increased based on the intensity of B1s peaks of B1s XPS spectra. Since the binding energies of Cu + and Cu 0 were located at approximately the same position, it was necessary to separate these two species by Cu LMM XAES spectrum. The asymmetric and broad Auger peak in the Cu LMM XAES spectrum could be deconvoluted into two overlapped peaks at 913.8-914.4 eV and 918.3-918.6 eV, representing Cu + and Cu 0 species, slightly higher than the value of the pure Cu/SiO 2 catalyst [16,32]. The synergetic effect between Cu + and Cu 0 species on the surface of Cu-based catalysts was extensively acknowledged in the hydrogenation of DMO [5,6,11]. As summarized in Table 3, the surface Cu + /(Cu + + Cu 0 ) ratio was gradually increased from 59.5% to 69.2% with the increase of boron content, which corresponded with the results reported by Zhu et al. [25]. This might arise from the strong interaction between Cu species and B species, which resulted in a lower surface copper reduction degree and partial positive charge on the copper surfaces. Notably, the Cu + /(Cu + + Cu 0 ) ration of the optimal 2B-Cu/SiO 2 catalyst (64.6%) was lower than that of 3B-Cu/SiO 2 catalyst (69.2%), which could be attributed to the fact that the acidity and electron affinity of B 2 O 3 were stronger than that of support silica, resulting in an increase in the amount of Cu + . In addition, the Auger parameters (AP) of Cu + and Cu 0 species approximately the same as the previously published values of 1847.0 eV and 1851.0 eV, respectively [33].
Catalytic Performance Test
The gas-phase catalytic hydrogenation performance of the xB-Cu/SiO 2 catalysts with different B/Cu molar ratios were investigated in a stainless steel fixed-bed reactor, and the related results were presented in Figure 5. Clearly, the DMO conversion (Con. DMO ) and EG selectivity (Sel. EG ) could be improved by introducing B element under identical conditions. Among them, the 2B-Cu/SiO 2 catalyst with B/Cu molar ratio of 2 exhibited the highest Sel. EG of 93.5% (Con. DMO = 100%) at 200 • C. In addition, when the reaction temperature reached 260 • C, occurring more side reactions of excessive hydrogenation and dehydration. Notably, as the B/Cu molar ratio increased, the selectivity of EtOH (Sel. EtOH ) increased, whereas the selectivity of other byproducts including 1,2-butanediol (1,2-BDO) and 1,2-propanediol (1,2-PDO) decreased. Also, 2B-Cu/SiO 2 catalyst with B/Cu molar ratio of 2 afforded the highest Sel. EtOH of 88.2% (Sel. Others = 9.7%) at 260 • C. This result might be interpreted by the fact of that doping boron promoter could increase the number of surface acidic sites, which facilitated the improvement of the selectivity of the EtOH and lowering the selectivity of 1,2-BDO and 1,2-PDO [31]. In short, higher or lower B/Cu molar ratio would lead to a decrease in Sel. EG or Sel. EtOH to a certain extent, indicating that adding boron promoter to Cu/SiO 2 catalysts with suitable B/Cu molar ratio was more conducive to enhance the catalytic hydrogenation reaction activity of DMO hydrogenation to EG and EtOH. Remarkably, one of the most attractive features of the 2B-Cu/SiO 2 catalyst was that we could produce EG and EtOH in the same reaction device by simply changing the reaction temperature.
The effects of different reaction temperatures on the catalytic performance of the Cu/SiO 2 and 2B-Cu/SiO 2 catalysts were explored and summarized in detail ( Figure 6). The Con. DMO Figure 6A). On further increasing the temperature, EG was then over-hydrogenated to produce EtOH, which leaded to a decrease in Sel. EG and an increase in Sel. EtOH of 2B-Cu/SiO 2 catalyst. As a result, the Sel. EtOH reached the maximum of 88.2% (Sel. EG = 2.1%) at 260 • C, and then decreases to 79.1% (Sel. EG = 1.2%), which could be attributed to more side reactions occurring at higher temperature. However, under identical reaction conditions, the Sel. EtOH of Cu/SiO 2 catalyst was only 65.4% (Sel. EG = 25.9%) at 260 • C. The conversion frequency (TOF) represented the intrinsic activity of the catalyst. As described in Table S2, the optimal 2B-Cu/SiO 2 catalyst possessed the highest TOF value of 3.4 h −1 , while the TOF of Cu/SiO 2 catalyst was only 2.5 h −1 . The results showed that the adding boron promoter to the catalyst could make the catalyst generating more active sites after reduction. different B/Cu molar ratios were investigated in a stainless steel fixed-bed reactor, and the related results were presented in Figure 5. Clearly, the DMO conversion (Con. DMO) and EG selectivity (Sel. EG) could be improved by introducing B element under identical conditions. Among them, the 2B-Cu/SiO2 catalyst with B/Cu molar ratio of 2 exhibited the highest Sel. EG of 93.5% (Con. DMO = 100%) at 200 °C. In addition, when the reaction temperature reached 260 °C, occurring more side reactions of excessive hydrogenation and dehydration. Notably, as the B/Cu molar ratio increased, the selectivity of EtOH (Sel. EtOH) increased, whereas the selectivity of other byproducts including 1,2-butanediol (1,2-BDO) and 1,2-propanediol (1,2-PDO) decreased. Also, 2B-Cu/SiO2 catalyst with B/Cu molar ratio of 2 afforded the highest Sel. EtOH of 88.2% (Sel. Others = 9.7%) at 260 °C. This result might be interpreted by the fact of that doping boron promoter could increase the number of surface acidic sites, which facilitated the improvement of the selectivity of the EtOH and lowering the selectivity of 1,2-BDO and 1,2-PDO [31]. In short, higher or lower B/Cu molar ratio would lead to a decrease in Sel. EG or Sel. EtOH to a certain extent, indicating that adding boron promoter to Cu/SiO2 catalysts with suitable B/Cu molar ratio was more conducive to enhance the catalytic hydrogenation reaction activity of DMO hydrogenation to EG and EtOH. Remarkably, one of the most attractive features of the 2B-Cu/SiO2 catalyst was that we could produce EG and EtOH in the same reaction device by simply changing the reaction temperature. The effects of different reaction temperatures on the catalytic performance of the Cu/SiO2 and 2B-Cu/SiO2 catalysts were explored and summarized in detail ( Figure 6). The Conv. DMO and Sel. EG of the 2B-Cu/SiO2 catalyst sharply rose with the increase of temperature. And then the Sel. EG reached the maximum of 94.9% (Con. DMO = 99.99%) at 190 °C. However, under identical reaction conditions, the Con. DMO and Sel. EG of Cu/SiO2 catalyst were only 94.5% and 81.1%, respectively ( Figure 6A). On further increasing the temperature, EG was then over-hydrogenated to produce EtOH, which leaded to a decrease in Sel. EG and an increase in Sel. EtOH of 2B-Cu/SiO2 catalyst. As a result, the Sel. EtOH reached the maximum of 88.2% (Sel. EG = 2.1%) at 260 °C, and then decreases to 79.1% (Sel. EG = 1.2%), which could be attributed to more side reactions occurring at higher temperature. However, under identical reaction conditions, the Sel. EtOH of Cu/SiO2 catalyst was only 65.4% (Sel. EG = 25.9%) at 260 °C. The conversion frequency (TOF) represented the intrinsic activity of the catalyst. As described in Table S2, the optimal 2B-Cu/SiO2 catalyst possessed the highest TOF value of 3.4 h −1 , while the TOF of Cu/SiO2 catalyst was only 2.5 h −1 . The results showed that the adding boron promoter to the catalyst could make the catalyst generating more active sites after reduction. Generally, the long-term stability of Cu-based catalysts was critical for the practical application for vapor-phase DMO hydrogenation. Hence, the long-term stability test of Cu/SiO2 and optimized 2B-Cu/SiO2 catalysts were evaluated under the same conditions. As shown in Figure 7A, the 2B-Cu/SiO2 catalyst exhibited excellent catalytic stability, which maintained its stable Con. DMO (100%) and Sel. EG (95%) at 190 °C, with no decrease even after 150 h of reaction. On the contrary, the Cu/SiO2 catalyst deactivated apparently within 100 h under identical reaction conditions. Both the Con. DMO and Sel. EG were gradually reduced, while MG gradually became the main product. When the reaction temperature was 260 °C, a similar trend could be observed in Figure 7B, where the Con. DMO and Sel. EtOH of 2B-Cu/SiO2 catalyst were maintained at about 100% and 88%, respectively. For the Cu/SiO2 catalyst, the Con. DMO was maintained around 100%, while the Sel. EtOH sharply decreased from 62% to 34% within 100 h under identical reaction conditions. Conversely, the selectivity of EG, 1,2-BDO and 1,2-PDO gradually rose. Besides, in order to better study the stability performance of the optimal 2B-Cu/SiO2 catalyst, the long-term stability Generally, the long-term stability of Cu-based catalysts was critical for the practical application for vapor-phase DMO hydrogenation. Hence, the long-term stability test of Cu/SiO 2 and optimized 2B-Cu/SiO 2 catalysts were evaluated under the same conditions. As shown in Figure 7A, the 2B-Cu/SiO 2 catalyst exhibited excellent catalytic stability, which maintained its stable Con. DMO (100%) and Sel. EG (95%) at 190 • C, with no decrease even after 150 h of reaction. On the contrary, the Cu/SiO 2 catalyst deactivated apparently within 100 h under identical reaction conditions. Both the Con. DMO and Sel. EG were gradually reduced, while MG gradually became the main product. When the reaction temperature was 260 • C, a similar trend could be observed in Figure 7B, where the Con. DMO and Sel. EtOH of 2B-Cu/SiO 2 catalyst were maintained at about 100% and 88%, respectively. For the Cu/SiO 2 catalyst, the Con. DMO was maintained around 100%, while the Sel. EtOH sharply decreased from 62% to 34% within 100 h under identical reaction conditions. Conversely, the selectivity of EG, 1,2-BDO and 1,2-PDO gradually rose. Besides, in order to better study the stability performance of the optimal 2B-Cu/SiO 2 catalyst, the long-term stability was carried out under the condition of the initial DMO conversion rate of 100%. As presented in Figure S4, the 2B-Cu/SiO 2 also excellent catalytic stability even after 50 h of reaction, maintaining its stable Con. DMO (80%) and Sel. EG (23.5%) at 170 • C. These results revealed that adding an appropriate amount of bifunctional boron promoter could significantly enhance the catalytic stability of Cu/SiO 2 catalyst. Figure 7A, the 2B-Cu/SiO2 catalyst exhibited excellent catalytic stability, which maintained its stable Con. DMO (100%) and Sel. EG (95%) at 190 °C, with no decrease even after 150 h of reaction. On the contrary, the Cu/SiO2 catalyst deactivated apparently within 100 h under identical reaction conditions. Both the Con. DMO and Sel. EG were gradually reduced, while MG gradually became the main product. When the reaction temperature was 260 °C, a similar trend could be observed in Figure 7B, where the Con. DMO and Sel. EtOH of 2B-Cu/SiO2 catalyst were maintained at about 100% and 88%, respectively. For the Cu/SiO2 catalyst, the Con. DMO was maintained around 100%, while the Sel. EtOH sharply decreased from 62% to 34% within 100 h under identical reaction conditions. Conversely, the selectivity of EG, 1,2-BDO and 1,2-PDO gradually rose. Besides, in order to better study the stability performance of the optimal 2B-Cu/SiO2 catalyst, the long-term stability was carried out under the condition of the initial DMO conversion rate of 100%. As presented in Figure S4, the 2B-Cu/SiO2 also excellent catalytic stability even after 150 h of reaction, maintaining its stable Con. DMO (80%) and Sel. EG (23.5%) at 170 °C. These results revealed that adding an appropriate amount of bifunctional boron promoter could significantly enhance the catalytic stability of Cu/SiO2 catalyst.
Structure-Performance Relationship
As reported in previous work, Cu/SiO2 catalysts were extensively investigated in DMO hydrogenation reactions, and various promoters (e.g., Zn, Ni, In, Ag, or Mg, etc.)
Structure-Performance Relationship
As reported in previous work, Cu/SiO 2 catalysts were extensively investigated in DMO hydrogenation reactions, and various promoters (e.g., Zn, Ni, In, Ag, or Mg, etc.) had been adopted to improve their catalytic performance [2,32,[34][35][36][37]. Among them, doping with B species was one of the most useful methods to improve their performance by affecting D Cu . However, there were relatively few reports on the effect of the boron promoter on the Cu/SiO 2 catalysts for simultaneously catalyzing DMO hydrogenation reaction to produce EG and EtOH. In our previous work, the cyclodextrin was introduced to modify the Cu/SiO 2 catalyst by sol-gel method, but the cyclodextrin was neutral and did not exist in the sample after calcination, so it was difficult to change the acidity of the sample surface [22]. According to previous reports, appropriately reducing the basic sites and increasing the acid sites on the surface of Cu/SiO 2 samples was beneficial to inhibit the conversion of ethanol to C3-C4OH, thereby improving the selectivity of EtOH at high temperatures [31,38]. Herein, the xB-Cu/SiO 2 catalysts were synthesized by the sol-gel method with an appropriate B/Cu molar ratio could dramatically enhance the catalytic hydrogenation activity of Cu/SiO 2 catalysts for DMO hydrogenation to EG and EtOH. The significantly alterations of Cu/SiO 2 catalysts induced by boron promoter were found to be up to the Cu dispersion, reducibility, acidity-alkalinity, and distribution of Cu + and Cu 0 species, causing the corresponding alterations in catalytic performance. The XRD and TEM characterization results of the xB-Cu/SiO 2 catalysts revealed that the addition of B species could inhibit the increase of Cu particles size and promote the dispersion of Cu species. The characterization results of EDS, H 2 -TPR, XPS, and XAES implied that the strong electronic interactions existed between the active component copper and boron promoter, affecting the distributions of copper species with different valence states. The synergetic effect between Cu + and Cu 0 species on the surface of Cu-based catalysts was extensively acknowledged in the hydrogenation of DMO [5,6,11,[39][40][41][42][43][44][45][46]. In detail, the Cu 0 sites dissociate and activate H 2 molecule, and Cu + sites adsorbed the carbonyl group. As shown in Figure 8, the TOF value and Cu + /(Cu 0 + Cu + ) intensity ratio presented a similar trend, increasing accordingly with the increase of B/Cu molar ratio (≤2). However, further increasing boron loading resulted in the opposite trend. The optimized 2B-Cu/SiO 2 catalyst afforded the highest TOF value of 3.4 h −1 , and the Cu + /(Cu 0 + Cu + ) intensity ratio reached the maximum of 69.2% when the B/Cu molar ratio was 3. These results could be attributed to the fact that the acidity and electron affinity of B 2 O 3 were stronger than that of support silica, resulting in an increase in the amount of Cu + . However, excessive boron loading formed a thin film on the Cu particles on the surface of the catalyst, leading to a reduction in exposed and accessible Cu surface, which resulted in the decrease in catalytic activity. These results likewise demonstrate that an appropriate surface Cu + and Cu 0 concentration was one of the factors that affected the catalytic performance, but it only played a secondary role. The particle sizes, Cu dispersion, Cu surface area, and metal-support interaction, etc., also had significant effects on catalytic activity. Furthermore, Figure 9 presented a convincing schematic diagram of the xB-Cu/SiO 2 catalysts with the increase of B/Cu molar ratio, based on the characterization results and previous reports. As we all know, the monometallic Cu/SiO2 catalysts showed poor catalytic stability in long-term DMO hydrogenation reactions. The influence of boron promoter on the stabilization of Cu/SiO2 catalysts had been extensively studied. Zhu et al. discovered that the introduction of B2O3 to Cu/SiO2 could greatly promote the dispersion of Cu species by inhibiting the growth of Cu particles, which could enhance catalytic stability [21]. He et al. reported that B-Cu/SiO2 catalysts exhibit excellent catalytic stability, which was mainly resulting from the strong interaction between B2O3 and surface Cu element, which maintained the suitable Cu 0 /Cu + distribution and restrained the growth of Cu particles [24]. As shown in Table S3, the change in the DCu of 2B-Cu/SiO2 catalyst was basically negligible and only dropped from 21.6% to 20.4%. On the contrary, the DCu of Cu/SiO2 catalyst dramatically decreased from 17.9% to 12.5%, which resulted from the aggregation and growth of copper particles on catalyst surface. Calculated by on the Scherrer equation, the size of copper particles for the pure Cu/SiO2 catalyst agglomerated and increased to 17.7 As we all know, the monometallic Cu/SiO2 catalysts showed poor catalytic stability in long-term DMO hydrogenation reactions. The influence of boron promoter on the stabilization of Cu/SiO2 catalysts had been extensively studied. Zhu et al. discovered that the introduction of B2O3 to Cu/SiO2 could greatly promote the dispersion of Cu species by inhibiting the growth of Cu particles, which could enhance catalytic stability [21]. He et al. reported that B-Cu/SiO2 catalysts exhibit excellent catalytic stability, which was mainly resulting from the strong interaction between B2O3 and surface Cu element, which maintained the suitable Cu 0 /Cu + distribution and restrained the growth of Cu particles [24]. As shown in Table S3, the change in the DCu of 2B-Cu/SiO2 catalyst was basically negligible and only dropped from 21.6% to 20.4%. On the contrary, the DCu of Cu/SiO2 catalyst dramatically decreased from 17.9% to 12.5%, which resulted from the aggregation and growth of copper particles on catalyst surface. Calculated by on the Scherrer equation, the size of copper particles for the pure Cu/SiO2 catalyst agglomerated and increased to 17.7 As we all know, the monometallic Cu/SiO 2 catalysts showed poor catalytic stability in long-term DMO hydrogenation reactions. The influence of boron promoter on the stabilization of Cu/SiO 2 catalysts had been extensively studied. Zhu et al. discovered that the introduction of B 2 O 3 to Cu/SiO 2 could greatly promote the dispersion of Cu species by inhibiting the growth of Cu particles, which could enhance catalytic stability [21]. He et al. reported that B-Cu/SiO 2 catalysts exhibit excellent catalytic stability, which was mainly resulting from the strong interaction between B 2 O 3 and surface Cu element, which maintained the suitable Cu 0 /Cu + distribution and restrained the growth of Cu particles [24]. As shown in Table S3, the change in the D Cu of 2B-Cu/SiO 2 catalyst was basically negligible and only dropped from 21.6% to 20.4%. On the contrary, the D Cu of Cu/SiO 2 catalyst dramatically decreased from 17.9% to 12.5%, which resulted from the aggregation and growth of copper particles on catalyst surface. Calculated by on the Scherrer equation, the size of copper particles for the pure Cu/SiO 2 catalyst agglomerated and increased to 17.7 nm from initial 10.5 nm after 150 h long-term stability test. Nevertheless, the size of copper particles for the optimal 2B-Cu/SiO 2 catalyst only grew up to 8.6 nm under the identical condition ( Figure 10). TEM characterization results of the optimal 2B-Cu/SiO 2 and pure Cu/SiO 2 catalysts after 150 h long-term stability test further indicated that the introduction of boron promoter could was conducive to suppressing the aggregation and promoting the dispersion of Cu species (Figure 11). Moreover, the Cu + /(Cu 0 + Cu + ) intensity ratio of the Cu/SiO 2 catalyst was reduced from 59.5% to 48.9%, while that of the 2B-Cu/SiO 2 catalyst changed slightly after long-term stable test (Table S3 and Figure S5). The result suggested that the destruction of the balance between the Cu + and Cu 0 activity sites could result in decrease in catalytic performance of Cu/SiO 2 catalysts, which was consistent with previously reported results [11,18]. In conclusion, the main causes of Cu/SiO 2 catalysts deactivation could be attributed to the aggregation and growth of copper particles and the decrease of surface Cu + /Cu 0 , and further proved that doping a suitable amount of B species into the Cu/SiO 2 catalysts could effectively improve the stability of the catalyst. result in decrease in catalytic performance of Cu/SiO2 catalysts, which was consistent with previously reported results [11,18]. In conclusion, the main causes of Cu/SiO2 catalysts deactivation could be attributed to the aggregation and growth of copper particles and the decrease of surface Cu + /Cu 0 , and further proved that doping a suitable amount of B species into the Cu/SiO2 catalysts could effectively improve the stability of the catalyst. It is generally accepted that DMO hydrogenation reaction is a multi-step reaction. Firstly, DMO reacts with hydrogen to form MG, and then MG reacts with hydrogen to obtain EG. Finally, excessive hydrogenation of EG produces EtOH. Moreover, 1,2-BDO and 1,2-PDO may be derived from the dehydration reaction of EG with EtOH and methanol (MeOH), respectively. According to the previous reports, the acidic and basic sites on the surface of the catalyst are conducive to generate EtOH and C3−C4 diols (1,2-BDO and 1,2-PDO), respectively. From the NH3-TPD characterization results, the introduction of B species to xB-Cu/SiO2 catalysts could generate more acidic sites, thereby affecting the distribution of products, reducing the selectivity of 1,2-BDO and 1,2-PDO and improving the selectivity of EtOH at high temperature.
Conclusions
This work presents that the boron species promoted Cu/SiO2 bifunctional catalysts by sol-gel method exhibit remarkable catalytic activity and longtime stability for DMO result in decrease in catalytic performance of Cu/SiO2 catalysts, which was consistent with previously reported results [11,18]. In conclusion, the main causes of Cu/SiO2 catalysts deactivation could be attributed to the aggregation and growth of copper particles and the decrease of surface Cu + /Cu 0 , and further proved that doping a suitable amount of B species into the Cu/SiO2 catalysts could effectively improve the stability of the catalyst. It is generally accepted that DMO hydrogenation reaction is a multi-step reaction. Firstly, DMO reacts with hydrogen to form MG, and then MG reacts with hydrogen to obtain EG. Finally, excessive hydrogenation of EG produces EtOH. Moreover, 1,2-BDO and 1,2-PDO may be derived from the dehydration reaction of EG with EtOH and methanol (MeOH), respectively. According to the previous reports, the acidic and basic sites on the surface of the catalyst are conducive to generate EtOH and C3−C4 diols (1,2-BDO and 1,2-PDO), respectively. From the NH3-TPD characterization results, the introduction of B species to xB-Cu/SiO2 catalysts could generate more acidic sites, thereby affecting the distribution of products, reducing the selectivity of 1,2-BDO and 1,2-PDO and improving the selectivity of EtOH at high temperature.
Conclusions
This work presents that the boron species promoted Cu/SiO2 bifunctional catalysts by sol-gel method exhibit remarkable catalytic activity and longtime stability for DMO It is generally accepted that DMO hydrogenation reaction is a multi-step reaction. Firstly, DMO reacts with hydrogen to form MG, and then MG reacts with hydrogen to obtain EG. Finally, excessive hydrogenation of EG produces EtOH. Moreover, 1,2-BDO and 1,2-PDO may be derived from the dehydration reaction of EG with EtOH and methanol (MeOH), respectively. According to the previous reports, the acidic and basic sites on the surface of the catalyst are conducive to generate EtOH and C3−C4 diols (1,2-BDO and 1,2-PDO), respectively. From the NH 3 -TPD characterization results, the introduction of B species to xB-Cu/SiO 2 catalysts could generate more acidic sites, thereby affecting the distribution of products, reducing the selectivity of 1,2-BDO and 1,2-PDO and improving the selectivity of EtOH at high temperature.
Conclusions
This work presents that the boron species promoted Cu/SiO 2 bifunctional catalysts by sol-gel method exhibit remarkable catalytic activity and longtime stability for DMO hydrogenation to EG and EtOH. The target product of EG or EtOH could be regulated by facilely changing the reaction temperature. The optimal 2B-Cu/SiO 2 catalyst exhibited the best low temperature catalytic activity and longtime stability among the xB-Cu/SiO 2 catalysts. Adding the suitable B/Cu molar ratio was beneficial to reduce the size of copper particles, increase the Cu dispersion, enhance the metal-support interaction, regulate the surface copper species distribution, and then improve the synergetic effect of Cu + and Cu 0 species. Meanwhile, the introduction of boron promoter could enhance the surface acidity of the Cu/SiO 2 catalysts to a certain extent, thereby improving the Sel. EtOH of the catalyst at high temperature. Besides, the long-term stability of 150 h test showed that the sintering of copper particles and reduction of the surface Cu + /Cu 0 ratio were the major causes of deactivation of the Cu/SiO 2 catalyst. However, we proved that doping a suitable amount of boron promoter into the pure Cu/SiO 2 catalysts could restrain the increase of surface copper particles size, stabilize the surface Cu + /Cu 0 ratio and effectively improve the stability of the catalyst. The current findings reflect that the bifunctional boron promoter may provide potential promising applications in other hydrogenation of carbon-oxygen bonds.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 11,954.6 | 2021-11-29T00:00:00.000 | [
"Chemistry",
"Materials Science"
] |
Optimization of Chitosan-Carboxymethyl Chitosan Membrane Modification with PVA to Increase Creatinine and Urea Permeation Efficiency
) membrane has been successfully fabricated and used as a dialysis membrane. This research aims to examine the manufacturing process, characterization
Introduction
High levels of urea and creatinine in the blood indicate that the kidneys are failing and require an artificial kidney/dialyzer [1].The most important component in dialysis equipment is the membrane, which functions to control the permeation of urea and creatinine from the blood out of the body without losing important blood proteins such as albumin and fibrinogen.The dialysis membrane must be inert, porous, high contact area, selective, permeative, do not adsorb protein, and biocompatible, and the pore size must match the size of the waste permeate [2,3].
In general, dialysis membranes are made from polymers in the form of hollow fiber.This type of membrane has high competence in separation systems because of its high area/unit volume, flexibility, inertness, and low operational costs [4].Some natural and synthetic polymers such as cellulose, chitosan (CS), alginate, polyethersulfone (PES), polysulfone (PSf), and cellulose acetate (CA) have been used as a membrane material [5,6,7,8].Chitosan membranes are starting to be developed for dialysis because they are compatible, pH stable, mechanical strength, and chemically inert.However, pure chitosan membranes are dense and less permeative, so many species are trapped on the membrane surface, which results in membrane fouling [9].This results in a decrease in permeation ability.
A series of studies have been carried out to increase the porosity and permeability of chitosan through modification, both through functional groups and membrane structure.Lusiana et al. [10] grafted heparin to increase the number of active sites in chitosan.Several researchers have carried out grafting to increase the active side of chitosan, including using citric acid, carboxymethyl cellulose, and graphene oxide [11,12,13].In addition, membrane porosity can be increased through alloying using polyvinyl alcohol (PVA), polyvinylidene fluoride (PVDF), and polyvinyl pyrrolidone (PVP) [3,14,15].In this research, chitosan was used as the main membrane material.To improve membrane performance, modifications were made using carboxymethyl chitosan, which functions to increase the number of active sites through the formation of electrostatic interactions, and the alloy uses polyvinyl alcohol (PVA) to increase membrane porosity [16].
CS-CMC Membrane Modification
Chitosan (0.5, 1, and 2 g) was each dissolved in 100 mL acetic acid 5% with a stirring rate of 300 rpm for 24 h.CS solution (1% w/v) was added to the CMC solution.The mixed solution (15 mL) was poured into a petri dish, and the solvent was evaporated at 60°C.Then, 4 M NaOH solution was added until the membrane was separated from the petri dish and washed using distilled water until neutral and dried.
CS-CMC/PVA Membrane Modification
A 1 g of PVA was dissolved in 100 mL distilled water at 60°C with a stirring rate of 300 rpm for 2 h.Then, the CS-CMC and PVA solutions were heated at 60°C with a stirring rate of 300 rpm for 24 h, as in Table 1.The mixed solution (15 mL) was poured into a petri dish, and the solvent was evaporated at 60°C.Then, 4 M NaOH solution was added until the membrane was separated from the petri dish and washed using distilled water until neutral and dried.
Membrane Characterization
Membrane chemical characterization includes functional groups using FTIR (Fourier Transform-Infra Red, Agilent Cary 630), crystallinity using XRD (X-ray Diffraction, Rigaku Miniflex 600), membrane surface morphology using FESEM (Field Emission Scanning Electron Microscopy, Thermo Scientific Quattro S), and creatinine and urea permeation tests using a UV-Vis spectrophotometer (PG T60).The resulting membranes were also characterized based on physical properties, including mass and thickness, porosity, swelling, water uptake, hydrophilicity, pH resistance, and biodegradation.
Mass and Thickness
The membrane mass of each composition was weighed using an OHAUS analytical balance.Membrane thickness was measured using a Mitutoyo thickness gauge.Membrane mass and thickness measurements were carried out three times for each membrane.
Porosity
All dry membranes were weighed, soaked in 10 mL of distilled water for 24 h, dried and weighed using an analytical balance.The porosity value of each membrane was calculated using Equation (1).
where v is the membrane volume, and ρ is the density of water.
Swelling
The dry membrane diameter was measured using a ruler with five predetermined points.Then, the membrane was soaked in 10 mL for 24 hours, and the diameter was measured again.The value of the degree of expansion for each membrane was determined using Equation (2).
Water Uptake
All dry membranes were weighed and then soaked in distilled water (20 mL) at room temperature for 1-6 h.Every 1 h, the membrane was removed, wiped with tissue, and weighed.The water absorption percentage was calculated using Equation (3).
Hydrophilicity
All dry membranes with flat surfaces were dropped with a drop of water from the surface of the membrane.Then, the contact angle was determined based on the resulting image to calculate the hydrophilicity of the membrane.
Resistance to pH
The resistance of the membrane to pH is determined by immersing it in various types of pH solutions, namely acidic, neutral, and basic.All dry membranes were weighed.The membrane was soaked in 10 mL of various pH solutions, including 3, 5, 7, 9, and 11, for 24 h.The mass of the membrane immersed in solutions of various pH was weighed.The resistance of the membrane to pH was determined using Equation ( 4).
Biodegradation
All dry membranes were weighed and then planted in soil that had been given fertilizer for 8 weeks.Every week, the membrane mass was weighed to determine the decrease in membrane mass.Membrane biodegradation was calculated using Equation (5).
Membrane Permeability Test
The membrane permeation test on urea and creatinine solutions was carried out using transport equipment consisting of two cylindrical chambers of glass connected and in the middle placed the membrane to be analyzed.The source phase was filled with 1.5 mg/dL (creatinine solution) or 25 mg/dL (urea solution), and the acceptor phase was filled with phosphate buffer solution (PBS) with a volume of 45 mL each.The permeation process lasted for 6 h with 2 mL of solution taken from both the source and acceptor phases every hour, then mixed with 2 mL of complexing reagent, and the absorbance was measured using a UV-Vis spectrophotometer at a wavelength of 486 nm for the creatinine-picric acid complex and 425 nm for the urea-DMAB complex.The clearance value was used to compare urea or creatinine levels at the initial time with the final time.
Membrane Characterization
The membranes were assessed through various analyses.Structural analysis and interaction modes were characterized using FTIR.XRD analysis was conducted to determine the membrane's phase, while FESEM analysis provided insights into its surface morphology.Additionally, the physicochemical properties of the membrane, including mass, thickness, porosity, degree of swelling, water uptake, and hydrophilicity, were also evaluated.
. Chemical Structure and Binding Mode Analysis
Figure 1a shows the FTIR spectra of chitosan (CS), carboxymethyl chitosan (CMC), chitosan-carboxymethyl chitosan (CS-CMC), and CCP at various dope compositions.The spectral characteristics of chitosan are specifically shown in the 1587 and 1647 cm -1 areas, which are the primary vibrations of N-H (amide I) and C=O (amide II); the C-N stretch in chitosan is observed in 1319 cm -1 [17].The sharp peak of CMC in 1587 cm -1 indicates the presence of N-H (amide I) primary vibrations [18].The characteristic peak of polyvinyl alcohol is evident at an absorption of 849 cm⁻¹, which corresponds to the presence of C-C stretching [19].Absorptions found in the range of 3362, 3302, 2877, 1148, and 1028 cm⁻¹ across all membranes are attributed to O-H stretching, N-H stretching, C-O-C asymmetric stretching, and C-O stretching, respectively [20].The modification of chitosan (CS) with carboxymethyl cellulose (CMC) does not result in any new peaks, indicating that the mixing reaction between CS and CMC involves only electrostatic interactions rather than the formation of new chemical bonds [21].
Figure 1b shows the appearance of a new peak in the 849 cm -1 , which is related to the C-C strain due to the addition of PVA.This indicates the presence of strong miscibility in the CCP membrane.The peaks of C-O and C-O-C strain in the 1028 and 1148 cm -1 , respectively, experienced a shift.Figure 1c shows the N-H bending shift (1587 cm -1 ) and C=O stretching (1647 cm -1 ), along with the mixing reaction between chitosan with CMC and PVA.This intensity shift indicates an electrostatic interaction that causes a change in characteristics.
In addition, a decrease in intensity occurs in the 1647 cm -1 (C=O strain) as the concentration of CMC added increases.Figure 1d shows that the N-H (3362 cm -1 ) and O-H (3302 cm -1 ) strain peaks on the CS-CMC and CCP membranes experienced widening and shifting towards higher wavenumbers.The peak in 2877 cm -1 is the C-H stretching, which is found in all membranes.The interaction between chitosan, CMC, and PVA is illustrated in Figure 2.
XRD Analysis of Membrane
Crystallinity analysis using XRD was carried out on all membranes presented in Figure 3. Crystal peaks at 2θ of 17.38° and 18.5° were found in the XRD pattern of pure chitosan.The broad peaks observed in the XRD pattern around 10° of 2θ indicate the average intermolecular distance of the amorphous part, and small peaks are centered at around 20° of 2θ [22].The new crystal peak at 2θ is 39.7° (which indicates the PVA peak) found in the CCP membrane pattern [23].The CCP membrane pattern consists of characteristic reflections of CS, CMC, and PVA.The peak intensity decreases and becomes broader, implying that CS, CMC, and PVA are complex and can mix well and form a homogeneous mixture.This indicates that pure chitosan, CS-CMC, and CCP are amorphous.
Surface Structure of Membrane
According to Figure 4a, the pure chitosan membrane contains fewer pores and exhibits non-uniformity.In contrast, the CS-CMC membrane (Figure 4b) shows more pores distributed across the membrane body, resulting in a smoother and denser surface.The incorporation of PVA into the CCP membrane increases the asymmetry of the pores on the membrane surface [24].Thus, the CCP membrane (Figure 4c-e) becomes more hydrophilic and has low fouling, high mechanical strength, and strong pH stability.This shows that modifying chitosan with CMC and PVA enhances the membrane's porosity.
Physicochemical Properties of Membranes
The mass and thickness values of the membrane indicate the uniformity of its composition [25].The more composition that makes up the membrane, the higher the mass and thickness values of the membrane.The mass and thickness of pure chitosan and modified membranes are presented in Table 2. Membrane porosity is a crucial factor in determining permeation performance and water flux during dialysis [26].Greater membrane porosity results in improved permeation and water flux.The pores create spaces that can be filled by water, so larger membrane pores provide more opportunities for water absorption [27].
The porosity of pure chitosan and modified membranes is shown in Table 3.The data in Table 3 indicates that membrane porosity increases with modification.The significant increase in porosity demonstrates that CMC and PVA integrate into the chitosan structure, forming pores throughout the membrane body.This observation aligns with Salmasi et al. [28], who stated that incorporating highly hydrophilic CMC and PVA into the membrane structure enhances membrane porosity.
The degree of swelling indicates the membrane's ability to expand within a solution system.Water-filled cavities within the membrane influence its expansion ability as the membrane size increases [29].The swelling of pure chitosan and modified membranes are presented in Table 3.Based on Table 3, it can be seen that the swelling of the membrane has increased with modification.The porous structure of the membrane body due to the inclusion of CMC and PVA in the membrane body affects the ability of the membrane to adsorb water in the membrane environment [30].The water absorption capacity of pure chitosan and modified membranes is presented in Table 3.The water absorption value increases with increasing concentrations of CMC and PVA.The highest water absorption capacity was obtained on the CCP3 membrane, which was equal to 11.97%.The water uptake capacity value can be influenced by the active groups and the distribution of pores on the membrane surface.The carboxyl group (-COOH) in CMC facilitates interaction between the membrane and water molecules [31].
The contact angles of pure chitosan and modified membranes are shown in Table 3.According to Table 3, the modification reduces the membrane contact angle.This decrease becomes more pronounced with increasing amounts of CMC, resulting in the membrane becoming more hydrophilic [32,33].
Resistance to pH
The decrease in mass resistance of pure chitosan and modified membranes under various pH conditions is presented in Figure 5a.Chitosan-based membranes are damaged in acidic conditions (pH 3 and 5), except for those modified with PVA, which remain resilient against acidity.In neutral to alkaline environments (pH 7-11), all membranes exhibit only a slight reduction in mass, indicating their stability in these conditions.This aligns with the research by Ma et al. [34] that neutral and alkaline solutions that enter the membrane will cause deprotonation of the polymer at the amine group.As a result, the amine group acquires a negative charge, becoming more hydrophobic.
Biodegradation
Membrane biodegradation is the main benchmark for environmentally friendly membranes [35].The degree of membrane decomposition can be influenced by the activity of soil microorganisms.Figure 5b shows the reduction in degradation mass for pure chitosan and modified membranes.The degradation rate of pure chitosan is faster compared to modified chitosan membranes.This aligns with research by Zong et al. [36] that chitosan is a biopolymer compound that can be degraded quickly.Meanwhile, modifications to chitosan make the membrane structure different from the original.These structural changes require microorganisms to spend more time recognizing and adapting their enzymes to degrade the membrane, which increases the time required for decomposition.
Membrane Permeability Test
The most important property in dialysis membranes is permeability.The results of creatinine and urea membrane permeation are shown in Table 4. Creatinine permeation increased by 20-62%, and urea permeation increased by 17-65% as the concentration of CMC in the membrane body increased.This enhancement in permeation demonstrates that the additional functional groups of CMC effectively recognize and bind urea and creatinine, facilitating their transport and subsequent release on the permeate side.In this study, the urea permeation value exceeded that reported in previous research conducted by Lusiana et al. [29].Specifically, the urea permeation results using a sulfonated PEG/PVDF membrane were only 39.66%.This underscores the superior performance of the present investigation.The repeatability test of the membrane used in the permeation process was conducted to evaluate the performance of membranes previously utilized.The membrane permeation process was repeated three times.Data on the repeatability of membranes used in the creatinine and urea permeation process are shown in Figure 6.An FTIR analysis was performed to assess the repeatability of the membrane during the permeation process, providing insight into functional changes in pure and modified chitosan membranes.These functional changes, observed after repeatability testing, are shown in Figures 7 and 8. Figure 6 shows a decline in the permeation repeatability of the used membrane.This is likely due to the fouling process, where the target compounds (creatinine in Figure 6a and urea in Figure 6b) become trapped on the membrane surface.
The spectra presented in Figures 7 and 8 reveal differences between the membrane spectra before after use in the repeatability test.A significant shift occurs in the O-H and N-H group spectra, in the 3200-3400 cm⁻¹ range.This shift is likely due to electrostatic interactions between urea and creatinine and the hydroxyl or amine groups.These shifts also suggest hydrogen bonding between the membrane, creatinine, and urea [26].
Figure 9 shows that the CCP3 membrane pores become fewer after the creatinine and urea permeation process.This indicates that fouling occurred during the permeation process.Target compounds (creatinine and urea) are trapped on the membrane surface of the filtration system, and the shelf life of the membrane [37].
Conclusion
Membranes made from chitosan with a mixture of CMC and PVA have been successfully prepared and physically and chemically characterized.Based on the SEM image, the membrane becomes more porous with modification.Modification with CMC and PVA can increase the porosity, swelling, water uptake, and hydrophilicity of the membrane.However, modification reduces the membrane degradability.CMC and PVA enter the chitosan structure homogeneously, not through chemical reactions but through electrostatic interactions.Membrane permeation increased with increasing concentration of added CMC.Urea permeation increased by 17-65%, and creatinine permeation increased by 20-64%.The membrane can be consistently used repeatedly but experiences a decrease in permeation of 10% from the first use.
Figure 5 .
Figure 5. (a) decreased mass of pure chitosan and modified membranes under various pH conditions (b) degradation mass of pure chitosan and modified membranes
Figure 6 .
Figure 6.Test the repeatability of membrane used in the permeation process of (a) creatinine and (b) urea.R0: initial state of membrane, R1: first repetition, R2: second repetition, R3: third repetition
Figure 9 .
Figure 9. CCP3 membrane morphology after the permeation process of (a) creatinine and (b) urea
Table 2 .
Mass and thickness of pure chitosan and modified membranes
Table 3 .
Porosity, swelling, water uptake, and contact angle of pure chitosan and modified membranes
Table 4 .
Creatinine and urea permeation | 3,943.8 | 2024-04-23T00:00:00.000 | [
"Medicine",
"Materials Science",
"Chemistry"
] |
The Understanding of Security in the Postmodern Society
The rapid development of science and technology has led to the emergence of a crisis in society. Science "pushes" religion, but does not offer a new moral code in its place. The definitions of "security" are almost as many and as controversial as postmodernism. For the purposes of this study, however, it will be sufficient to define security as "the functional state of a system that provides for the neutralization and counteraction of external and internal factors affecting or potentially damaging the system." This scientific article presents a study, which seeks to answer the question why the paradigm "security" is so important in the postmodern society, what are the roots of its influence and meaning, and to seek conclusions and guidelines for its increase. The main features of the postmodern society, some prerequisites and their implications, as well as their impact on the security of all its levels are examined. The subject of the study is the postmodern society with its specific features, and the subject - the security as an indissoluble element of the social relations in the contemporary world.
INTRODUCTION
It is accepted that modern times begin with the bourgeois revolutions in France and England (16th century) and finish after the end of the Second World War (the middle of the 20th century). The main processes that take place during these times appear also to be its characteristics that radically distinguish it from the previous ones. More significant of them are as follows: The Age of discovery that extremely widens the views of the man of the time, and beside that it confirms some major thesis of the then science, namely the idea that Earth is not flat and that it is not the centre of the Universe, etc.
The economical relationships and interests coming to the fore, which, in the course of the times remove completely the previous ones, built on the grounds of aristocracy. Education and Social Sciences, 15-17 October 2018-Istanbul, Turkey Science development that leads to improvement and elaboration of new production technologies.
Proceedings of ADVED 2018-4th International Conference on Advances in
The gradual reduction of the influence of religions and particularly of Christianity upon social conscience.
The listed characteristics are interrelated through positive reverse relationscience development creates new technologies that, in their turn, change the economic environment and the social relations and remove religious beliefs. On the other hand, the better and more precise technologies as well as the social trust in science establish environment for its faster development. This also leads to extremes like, for example, the so called Laplace's demon according to which if we knew the laws of universe and its initial status, we could predict its behaviour for a unspecified period of time, or its status at any moment.
The rapid progress of science and technologies leads to the emergence of crisis in society. Science "pushes aside" religion but does not suggest a new moral code in exchange. Even the contrary, the wide application of new technologies in the times of both world wars and this mainly for making weapons of mass destruction shows that scientific results application without moral restraints leads to creepy consequences. The lack of "goals" and "reference points" in the society's moral grounds of societal consciousness creates preconditions for loss of the system of values in practically several generations. Unfortunately, this process has not finished even today. Probably just this is the reason for the restoration of radical Islamism. Number of movements and philosophies arise in the years after the Second World War, which try to fill this vacuum in social relations. Typical examples for that are the hippie movement in the USA, "The Angry Young Men" in England and many more. Anyway they did not succeed in establishing clear and acceptable social platform, because of which they disappear and leave only blurred trace.
All this is important because it creates the preconditions modern society builds up on.
The term "postmodernism" means what comes after modernism. As some researchers say, the affix "post" is very convenient because it shows that modernism evolves directly from modernism and the transition from one to the other has evolutionary rather than revolutionary nature. The term "postmodernism" (2017a) is used for the first time yet in 1917 in the works of Rudolf Pannwitz, German writer and philosopher-essayist. Later the term is used by number of autors with various meanings, but mainly to present modern experimental streams in art, which do not follow the ideas of "modernism" specially in abstract art. Actually, most often what is understood behind the term "postmodernism" in art is returning to realism and creation of works (artistic, literature, etc.) accessible for understanding by the wide masses or, in other wordsdenial of modernism. The term gains real popularity with the publication of the book " The Language of Post-Modern Architecture" by the American architect Charles Jencks in 1977. Later the meaning of the term "postmodernism" extends and range over philosophy, sociology and practically all spheres of modern science and art. This happens with the occurrence of a stream in the French philosophy (Althusser, Derrida, Lyotard, etc.). Actually, at the moment there are quite a number of definitions of "postmodernism", used and founded by various schools in science and art, and great part of them are completely mutually exclusive. Here are some examples: Jürgen Habermas, Daniel Bell and Zygmunt Bauman define postmodernism as result of politics and ideology of neoconservatism, characterized by aesthetic eclecticism, turning objects for consumption into fetishes and other specific features of postinductrial society; Umberto Eco interprets postmodernism in a broad sense, as mechanism for change of one cultural period with another; Hassan, Welch, Lyotard consider that postmodernism is a common cultural denominator of the second half of the 20th century, unique period, where the world is reviewed as chaos; According to Letten and Suleiman postmodernism does not exist, it could be reviewed as giving a new meaning to the cultural postulates of modernism, but the independent postmodern reaction is a myth; Küng and Tarnaspostmodernism is a period that comes to change the European "New Age". The latter is characterized by infinite trust in the power of mind and progress. The crash of the value system has happened during the First World War. As a result, the Euro-centric paradigm is exchanged for a global polycentrism, and the infinite trust in mind gives up its place to the "interpreting thinking".
Naturally, there are number of critics towards the thesis of postmodern society. Most of them are directed to the statement that postmodernism does not exist and this is rather late modernism (Chomsky, Dennett, Solzhenitsyn, etc.) It is obvious that modern world is not the same as it was during the first half of the 20the century. Public, social and economic relations undergo changes, transform in new and different things, which most of the Proceedings of ADVED 2018-4th International Conference on Advances in Education and Social Sciences, 15-17 October 2018-Istanbul, Turkey time are unknown from the history of humankind. This shows we really are in a new period. Whether we'd call it postmodernism or another way is not an essential issue in this case. It is more important to know its characteristics and to be prepared for the coming changes. All authors, who develop the thesis about postmodernism and the society in this period, unite around one idea. The chaos in public, social, political and economic relations, the crash of the old value systems and the lack of new ones to exchange them are features showing that actually this is a transitional period, transitional process while the society (as a system) transfers form one status to another. Unfortunately, the new status and the new relations are not outlined yet and practically cannot be defined and clearly pointed. Historical references point that these transitional periods are characterized by long continuance and very often by impetuous social events, wars and bloodshed.
Logically, as extension of the reasoning above, the term "security" comes. And here, almost all authors examining postmodernism, are unanimous about security being one of the leading paradigms of modern society. Its significance grows exponentially I would say, during the last years and this reviewed mainly from practical point of view rather than in theoretical respect.
The definitions of the term "security" are almost as many and controversial as the ones of postmodernism. But it would be sufficient, for the goals of the present study, to define security as "given system's functional status that ensures its neutralization and counteraction to external and internal factors, which influence or able to impose destructive influence upon the system" (2017b).
The goal of the present study is to look for an answer to the question why the paradigm "security" is so important in the postmodern society, what are the roots of its influence and significance and respectively, to look for conclusions and directions for its enhancement. The main characteristics of the postmodern society, some preconditions and their consequences as well as their impact upon security at all its levels would be reviewed.
The study's subject is the postmodern society with its specific features and its objectthe security as an inseverable element of social relations in the modern world.
POSTMODERN SOCIETY
The defining of the term "postmodern society" is quite heterogeneous and controversial according various researchers. This is the reason not to focus on particular definitions but to focus our attention upon its characteristics. Similar to the short description of the modern society, actually, the basic processes going during the relevant period appear to be also its main characteristics. On this grounds the main processes that define the difference of postmodernism in regard to the previous periods could be systematized in the following groups: globalization; loss of legitimacy of "the state"; technological revolutions; the crash of big ideologies; the end of the individual "role" models; the end or the transformation of "communities"; the new generations (Y, Z, α).
It is possible to derive number of other processes and characteristics of postmodern society but the author considers that the listed above are the more significant and most of the rest could be reviewed as subsystems or elements. I'd point as an example "the death of God". Actually, this could be reviewed as element of the crash of the big ideologies.
As it was also mentioned above, the listed characteristics are interrelated with a whole system of straight and reverse relations, which makes the understanding of postmodern society extremely complicated and often evokes researchers' feeling that all in the modern world is chaotic, deprived of any logic and order.
The main part of the characteristics of postmodern society mentioned above, are taken from N. Slatiniski (Slatinski, 2014a), Z. Bauman (2017c) and other sources, and this compilation as well as the adding of the last element are suggestions of the author of the present study. There is no pretension for comprehensiveness and full inclusion of all characteristics of postmodern society, still more; practically each researcher elaborates his own system of characteristics and criteria for their study and the section between these multitudes is almost empty.
Let us review briefly each of the listed characteristics of postmodern society and some major relations between them.
GLOBALIZATION
"Globalization is the process of the enhancement of economic, social, technical, political and cultural interrelations and relationships between the individual states, organizations and people. It is connected with Proceedings of ADVED 2018-4th International Conference on Advances in Education and Social Sciences, 15-17 October 2018-Istanbul, Turkey global distribution and mutual penetration of ideas, capitals, technologies and elements of culture.
Although contacts between comparatively remote regions exist yet in Antiquity, they acquire global scope hardly after the Age of Discovery in the 14th -16th century. The rapid development of transport and communication equipment in the 20th century leads to rapid intensification of the process of globalization by the expansion of international trade and transfer of capitals and the establishment of the first political institutions of global scale.
Globalization is a process with controversial effect. There are various points of view and positions regarding the benefits and the damages of it, which, in many cases, are influenced by more general ideological views. While globalization supporters review it as factor for economic progress on a world scale, its opponents consider it is cause for economic and ecologic damages" (2017d).
As far as the present study is focused on security, we will focus our attention on this process's negative effects. As Zygmunt Bauman states in an interview for BBCthe biggest trouble is that globalization we are dealing with today is of strictly negative nature. It is grounded on ruining barriers, allowing the globalization of capital, the commodities movement, the information, crimes and terrorism, but not political and legal institutions, which grounds is national independence. The means available now for protecting law and citizens from violation, are obviously insufficient for gaining command of the global forces, which in essence are over-territorial. The events of September 11th and the bomb attacks in Madrid and London showed that these means are, roughly said, useless (2017c). This could briefly be explained the following waynational state, no matter how big and powerful is, does not have power and means to resist global processes that seize the whole world, and global means for fighting the process's negative effects do not exist yet. Actually, the latter is the connection between this process and the state's loss of legitimacy.
The freedom of capitals moving allows moving productions in zones in the world, where economic conditions are more favourable for the relevant companies. This, in its turn, sharply increases the risk of unemployment and long-term lack of employment of people. This process supports also the processes of crime distribution, which, as always, is extremely flexible and able of taking advantage of even the smallest lapses in the security systems.
The freedom of information dissemination, beside all positive effects, allows easy communication of criminal and terrorist groups all over the world. More, the inability for the ruling people to manipulate the information flows contributes to the loss of effectiveness of ideological actions and treatments. However, just the opposite process is observed in the recent yearsnamely substitution of information, creation of fake news that could direct whole world's public opinion in certain directions for achieving the wanted results. Yet, this process difficultly affects intelligent and critical people, who tend to check information sources and to review situations from all sides. One more aspect of information's global dissemination is the terrorism. Well-known truth is that the power of terrorist acts does not come from the number of victims, sufferers and damages but from swamping the world with information about them and spreading the feeling of fear among the people in the affected and threatened states.
LOSS OF LEGITIMACY OF "THE STATE"
According to (2017e) legitimacy is a concept for the grounds of the political regime and its representatives that make citizens accept their actions and orders as lawful and true. Legitimacy does not ensue from the official laws and acts, but from the public approval and acceptability, which state governance raises in the governed ones. It is result of not regulated norms and values that are subject to the consent and the approval of most of the citizens.
As it was mention in the previous item, one of the main reasons for the loss of legitimacy of authority is the inability of national state to fight the global processes and threats. The events from the last years, mainly the activity of terrorism worldwide, showed that even a union of several states (the European Union) does not have enough powers to fight these threats.
However, this is not the most important element of the studied process. In (2017c) Bauman says "instead of doing their duties for protecting the citizens against the insecurity and the resulting fear, governments call to higher flexibility on the labour market and all other spheres regulated by the market forces. And this means more insecurity. The thing they are calling us to is not reduction but increase of risk. One of the consequences of the state withdrawing is legitimacy crisis of state authority. This authority required subjection, discipline and respect for the law, promising the citizens security and just life. But these promises have been neglected one after the other, including free education, basic medical aid, pension insuring and the basic aid for the unemployed. The state is with tied hands; it itself is left in the hands of market powers. If Education and Social Sciences, 15-17 October 2018-Istanbul, Turkey it dares to oppose the market forces, the capital would flow to a place, where it could grow easily and comfortably. Then the nation would run into the scourge of unemployment and poverty". Coming from the definition presented in the beginning of the present item, the described state's behaviour evokes the dissatisfaction of its citizens, and from there loss of legitimacy based on the public approval.
TECHNOLOGICAL REVOLUTIONS
According to (2017f) the term "revolution" is defined as radical, deep, fundamental, quality change in the development of the society, nature or knowledge, related to abrupt disconnection of the relations with the previous status of the relevant object. It is not by chance that the name of this item is in plural.
Historically, technological revolutions begin yet in the end of the 19th and the beginning of the 20th century with the progress of the steam machine. According to us the significance of that invention has two very important aspects: Humankind, for the first time in its history, begins to use source of energy, which is practically besides natural forces; The progress of the steam machine creates a new scientific field -theory of automatic management, which later turns into base for the development of technical systems and particularly computers.
From this moment on inventions and technological developments start almost one right after the other, which changes considerably not only scientific and technological world but also economic processes and then social relations. As Z. Bauman says in (Bauman, 2003a) "the novelty of our times is in the fact that the periods between compressed and accelerated changes called "revolutions" are not periods of routine any more. The changes themselves are not short periods, separated by long eras of unstable way of life, which makes long-term planning, prognosis and building of "life projects" possible any more. Today we live in a status of permanent revolution. Revolution became form of life of modern society".
Why technological revolutions are so important that they are brought out as one of the major characteristics of postmodern society? First, they created the environment (the ether), where the process of globalization is possible, through the progress of transport and communications. Not less important is the labour productivity increase, which, in its turn, allowed the world gradually to pass to the so called "postindustrial society ", although this process is in the bud to this moment and the states that comply with the definition for postindustrial have rather moved their production capacities to third countries. Next, the technological novelties reaching each separate individual created environment for change of people's way of thinking, which we'd study in details in the section about the new generations.
It may summarized be said that technologies penetrated impetuously in all fields of life of the independent individual as well as the one of the society in general. Public and individual consciousness is definitely unprepared for these processes and this causes number of conflicts in moral, educational and purely worldly aspect.
The crash of big ideologies
In (2017c) N. Slatinski emphasizes that in modern society "the state is Ideology but something more, it was society of ideologies". He uses the term "meta-narratives", which means big stories, big scenarios, big explanatory diagrams. Again there, they are systematized as follows: ideology (Marxism, Liberalism, Fascism, Anarchism, etc.); religion (Christianity, Islam, Buddhism, etc.); history; science; art; democracy; security; freedom and even Society.
The French philosopher and researcher of postmodernism Lyotard says "If we simplify things to the utmost, we may accept that "postmodern" is mistrust in the meta-narratives" (Liotar, 1996a). Again in (2017c) "According to that thesis postmodernism is mistrust for, disappointment of and critics, crisis and erosion of meta-narratives". The main reason for these processes is the crash of the big ideologies of the 20th century (Fascism, Communism, etc.). Even Democracy more and more often is subject to critics and causes disappointment in individual persons as well as in bigger social groups. This, in its turn, leads to attempts for restoration of old laws and order in some countries in the world.
Big stories are removed by small ones and the latter "do not pretend for universality, truth, rationality, security and are always situational, temporary, spontaneous, cover local events, not large-scale global concepts" (2017c).
One of the consequences of these processes as well as of the communities' transformation, which would be studied further on, is the loss of representativeness of the political objects in society. Parties' role has always Proceedings of ADVED 2018-4th International Conference on Advances in Education and Social Sciences, 15-17 October 2018-Istanbul, Turkey been to represent and defend the interests of a particular public stratum. The lack of ideological grounds as well as the merging of strata deprived these objects of their core and tore the connections of political elite with ordinary people, their main problems and worries. Parties and political objects coming to life in the recent years are very often named "people's", i.e. representing the interest of almost whole nation, or bear names that are not engaged with certain strata and elements of society (GERB, DPS, etc.). Also typical is the mixing of left and right ideas in the political platforms of nowadays parties, which clearly shows the lack of purposefulness and public representativeness of these objects. Thus, nowadays elections turn from democratic process directed towards the minds and hearts of people into competition of media manifestations, populist slogans and unfeasible promises.
The end of individual "role" models "Role" models are typical, above all, for the pre-modern society. Actually, by the term "role model" we will understand the role and place of the separate individual in the society.
Not so long ago, a person's role in society used to be defined not by his capabilities and potential but by the place and environment, he was born in. The farmer's son used to become farmer, blacksmith's sonblacksmith, bar keeper's sonbar keeper. This predetermination was considered normal status quo. Actually, the role change was not completely impossible, but the social necessary to be paid for such a change was excessively high and unachievable for the most of the people. The situation with women was slightly different, but generally, social-role model was in power for them, too. Farmer's daughter used to marry a farmer, etc. The role acquired by birth used to accompany the person through his whole life and used to pass to his children.
The modern society's role in this model is that number of new professions emerged during this period (miners, industrial workers, technicians, etc.), which gave chance for many people to try to escape from the acquired by birth roles. However, that does not destroy the model as a whole. The worker's son didn't have chance to become engineer because of the very expensive education and number of other factors. The model continued functioning at full, there was only slight change in roles.
Economic crises as phenomenon of industrial society, played special role in the studied model, by actually, putting most of the people in lower social strata and only single ones succeeded on the grounds of speculation or luck to earn and to raise their status. Typical example for that is the Great depression in the USA in the 30s of the 20th century.
This model is completely destroyed in postmodern society. The main reason for that is the reduction of the value of education and its practical accessibility for everyone who wishes to study. This way each one gets the freedom to choose his role in society depending on his capabilities and potential. Actually, the change of social role has its price but it is considerably lower and accessible for many people, especially in the developed world. The more important is that no one feels bound to his place in the society and could change it at any moment paying relevantly the necessary social price. Actually, this is one of the positive characteristics of postmodernism as far as it gives people the freedom to find the role that suits them best and makes them happy.
An interesting phenomenon in this direction is the so-called "American dream". On one hand, it turns into symbol of consumer society, but on the other is a stimulus for many people to try to find their place in society, to look for new ideas and possibilities for realization.
As a summary, it can be said that the key word in the process of destroying the role model is "Freedom".
The end or the transformation of "communities"
According to Zygmunt Bauman (Bauman, 2003b) disintegration of communities is characteristic of postmodern society. Actually, he alludes to, as he calls them "warm communities", where a person feels secure, protected, understood and mostly at his place in the world. These communities are typical for the pre-modern world and are very close to the definition for "commune" in their essence. Actually, these communities appear to be miniature model of societyself-satisfying their basic needs and having all basic roles for ensuring it. Here the connection between role model disintegration and the disappearing of this type of communities is obvious. Actually, this process begins yet in the modern world. The appearing of new roles and the labour force migration from agriculture to industry destroys "warm communities" disturbing the balance of roles inside them and exhausting their young blood.
The ideology of communism tries to exchange that model for the so-called class model, but the social experiment "socialism" showed later on that the levelling and the artificial grouping of people according some Education and Social Sciences, 15-17 October 2018-Istanbul, Turkey features is not a successful social system. As George Orwell says, "all animals are equal but some are more equal than the others". Actually Bauman studies also the artificially established communities -"the ghetto community " and defines their basic differences regarding the "warm" ones.
Proceedings of ADVED 2018-4th International Conference on Advances in
Postmodern society is not deprived of communities. Particularly in the recent years, the development of technologies and especially Internet allowed people's communication from any point in the world. However, the new communities differ significantly from those of the pre-modern society. Actually, I think that the word "societies" could be used as basic principle for forming these; people's interests are of new type. The other feature is that in most of the cases the communication inside them is virtual, i.e. is conducted with the help of technical means. However, this does not prevent the participants from feeling understood and being at their place and to some extent even protected. Typical examples for such communities or societies are the forums. There are several big forums in Bulgaria having numerous members, which, beside the virtual communicating, conduct active social and charity activity. They are ready to help their members as well as everyone else, who needs. Such forums are "BGMamma", "Offroad", etc. The MMO (mass multiplayer online) computer games that gained great popularity in the last 10 years have similar effect. The community that forms in them is often international. It is not a surprise at all to see in such a game an American and a Russian fighting shoulder to shoulder for a common cause. This type of games does not limit their activity to only entertainment for people but participate actively in scientific and social projects (Eve online). Some create parallel worlds with completely functioning economic system and operate with real monetary units (Entropia, Second Life). It is already real the possibility for people to make their living through work in virtual game worlds.
The so-called social media (Facebook, G+, Twitter, etc.) are a comparatively new phenomenon. They create "friendships" between people and allow them to share news, opinions, gossips, but also knowledge.
Generally, virtual communication and communities are new fields that are to be subject of serious studies and analysis.
The new generations (Y, Z, α)
In (McCrindle, 2009a) the Australian sociologist Mark McCrindle develops the thesis about modern generations. As the author states, the book is directed mainly to parents and teachers in order them to become acquainted with the characteristics of youngsters and to know how to raise and educate them. At first sight the topic is not connected to postmodernism and security but, as we are going to see further, there is a significant difference in the way various generations think and accept the world. This means that when they take he power in their hands at state and world level, they would take decisions and would act just according their established view of life.
Generations are groups of people, who are born and raised within the same period of time and have, if not identical, then close views and view of life, common culture and value system. The main factors that shape each generation are the cultural-social environment, where they form as personalities and the technologies that accompany their growing. It might be said that technologies start playing significant role in the forming of the new generations during the last century.
It is well known that in times of social cataclysms, of accelerated technological development or of mass migration processes the difference between generations sharply grow and affect their essential characteristicssometimes radically change the way of life, the value dominants, their evaluation of the past and future. As a result, generation waves shorten significantly, and they come each 10-15 years instead the standard 20-25 years. Children of various generations, particularly in the listed conditions, differ from each other significantly.
Representatives of the following generations live in the world to date: the so called generation of buildersborn from 1930 to 1945; the baby boomersborn after the end of the Second World War to about 1965; Xthe children born from 1965 to 1980; Ythe ones born from 1980 to 1995; Zthe children that were born in the period 1995 -2010 and the Alpha generation of the ones born since 2010 to date.
Of course, this division is relative and depends highly on the place in the world, where the children are born, which defines social, cultural and technological differences within the same generation. If, anyway, we accept the generations division presented by McCrindle, to date the representatives of the Y generation have already entered social and public life and even have representatives of theirs in the authorities' structures at regional and national level. For comparison, the newly elected president of France -Emmanuel Macron is born in 1977. The Z generation is at the entrance of higher education, and the Alpha generationof school.
Proceedings of ADVED 2018-4th International Conference on Advances in Education andSocial Sciences, 15-17 October 2018-Istanbul, Turkey The X generation met and "conducted" to a high degree the digital revolution. The Y generation grew in the course of it, and the Z generation is practically born in a "new" digital world.
According to the factor degree of use of information technologies, the following four categories of people outline: Strangers to digital. These are the generations of Elders and Builders. A characteristic of theirs is that they are complete predecessors of technologies. Internet, podcast, on-line games, social networks are completely strange terms to them; Digital Emigrantsthe Baby boom generation. They have matured without digital technologies. A part of them take advantage of new technologies successfully, and the rest accept them reluctantly; Adapting themselves to digitalthe X generation. Digital technologies start appearing (in large-scale) in their teen years. They eagerly embrace the new technologies; Digital natives -Y, Z and Alpha generations. They live entirely dipped in the digital technologies.
We will review in short the major characteristics of the basic generations that perform more essential role in public life to date.
Baby boomers generation: Born and grown after the end of the Second World War, in the period of sharp confrontation between both, existing then, social systemsthe Cold War. However, the period is comparatively peaceful and saturated, especially in the West, with movements and fights for freedom, equality and democracy. The media that shaped their mind are the newspapers (written word) and the radio. This is time of powerful ideological treatment of public consciousness. As a whole, baby boomers are responsible people, socially and communally engaged. To date, they are retired or just before retiring and their possibilities to influence significantly social processes are comparatively limited. As mentioned above, they are digital immigrants, although they have actively participated in the conduction of information revolution. New technologies are strange to them and comparatively small part of them works successfully with modern technical and information tools.
The X generation: They are born and grown in the 70-80 s of the 20 the century, a period distinguished for peaceful existing together, the start of disarmament processes and relative resumption of the connections on both sides of the "Iron Curtain". Television is the main media that formed them as personalities. Information from all over the world started reaching them despite ideological treatment and censorship, through it. The processes of globalization come into existence comparatively slowly. They are known as "the children with key around the neck" -raise by busy with work and social activities parents, left to define their daily life and priorities themselves. This logically forms them as individualists and people, capable of taking decisions and respectively of bearing the responsibility for them. To date, they comprise the main part of labour force and authority structures worldwide. Regarding technologies, they are in the base of all technological revolutions conducted in the recent years and, as we mentioned above, accept novelties readily and take advantage at almost full value of their possibilities.
The Y generation: They are born and grown in the boom of computers, which turn to be their major toy. Internet appeared in the period of their adolescence; still it is new technology for them. Democracy bloom is the social environment that formed their personalitiesthe period of the Berlin wall fall, of world opening and of a lot of freedom. These gave them many choices and one of their important characteristics is that they have the right of choice as unabolished datum. It might be said this is the first generation of postmodern society. The world is really "one big village" for them.
The Y generation children are raised by the late baby boomers and the early X -parents that are excessively caring, controlling and imposing their views. These form them as people, who follow rules but are dependable on the others approval. Therefore, they are highly team-oriented but not in the positive meaning of the term. There is rather indecisiveness and fear to take personal responsibility while taking whatever decisions about them and that is why they need team for distributing the responsibility. They manage difficultly with individual assignments, not because of lack of knowledge but because of fearing to take individual responsibility.
Another characteristic feature of this generation, originating mainly from the technological environment of their growing up is the speed and multichannel perception of information. They are capable of watching video, listening to sound and perceiving text information at the same time through and from various channels and of various natures. Modern televisions take advantage of that feature and broadcast various information at the same time, mainly in the news emissions. Actually, the Ys "scan" information flows without entering into the contents. They pay a little bit more attention to what draws their attention but as a whole, perception stays at shallow level. All this leads to forming non-linear, visual thinking in them.
They are quite pragmatic, strongly oriented towards growing in the career. On the other hand, they are highly dependable on entertainment and perceive work and career only from the point of view of ensuring means for more and more quality entertainment. Here are the roots of the fact that they do not tend to conduct any activities outside their direct obligations and working hours as well as activities not corresponding to their personal goals and ambitions.
One, according to me, among the most important characteristics of this generation is the lack of authorities. As McCrindle says, with them a process of knowledge flowing from the younger to the older generations is for the first time observed in human history. As far as they are considerably better acquainted with new technologies, cases when they are made to teach representatives of previous generations and even their parents and teachers to work with them are not exclusion. This naturally leads to lack of respect towards "the wise elderly". On the other hand, a principle of theirs is "Google knows everything", which leads to not understanding the requirement to fill their heads with large, complicated and "unnecessary" knowledge. They rely on continuous "connectivity" to the net and on the possibility to derive necessary information from it. This, at its turn, becomes reason for nobody to be able to impress them with erudition, culture and knowledge.
Only in some 5-10 years, this generation's representatives would comprise the main part of the labour force worldwide. Their representatives would occupy key positions in all authority structures at all levels.
The Z generation: As McCrindle says, if we square all features of the Y generation, we will get a vague notion of the Z generation's characteristics. I will not go into details about them, because, as I mentioned above, to date these children are at the entrance of higher education.
Alpha generation: Here I would only afford to quote a colleague and friend of mine, who characterizes them by two words -"homo tabletus".
Differences between generations are objective reality that affects social and political processes throughout the world. Perhaps it is suitable here to mention an interpretation of one of the Old Testament's parables, namelywhy Moses, taking Judaic people out of the slavery in Egypt, walked around the desert for 40 years before taking it to the Promised Land? The answer, although rude is very demonstrativein order the generation that remembers slavery to disappear and the new Judaic community to start living without that psychological, emotional and social burden.
The following basic conclusion can be derived from the written up to here: Postmodern society appears to be transitional process for the society passing towards a new social system. As far as practically, we are in the beginning of the process, the characteristics of new cannot be defined and clearly outlined. Most probably the leading element of the new social relations would be based on technologies and scientific progress. The main characteristics for these processes are lack of stationariness, chaotic state and turmoil social processes and events as well as vibration around the main axes, i.e. staggering to unclear new ideas followed by returning back to the old and traditional (Slatinski, 2010a, Slatinski, 2004a.
CONCLUSION
Postmodern society possesses all characteristics of a transitional process, typical for complex systems going from one status to another. Historical check ups show that such processes are characterized by long duration and as far as to date we are practically in its beginning, the main characteristics of the coming new still cannot be outlined. However, obviously, not only technologies change but also the entire structure of society as well as the way of thinking of people, especially the one of the people of the new generations. This, quite logically and naturally, leads to the feeling of security abating in each single individual as well as in communities and in society as a whole. Thus, the scientific community interest towards studying that paradigm raises, but also the interest of "ordinary" people as far as the practical aspects of theses studies concern their everyday life and existence.
More and more, the world cannot be reviewed as multitude of states and regions that exist separately. Global problems require global solutions. This stimulates researchers to pay more attention to the greaterscale levels of security -"The security of a Community of states" and "Security of the World". There are really gaps and opportunities for studies and analysis in this direction. | 9,399.6 | 2018-10-15T00:00:00.000 | [
"Computer Science"
] |
The Transmuted Generalized Gamma Distribution : Properties and Application
Abstract: The generalized gamma model has been used in several applied areas such as engineering, economics and survival analysis. We provide an extension of this model called the transmuted generalized gamma distribution, which includes as special cases some lifetime distributions. The proposed density function can be represented as a mixture of generalized gamma densities. Some mathematical properties of the new model such as the moments, generating function, mean deviations and Bonferroni and Lorenz curves are provided. We estimate the model parameters using maximum likelihood. We prove that the proposed distribution can be a competitive model in lifetime applications by means of a real data set.
Introduction
Standard lifetime distributions usually present very strong restrictions to produce bathtub curves, and thus appear to be unappropriate for analyzing data with this characteristic.The three-parameter generalized gamma ("GG" for short) distribution (Stacy, 1962) includes as special models the exponential, Weibull, gamma and Rayleigh distributions, among others.It is suitable for modeling data with hazard rate function (hrf) of different forms (increasing, decreasing, bathtub and unimodal) and also useful for estimating individual hazard functions and both relative hazards and relative times (Cox, 2008).The GG distribution has been used in several research areas such as engineering, hydrology and survival analysis.Its probability density function (pdf) and cumulative distribution function (cdf) are given by (for x > 0) and G(x) = γ(ν, [x/a]p )/Γ(ν ), respectively, where Γ(ν ) = R ∞ ων −1 e−ω dω (for ν > 0) is the gamma function and γ(x, ν ) = R x ων −1e−ω dω is the incomplete gamma function.For the density function (1), a > 0 is a scale parameter and p > 0 and ν > 0 are shape parameters.The Weibull and gamma distributions are special cases of (1) when ν = 1 and p = 1, respectively.The GG distribution approaches the log-normal distribution when a = 1 and ν → ∞.We denote by a random variable having density function (1).
The GG distribution includes all four more common types of the hrf: monotonically increasing and decreasing, bathtub and unimodal (Cox et al., 2007).This property is useful in reliability and survival analysis.This model has been used in several applied areas such as engineering, economics and survival analysis.Yamaguchi (1992) used it for the analysis of permanent employment in Japan, Allenby (1999) Another family of distributions arises from the general rank transmutation (GRT) defined by Shaw and Buckley (2007).Suppose we have two cdfs F (x) and G(x) with common sample space, then the GRT is defined as (3) Note that the pair in (3) takes the unit interval [0, 1] and under suitable assumptions are mutual inverses and satisfy PRij (0) = 0, PRij (1) = 1.A quadratic rank transmutation map (QRTM) is obtained by considering PR12 (t) = t + λt(1 − t).Then, it follows that the cdfs are related by and the corresponding pdf is given by Here, G(x) and g(x) can be understood as the cdf and pdf of the baseline distribution, respectively.Note that this generator, called the transmuted class (TC) of distributions, is a linear combination of 9 Sadraque E.F.Lucena, Ana Herm´ınia A. Silva, Gauss M. Cordeiro 189 the baseline and Exp-G distributions with power parameter equal to two.Also, note that λ = 0 in (5) gives the baseline distribution.Further details can be seen in Shaw and Buckley (2007).
Some distributions belonging to the TC class have been proposed recently.Aryal and Tsokos (2009) studied the transmuted Gumbel distribution and its application to climate data.Aryal and Tsokos (2011) pioneered the transmuted Weibull distribution and used it for modelling the tensile fatigue characteristics of a polyester/viscose yarn.Khan and King (2013) intro-duced the transmuted modified Weibull distribution and Ashour and Eltehiwy (2013) defined the transmuted exponentiated Lomax model.In the present study, we provide some mathe-matical properties of a new lifetime model named the transmuted generalized gamma (TGG) distribution.
The sections are organized as follow.In Section 2, we define the TGG model.Some of its mathematical properties are investigated in Section 3.An application to a real data set is reported in Section 4. Section 5 ends with some conclusions.
The TGG distribution
The cdf of the TGG distribution can be obtained from (4) as where |λ| ≤ 1, a > 0, ν > 0 and p = 0. Henceforth, a random variable X having the cdf (6) is denoted by X ∼ T GG(λ, a, ν, p).We can prove that the TGG distribution is a linear combination of the GG and the exponentiated generalized gamma (EGG) distributions, the last one with power parameter two.The pdf of X is given by The TGG family model has many distributions as special cases.Some of the sub-models encompassed by the TGG family are listed in Table 1.
The hazard rate function (hrf ) of X is given by Some possible shapes of the density function (7) and the hrf are plotted in Figure 1.We conclude that the hrf r(x) is very flexible, assuming different shapes.
Properties of the TGG distribution
Let g(x) and G(x) are the pdf and cdf of a baseline model.A random variable is Exp-G distributed with power parameter α > 0 if its cdf and pdf are given by Hα (x) = G(x)α and hα (x) = αg(x)G(x)α−1, respectively.We obtain some properties of the TGG distribution.If the baseline is taken to be the GG distribution, the random variable Y is said to be Exp-GG distributed, say Y ∼ Exp-GG(α, a, ν, p) distribution.The density function of X can be expressed as 9) reveals that the TGG distribution is a mixture of GG distributions.This result enable us to derive some mathematical properties of the TGG distribution.
Moments
Some of the most important features and characteristics of a distribution can be studied through moments like tendency, dispersion, skewness and kurtosis.For a random variable Z having the GG(a, ν, p) distribution, the sth moment of Z becomes IE(Z s ) = as Γ(ν + s/p)/Γ(ν ).Based on equation (9), the sth moment of X is given by These moments can be computed numerically by standard statistical softwares.
Generating Function
The moment generating function (mgf) of Z, say Ma,ν,p (s) = IE(esZ ), has an explicit expression using the Wright function (Wright, 1935).It can be expressed as 9) and (10), the mgf of X reduces to where Ma,ν * ,p (x) is the mgf of Z with parameters a, ν * and p.This result reveals that the TGG mgf is a mixture of GG mgfs.
Skewness and Kurtosis
The qf of X, say x = Q(u), follows by inverting the cdf (6).It is given by where QGG denotes the qf of the GG distribution with parameters a, ν and p.
There are several robust measures in the literature for location and dispersion.The median, for example, can be used for location and the interquartile range.Both the median and the interquartile range are based on quantiles.From this fact, Bowley (1920) proposed a coefficient of skewness based on quantiles given by where Q(•) here is the qf of the TGG distribution given by (12).Moors (1988) demonstrated that the conventional measure of kurtosis may be interpreted as a dispersion around the values µ + σ and µ − σ, where µ is the mean of the distribution and σ is its standard error.Thus, the probability mass focuses around µ or on the tails of the distribution.Therefore, based on this interpretation, Moors (1988) proposed, as an alternative to the conventional coefficient of kurtosis, a robust measure based on octiles given by These measures are less sensitive to outliers and they exist even for distributions without moments.Figure 3 displays the plots of the Bowleys skewness and Moors kurtosis for the TGG distribution, respectively.
Application
We use a real data set to show that the TGG distribution provide a better fit than that one based on the GG distribution.We also emphasize that the fitted TGG distribution is better than the fitted EGG (Cordeiro et al., 2011), BGG (Cordeiro et al., 2012) and Marshall-Olkin generalized gamma (MOGG) distributions to these data.The corresponding cdf 's are given by where α > 0, λ > 0, Ga,ν,p (x) denotes the GG cdf and ̅ ,, () is the survival function.The data represent the times between successive failures (in thousands of hours) in events of secondary reactor pumps studied by Salman et al. (1999).Table 2 gives some statistic measures for these data, which indicate that the empirical distribution is skewed to the left and platycurtic.
proposed a dynamic model based on it and Cox The Transmuted Generalized Gamma Distribution: Properties and Application et al. (2007) presented a parametric survival analysis and taxonomy of its hrf.Some extensions of the GG distribution has emerged recently.For example, Pascoa et al. (2011) proposed the Kumaraswamy generalized gamma (KwGG) distribution, Ortega et al. (2011) proposed the generalized gamma geometric distribution and Cordeiro et al. (2012) defined the beta generalized gamma (BGG) distribution.Now, we de_ne an extended form of the density function (1) (for x > 0) given by , , () Where p is not zero and the other parameters are positive.The cdf corresponding to (2) becomes Several continuous univariate distributions have been extensively used in the literature for modelling data in many areas such as engineering, economics, biological studies and environmental sciences.However, applied areas such as lifetime analysis, finance and insurance clearly require extended forms of these distributions.Thereby, classes of distributions have been pro-posed in the literature by extending and creating new families of continuous distributions.These extensions generalize distributions giving more flexibility by adding one or more param-eters to the baseline model.They were pioneered by Gupta et al. (1998), who proposed the exponentiated-G ("Exp-G") distribution, by raising the cdf G(x) to a positive power parameter.Many other classes can be found in the literature such as the beta generalized (BG) family of distributions proposed by Eugene et al. (2002), the Kumaraswamy (Kw-G) family of distri-butions introduced by Cordeiro and de Castro (2011) and the exponentiated generalized (EG) family defined by Cordeiro et al. (2013).
) 1 Figure 1 :
Figure 1: Plots of the TGG pdf and hrf for some parameter values Setting u = x/a, we have By expanding the first exponential in power series and the result∫ −1 ∞ 0 exp(− ) = −1 Γ( + /) , we obtain The above equation holds for p = 0. Additionally, for a given p > 1, it can be expressed in terms of the Wright generalized hypergeometric function (Wright, 1935) defined by This function exists if 1 +∑ =1 − ∑ =1 > 0. By combining the last two equations, we obtain Then, from equations (
Figure 2 :
Figure 2: Bonferroni and Lorenz curves for some parameter values.
Figure 3 :
Figure 3: Skewness and Kurtosis of the TGG distribution for different values of λ.
Figure 4 :
Figure 4: Histogram and estimated densities of the BGG, TGG, MOGG and GG models (left panel) and empirical cumulative function (right panel) of times between successive failures (in thousands of hours) of secondary reactor pumps
Table 2 :
Descriptive statistics for the times between successive failures (in thousands of hours) of secondary reactor pumps | 2,591.4 | 2021-03-07T00:00:00.000 | [
"Mathematics"
] |
Online Reconstruction and Calibration with feed back loop in the ALICE High Level Trigger
ALICE (A Large Heavy Ion Experiment) is one of the four large scale experiments at the Large Hadron Collider (LHC) at CERN. The High Level Trigger (HLT) is an online computing farm, which reconstructs events recorded by the ALICE detector in real-time. The most compute-intense task is the reconstruction of the particle trajectories. The main tracking devices in ALICE are the Time Projection Chamber (TPC) and the Inner Tracking System (ITS). The HLT uses a fast GPU-accelerated algorithm for the TPC tracking based on the Cellular Automaton principle and the Kalman filter. ALICE employs gaseous subdetectors which are sensitive to environmental conditions such as ambient pressure and temperature and the TPC is one of these. A precise reconstruction of particle trajectories requires the calibration of these detectors. As first topic, we present some recent optimizations to our GPU-based TPC tracking using the new GPU models we employ for the ongoing and upcoming data taking period at LHC. We also show our new approach for fast ITS standalone tracking. As second topic, we present improvements to the HLT for facilitating online reconstruction including a new flat data model and a new data flow chain. The calibration output is fed back to the reconstruction components of the HLT via a feedback loop. We conclude with an analysis of a first online calibration test under real conditions during the Pb-Pb run in November 2015, which was based on these new features.
Abstract. ALICE (A Large Heavy Ion Experiment) is one of the four large scale experiments at the Large Hadron Collider (LHC) at CERN. The High Level Trigger (HLT) is an online computing farm, which reconstructs events recorded by the ALICE detector in real-time. The most compute-intense task is the reconstruction of the particle trajectories. The main tracking devices in ALICE are the Time Projection Chamber (TPC) and the Inner Tracking System (ITS). The HLT uses a fast GPU-accelerated algorithm for the TPC tracking based on the Cellular Automaton principle and the Kalman filter. ALICE employs gaseous subdetectors which are sensitive to environmental conditions such as ambient pressure and temperature and the TPC is one of these. A precise reconstruction of particle trajectories requires the calibration of these detectors. As first topic, we present some recent optimizations to our GPU-based TPC tracking using the new GPU models we employ for the ongoing and upcoming data taking period at LHC. We also show our new approach for fast ITS standalone tracking. As second topic, we present improvements to the HLT for facilitating online reconstruction including a new flat data model and a new data flow chain. The calibration output is fed back to the reconstruction components of the HLT via a feedback loop. We conclude with an analysis of a first online calibration test under real conditions during the Pb-Pb run in November 2015, which was based on these new features.
The ALICE Detector
ALICE (A Large Heavy Ion Experiment) is one of the four major experiments at the Large Hadron Collider (LHC) at CERN in Geneva [1]. While the other large experiments focus mainly on protonproton collisions, the main purpose of ALICE is to study heavy ion collisions. This enables the investigation of matter under extreme conditions of high temperature and pressure. In ion physics mode, the LHC collides lead nuclei at an interaction rate of around 8 kHz. ALICE employs several detectors to measure particle trajectories, energy deposition, and to identify particles.
The High Level Trigger (HLT)
The High Level Trigger (HLT) [2] is an online computing farm consisting of around 200 compute nodes for the online processing of the collisions recorded by ALICE. In contrast to the posterior longrunning offline physics analysis, the HLT performs the first processing and analysis in real time. This involves the reconstruction of the events, calibration of the detectors, data compression to reduce the amount of data stored to tape, online QA, as well as triggering the readout or tagging of physically interesting events. The HLT receives the data from the experiment via several hundred optical links. Inside the HLT, independent processing components perform individual steps of the reconstruction and processing. A custom data transport framework transfers the data between the processing components on the same servers via shared memory or on different servers via an Infiniband network. The maximum possible data input rate over the detector links is above 60 GB/s while in normal operation the HLT receives up to 30 GB/s of recorded data. The HLT is capable of full real time event reconstruction of the data recorded by ALICE. The computationally most intensive step is the reconstruction of particle trajectories, also called tracking.
Online reconstruction and online calibration
Several of the ALICE subdetectors are sensitive to environmental conditions such as ambient pressure and temperature. Precise reconstruction of particle trajectories requires the calibration of these detectors. Since the environmental conditions change during data taking, calibration must be performed regularly as a single calibration step at the beginning of a run is insufficient. Performing the detector calibration (or a part thereof) online in the HLT has several advantages: • If the calibration result is made available to the online reconstruction in the HLT, this can significantly improve the quality of online reconstruction.
• Performing the calibration while the data is recorded allows an immediate and better QA (Quality Assurance) already during data taking.
• Online calibration can render certain offline calibration steps obsolete possibly reducing the computational load during offline reconstruction.
• Looking ahead to future experiments like ALICE in LHC Run 3 or FAIR at GSI [3], data compression will rely on reconstruction which makes online calibration a necessity.
Approach for online calibration
On one hand, the ALICE calibration is based on reconstruction results like particle trajectories; on the other hand, the calibration results should be used to improve the reconstruction. This imposes a cyclic dependency between reconstruction and calibration. On top of that, calibration involves long running tasks that can last many seconds if not minutes. This makes it impossible to apply the calibration result to the reconstruction of the events that are used for the calibration. That would require caching of data for a considerable amount of time which is not possible in the HLT. Instead, the HLT employs a different approach for online calibration. The ambient conditions that affect the calibration are stable over a certain time period. Even in case of a sudden, total weather change, pressure and temperature will change smoothly such that the calibration created for a certain point in time is valid for a certain period. In the following, we will discuss the TPC drift velocity as an example. In this case, the calibration is assumed to be valid for the following 15 minutes. The calibration is performed in multiple consecutive intervals organized in a pipeline with three steps: • Step A: Incoming data is reconstructed using the last valid calibration (or the default calibration at the very beginning). Based upon the reconstruction, new calibration is computed. This is performed as long as needed such that sufficiently many events are processed to produce valid calibration. • Step B: The calibration result is propagated back to the beginning of the reconstruction chain such that it can be applied to the reconstruction. Necessary postprocessing steps are performed as well such as the preparation of a new TPC cluster transformation map 1 . • Step C: The new calibration is now used for the reconstruction as long as it is valid, or until a new calibration is available. Figure 1 illustrates the online calibration scheme. The steps are processed in a pipeline, such that while the calibration result is fed back (Step B, brown) and then used in the reconstruction (Step C, purple), a new calibration is computed in parallel (Step A, blue). Accordingly, the reconstruction is not calibrated for events recorded at the very beginning of a run before Step B finishes. The final reconstruction objects stored at the end of the process for offline use are valid for all events, also those at the beginning. This scheme is feasible as long as the total time of steps A, B, and C is below the stability interval, e. g. 15 minutes in case of the TPC drift time calibration. Naturally, a single instance of the calibration software running on a single processor can not compute the calibration objects in time. The HLT runs the calibration component on 172 of its 180 compute nodes with 3 instances of the calibration per compute node, for a total of 516 calibration processes that run in parallel. These instances process the incoming events in a round robin fashion and they regularly send their calibration data to one single calibration merger process running on a dedicated calibration compute node. This calibration node merges all the calibration data and creates
Uncalibrated
Calibrated During this period, calibration must be stable Figure 1. Illustration of the approach for online calibration. The blue boxes are the intervals where the calibration data is aggregated. Afterwards there is a short delay to prepare new TPC transformation maps based on the calibration and distribute them in the cluster (brown box). Finally, the HLT reconstruction uses the calibration as long as it is stable (purple box). the final calibration objects. The objects are stored for offline use and in parallel shipped back to all compute nodes by the feedback loop, such that the compute nodes can perform the reconstruction based on the new calibration. Figure 2 illustrates the process.
Requirements for online calibration
On top of the time constraints from the above approach, online calibration poses a couple of requirements on the reconstruction and data transport framework, which are discussed in the following. This subsection concludes with a list of requirements that needed to be solved to run the TPC drift time calibration in the ALICE HLT. The following sections show how ALICE deals with these challenges.
The custom data transport framework in the HLT can be seen as a directed graph. The fibres from the detector are input nodes in the graph, the network connections to DAQ (Data Acquisition) are output nodes, and the remaining nodes in the graph are processing components. The links in the graph define the data flow. All processing components process the events in an event-synchronous way, i. e. they process one event after another in a pipeline. Long running tasks as needed for the calibration pose a problem, because they stall the pipeline during the processing of the event. All processing components which are in the processing graph before the stalled process become stalled as well as soon as the buffers run full. Another issue is that the HLT processing chain must be loop-free for technical reasons by design. This stems from the event-synchronous approach. A loop in the graph would impose a cyclic dependency, i. e. two processing components will wait for each other to finish processing the same event.
The TPC drift time calibration matches tracks reconstructed in the TPC to tracks in the ITS. This means the HLT must at least provide track reconstruction for TPC and for ITS. On top of that, the reconstruction of the tracks for these detectors should be standalone, i. e. independent, in order to exclude the introduction of a bias. Although the ITS delivers significantly less data than the TPC, the tracking for ITS is compute-intensive due to combinatorics because the ITS sits in the center of ALICE where the track density is the highest. Also, scattering in the silicon layers is more complicated than inside the gas volume of the TPC. Finally, ITS tracks have only up to 6 hits compared to more than 100 in the TPC, which requires a much more robust seeding procedure in order to find all tracks. Therefore, the default HLT approach for ITS tracking is to prolong TPC tracks into the ITS and then collect the ITS hits close to the extrapolated tracks. This poses two problems: first, it could introduce a bias to use prolonged TPC tracks for the calibration of the TPC. Second, the prolongation into ITS where the track density is very high requires high precision and works well only if the TPC is calibrated. This leads to a chicken and egg problem.
Normal physics analysis in ALICE is based on the ROOT analysis and statistic software package. The reconstruction creates C++ ROOT objects (see Section 5), which are then used by the calibration tasks. One paradigm in the ALICE HLT for online calibration is to use the same code for the calibration tasks that is used offline. This reduces code duplication and simplifies the verification of the calibration code. Processing components in the HLT are individual processes which cannot exchange C++ objects via pointers because they to not share a common address space. The standard approach is serialization and deserialization of the object which causes significant CPU load. Hence, HLT software should use only flat data structures which can be shipped to other processes directly. This is incompatible with the standard Offline reconstruction programming model. Usually, calibration is performed relative to a default calibration. For instance, the TPC measures the clusters in row, pad, and time coordinates which must be transformed into spacial coordinates in order to run the track reconstruction and then the calibration. This transformation is performed using the initial, default calibration. The calibration uses the transformed clusters, and the exact transformation that was used to obtain the clusters's spacial coordinates has an impact on the calibration output. In other words, the calibration must run on clusters transformed according to the default calibration but not on calibrated clusters.
List of requirements for online TPC drift time calibration in the ALICE HLT: • Fast reconstruction algorithms for ITS and TPC.
• Independent standalone tracking algorithms for ITS and TPC.
• Support for long-running tasks in the HLT framework.
• Support for loops in the HLT data flow.
• Data structures enabling fast data exchange between processing components.
• The feedback loop must apply the calibration only for the tracking, but not to the calibration component.
• Calibration process and feedback loop must not take longer than the time during which the calibration remains stable.
Framework improvements
Adding the feedback loop directly to the loop-free HLT processing chain is a major architectural change. We wanted to avoid this since the current HLT data transport is thoroughly tested and we preferred incremental changes. In particular, we prefer adding additional processing or communication components over changing the basis of the framework. Considering the general approach for online calibration in the HLT, the calibration does not need to be event-synchronous. The calibration result is not fed back to the reconstruction of the same event but at some later point in time. This can happen asynchronously. The HLT uses additional source and sink components based on the ZeroMQ data transport library for new communication channels not foreseen in the original framework [4, §5.3]. The processing rate in the HLT is usually between 1 kHz and 3 kHz. It is impossible to run longrunning calibration tasks that last many seconds at this rate due to limited CPU resources. Hence, the calibration task has to run only for a subset of the events, which is totally sufficient to gather enough statistics for the calibration. Still, a long-running task blocks the chain even if it processes only a fraction of the events. The problem is that it stalls the processing of that event for too long such that it affects all the other processes in the chain. This means, even if the average processing time of the events would be short enough because many events are skipped, single events that need much time already block the chain. This deficit of the event-synchronous processing approach is overcome by complementing the HLT with asynchronous processing components, which spawn a subtask in an asynchronous individual process, and then continue the fast synchronous event processing. The result of the subtask is then used as soon as it is available [4, §5.1]. In order to protect ALICE data taking from fatal errors in the calibration code, the asynchronous processing can optionally happen in a completely isolated operating system process. In this way, a possible memory leak or segmentation violation does not interfere with data taking but only breaks the processing of the calibration for few events until the process is restarted.
Track reconstruction in the TPC
The HLT employs GPU-accelerated track reconstruction for the ALICE TPC that is based on a Cellular Automaton principle to build track seeds: short track candidate of around 5 to 10 hits. Afterwards, it uses the Kalman filter for track fitting and track following [5,6]. The HLT employed 64 NVIDIA Geforce GTX480 GPUs during LHC Run 1. The new HLT farm for LHC Run 2 is now equipped with 180 AMD FirePro S9000 GPUs. One major concern with the original GPU tracker code was that it was based exclusively on the NVIDIA CUDA framework and was thus vendor-dependent. For Run 2, an OpenCL implementation was created, which uses the AMD OpenCL C++ kernel languages extensions. The code is written in a generic way, such that the same source code can be used with CUDA, OpenCL, and also for the CPU [4, §4]. This gives the greatest possible flexibility for hardware selection and reduces the maintenance effort.
Parallelization is implemented such that during the Cellular Automaton phase, 1 GPU thread handles 1 TPC cluster. During the track following, 1 GPU thread handles 1 TPC track. This allows for simple and efficient parallel processing, as the threads can operate almost fully independently. Considering the amount of TPC clusters and tracks in a typical Pb-Pb event as well as the number of threads a GPU executes concurrently, this scheme resulted in full GPU utilization at the time it was implemented, e. g. for the GTX480 GPUs during Run 1. Naturally, pp events with much less tracks do not use the GPUs efficiently, which is no problem however, because the GPUs are fast enough for pp reconstruction anyway. In the meantime, the number of threads a GPU needs to execute in parallel to achieve full performance has increased significantly. The number of tracks increased only slightly when the LHC moved from 3.5 TeV/Z to 6.5 TeV/Z in Run 2. Overall, we see that now even single central Pb-Pb events are unable to load the S9000 GPUs of Run 2 to the full extent. Looking ahead to Run 3, this problem becomes even more severe because new GPUs will feature even more parallelism.
The maximum data rate the TPC can deliver to the HLT is defined by the number of optical links times the link speed. The HLT was tested with data replay at this maximum possible speed and it was proven to be able to run the full TPC reconstruction still having some margin to run reconstruction for other detectors. Thus, development of HLT TPC track finding for Run 2 is complete. Now, the focus lies on testing improvements for the Run 3 online computing facility already in the HLT during Run 2.
As first step, we increase the parallelism by processing multiple events on one GPU concurrently. While during the implementation of the first GPU tracker version, GPUs could only execute one kernel at a time, modern GPUs can execute many independent kernels, even from different host applications, in parallel. The HLT can run several instances of the tracker on one GPU processing multiple events concurrently, as long as there is enough GPU memory, which in case of Run 2 is sufficient for 3 central Pb-Pb events with pile up. Table 1 shows a first result. Naturally, the wall time for a single event increases, which is not relevant. (For reference, the time between a central Pb-Pb event reaches the HLT input until it leaves the HLT is in average around 3 seconds.) In contrast, the processing throughput increases by 31.8 %. Using all compute nodes at full capacity, the HLT can reconstruct 40,000,000 TPC tracks per second. Fortunately, the foreseen TPC readout scheme for Run 3 plays into our hands. ALICE plans a TPC upgrade with continuous readout. The online computing facility will no longer process single events but time frames with many overlapping events. This will offer enough parallelism to use the GPUs to the full extent. It is not clear yet whether the GPU memory will be sufficient to process the track finding of an entire time frame at once. One solution is to slice the time frames along the time / beam axis, process the slices individually, and merge the track segments afterwards. The feasibility of this approach is already shown by the track reconstruction in the current HLT, which processes the TPC sectors individually but concurrently, and merges the tracks segments afterward [5].
Another foreseen development for Run 3 is the porting of additional online reconstruction components on the GPU, in particular, because modern GPUs can execute multiple kernels at the same time. Canonical candidates for GPU adaptation are the ones before and after the GPU track finding in the HLT reconstruction chain (see Fig. 7 in the last section). As a first prototype, the final track refit, a substep of the TPC track merger and track fit component that merges the track segments reconstructed by the GPU track finding was ported to GPUs. The prototype needs in average 6.8 ms per event for the refit compared to 125.5 ms on a single CPU Core (Intel Nehalem 2.8 GHz, the same events as in Table 1). The bottleneck in this case is the PCI Express transfer, which takes longer than the computation itself. Therefore, the most reasonable approach is to bring multiple successive components of the HLT chain onto the GPU, such that the data must not be transferred forth an back in between.
We plan to implement every new GPU processing component using a generic shared source code for CPU and GPU -in the same way as for the TPC tracking. In addition, the implementations should be flexible enough to be applicable in the new software framework for online computing in Run 3 but also in the current HLT framework. This will allow us to test new developments and benefit from them already in Run 2.
Scheme of ITS tracking in the HLT
The initial ITS tracking in the HLT which starts from prolonged TPC tracks is unsuited for online calibration. Conversely, a full standalone ITS tracking has to deal with the excessive combinatorics inside ITS and would need too many CPU resources. The HLT thus employs a hybrid approach with two independent ITS tracking branches. The first is the traditional chain with prolonged TPC-ITS tracks. In order to ensure good tracking results, the TPC needs to be calibrated. In parallel, a second chain performs fast ITS tracking. This is a fast standalone ITS tracking with some limitations. In particular, it is not required to have maximum efficiency, i. e. it does not need to find all tracks. It only needs to find sufficiently many tracks for the ITS TPC matching in the calibration and for the luminous region estimation (see Section 4.3). The scheme is visualized in Fig. 3. The following subsection describes the ITS standalone tracking in the HLT.
ITS standalone tracking ESD Creation
With calibrated TPC Without calibration Figure 3. Approach for ITS tracking in the HLT with two branches. The lower branch runs the ITS standalone tracer providing ITS tracks in any case. The upper branch uses TPC to ITS prolongation resulting in a higher efficiency, but it needs a calibrated TPC in order to achieve reasonable resolution.
Fast ITS standalone tracking
The aim of the online ITS standalone reconstruction is to provide a sample of primary ITS tracks sufficient for calibration, without attempting to maximize the track-finding efficiency. Instead, the emphasis is made on the processing speed and correctness of the tracking (minimization of random clusters attached to tracks). The algorithm uses as an input the ITS clusters from all 6 layers and the primary vertex provided by the HLT components running upstream. The latter is obtained as a position to which the maximum number of vectors connecting the two innermost ITS layers (silicon pixel detectors, SPD) converge. The track reconstruction starts by rebuilding these vectors (tracklets), i. e. finding pairs of SPD clusters seen under nearly the same angle from the vertex point. The procedure is the optimized version of the off-line tracklet finding described in [7].
At the following step the tracks are found by following the tracklets to ITS layers at larger radii, where the ITS Silicon Drift and Strip Detectors (SDD and SSD, respectively) are . For every tracklet a Kalman filter is initialized by the momentum estimated from the vertex and a pair of SPD clusters. It is propagated outwards starting from the vertex, considered as a measured point. The Kalman prediction/update is done first with already attached SPD clusters, then on the SDD and SSD clusters closest to extrapolation point, provided they meet a strict track-to-cluster χ 2 cut. The magnetic field is taken to be a constant solenoidal one (approximation correct to ∼ 10 −3 in the ITS volume) while the multiple scattering is accounted for using average material budget per layer. At least 4 clusters are required per track in the reconstruction in standard layout with all 6 ITS layers present and tracks with 2 consecutive layers without contribution are rejected. Once the outermost layer is reached, the track outward kinematics is recorded (for the further matching with TPC) and inward Kalman propagation is performed to obtain the kinematics at the vertex region.
The benchmark with simulated data (on a single core of i7-2600 CPU @ 3.40GHz) shows in pp and p − Pb events more than 2 kHz processing rate with a reconstruction efficiency (with respect to reconstructable Monte Carlo tracks) exceeding 90% at p T > 300MeV/c and fake tracks contamination staying below 3%. Minimum bias Pb − Pb events are reconstructed at 18Hz rate (40Hz skipping 15% of most central collisions). The efficiency above p T > 300MeV/c drops to ∼ 85% and the fake tracks contamination is below 10%.
Luminous region estimation based on ITS tracking
The volume where the particle beams overlap and most interactions take place is called the luminous region (LR). It can only be reliably determined by the experiment itself by means of reconstructing and localizing each interaction and statistically determining the size and shape of the beam overlap volume. This measurement, performed in (quasi-) real time, is used by the LHC to optimize the beam parameters. The computation of LR requires accurate tracking close to the interaction point. TPC tracks extrapolated ≈ 80 cm towards the vertex have insufficient accuracy in this respect if the TPC is not fully calibrated. A more robust method is to use ITS standalone tracking which does not require large extrapolation steps or the same degree of time dependent alignment and calibration as the TPC. The ITS standalone tracker has sufficient tracking efficiency and resolution. This method is successfully implemented in the HLT and provides real time LR information to the LHC.
The vertex determination is implemented in the same component which performs the ITS standalone tracking. First the tracks are propagated to the beam-line, then a fast linear fitter is deployed to find a vertex as a point minimizing the distance of the closest approach for maximum number of tracks. The outlier tracks rejection is achieved by means of bi-square weighting filter, as described in [8].
Connecting The Dots 2016
A flat data structure
As discussed in 2.1, one of the crucial features of the calibration procedures running in the HLT is the fact that they use the same code as used in the offline calibration. This was developed to work on the output of the offline reconstruction, i. e. on the C++ structures called Event Summary Data (ESD) on which the ALICE analyses are based. The ESDs are not suited to be used in the HLT framework since the multiple processes that may run in the HLT do not share the same address space. This implies that shipping C++ objects (possibly in the form of ROOT objects) between HLT processes can be done only through their serialization and deserialization, which introduce an unacceptable overhead in the framework for every process that would access the ESDs. To overcome such difficulty when wanting to run in the HLT any task that would take as input the standard ESDs when running it offline (e. g. calibration tasks, Quality Assurance, analysis...), the output of the HLT reconstruction is stored in flat structures, which are exchanged between the different HLT reconstruction, calibration, and QA components complying with the HLT framework requirements.
The development and implementation of the flat ESDs is based on the C++ concepts of inheritance and polymorphism. A common base class for the flat ESDs and the standard ESDs allows to run the same calibration algorithm online in the HLT and offline.
While in the ESDs the different objects that come out of the reconstruction are stored inside the ESDs as different C++ objects, in the flat ESDs they are simply stored consecutively in memory and the bookkeeping of their position in the object is used to access them. One special case among these objects is the so-called ESD friends, which is an object meant to store information not needed for analysis, but specifically for calibration. For example, the information about the clusters used to form the tracks are there. Since every track can have at maximum 159 clusters associated in the TPC (due to the same number of pad-rows in the detector), the amount of data in the friends is very large, and the size of the friends can be several times the one of the ESD tracks.
Benchmarks
We have evaluated the performance of the flat structures compared to standard ESDs with several benchmarks. These are based on the output of the HLT reconstruction when stored in the standard ESD objects, or in the flat structures. Figure 4 shows the time needed to create and manipulate (serialize, deserialize) the standard ESDs and the flat ESDs as a function of the track multiplicity of an event (based on a sample of around 500 Pb-Pb events taken in 2015). As one can see, the time for the different stages grows linearly with the track multiplicity. The time needed to create the flat ESDs including the friends is ∼10 times smaller than the time needed to create the standard ESDs with friends. The serialization (and deserialization) of the standard ESDs including friends is between 2 and 3 times (and ∼ 1.5) times slower than their creation. The reinitialization of the flat ESDs is a step needed in order to restore the virtual table of the flat ESD object, which is needed for the common interface of ESD and flat ESD. The plot also shows the case when the standard ESDs are created and serialized without the friends, although not appropriate for calibration purposes. The time in this case is of the same order as the one needed to create the flat ESDs with the full calibration information. The behaviour of the standard ESDs without the friends compared to the flat structure is better visible in Fig. 5: also in this case the flat structures with the complete calibration information are more performant. Figure 6 shows the time needed by the TPC drift time calibration task when running on standard ESDs and on flat ESDs. As one can see, the task performance is comparable when using as input the standard ESDs or the flat ones, being a few percent faster in case of the flat ESDs at high multiplicities. This is an additional minor advantage of the flat ESDs. The main advantage of the flat structures lies in the speed with which they are created and can then be shipped between different HLT components, more than in how fast they can be processed by the calibration task. This is anyway running asynchronously with respect to data taking, and as such it does not impose any performance limitation. Table 2 summarizes the resources needed to create the standard ESDs and the flat ESDs (with and without friends) in the HLT in number of CPU cores. Considering the total amount of resources available in the HLT cluster, i. e. ∼ 2000 CPU cores, one realizes immediately that while the creation of standard ESDs as natural output of the HLT reconstruction is an expensive but well affordable process, adding the extra information needed to perform calibration tasks (i. e. what is stored in the friends) would require half of the total HLT resources, which is completely infeasible. In opposite, the flat ESDs can be created in the HLT together with the extra calibration data using a limited amount of resource, which is even smaller than those needed to generate the standard ESDs without friends. Therefore, all the requirements of online calibration in the HLT listed in Section 2.1 are met. A first test of the above-described new features with online calibration was performed in December 2015. The calibration components were running online under real conditions during Pb-Pb data taking in ALICE at LHC design luminosity. This test proved that the HLT can handle the online calibration in a high load scenario with the highest data rates.
A first analysis of the processing rate of the calibration component shows the following: the processing time of the calibration task for individual events can be quite long. In particular, it depends superlinearly on the number of tracks in the event because of the TPC ITS matching. During this first test, it took up to 15 minutes for the largest events, which is too long in order to use the results in the feedback loop. Fortunately, the fraction of events which need more than 5 minutes for the calibration is only 2 %. In particular, it is not necessary to run the calibration for the large events at all. It is essential to have enough tracks for the TPC ITS matching in total. It is faster to process the same number of tracks in many smaller events compared to fewer large events. Excluding the small fraction of events Figure 7. Overview of all processing components and data flow in the HLT.
Connecting The Dots 2016 that take very long, the processing of 5000 events (Step A), which is considered sufficient for the TPC drift time calibration in Pb-Pb, takes less than 5 minutes. The feedback loop (Step B) takes around 20 seconds. If the calibration is used for 5 minutes afterwards (Step C), the total time is below the limit of 15 minutes where the calibration is stable. On top of this, this first test used the plain offline calibration software. For the future, we plan to apply code optimizations to speed up the calibration task. | 8,157.8 | 2016-01-01T00:00:00.000 | [
"Physics"
] |
On the existence of solutions of a set-valued functional integral equation of Volterra–Stieltjes type and some applications
This paper is concerned with the existence of continuous solutions of a set-valued functional integral equation of Volterra–Stieltjes type. The continuous dependence of the solution on the set of selections of the set-valued function will be proven. As an application, we study the existence of solutions to an initial-value problem of arbitrary fractional-order differential inclusion.
Introduction
Consider the set-valued functional integral equation of Volterra-Stieltjes type First, we establish some notation. We will denote by I = [0, T] a fixed interval, where T > 0 is arbitrarily fixed and by C(I) = C[0, T] the Banach space consisting of all continuous functions acting from the interval I into R with the standard norm Define the Banach space X = C(I) × C(I) with the norm (x, y) X = x C + y C . Definition 2.1 Let F be a set-valued map defined on a Banach space E, f is called a selection of F if f (x) ∈ F(x), for every x ∈ E and we denote by the set of all selections of F (for the properties of the selection of F see [1][2][3]).
Definition 2.2 ([4])
A set-valued map F from I × E to family of all nonempty closed subsets of E is called Lipschitzian if there exists k > 0 such that, for all t ∈ I and all x 1 , x 2 ∈ E, we have h F(t, x 1 ), F(s, x 2 ) ≤ k |t -s| + |x 1x 2 | , (2.1) where h(A, B) is the Hausdorff distance between the two subsets A, B ∈ I × E.
(For properties of the Hausdorff distance see [5].) The following theorem [5,Sect. 9, Chap. 1, Th. 1] assumes the existence of a Lipschitzian selection. In what follows, we discuss a few auxiliary facts concerning functions of bounded variation (cf. [7]). To this end assumes that x is a real function defined on a fixed interval [a, b]. By the symbol b a x we will denote the variation of the function x on the interval [a, b]. In the case when b a x is finite we say that x is of bounded variation on [a, b]. In the case of a function u(t, s) =: [a, b] × [c, d] → R we can consider the variation q t=p u(t, s) of the function t → u(t, s) (i.e., the variation of the function u(t, s) with respect to the variable t) on the interval [p, q] ⊂ [a, b]. Similarly, we define the quantity q s=p u(t, s). We will not discuss the properties of the variation of functions of bounded variation, we refer to [7] for the mentioned properties. Furthermore, assume that x and φ are two real functions defined on the interval [a, b]. Then, under some extra conditions (cf. [7]), we can define the Stieltjes integral (more precisely, the Riemann-Stieltjes integral) of the function x with respect to the function φ on the interval [a, b] In such a case, we say that x is Stieltjes integrable on the interval [a, b] with respect to φ.
In the relevant literature, we may encounter a lot of conditions guaranteeing the Stieltjes integrability [7][8][9]. One of the most frequently exploited condition requires that x is continuous and φ is of bounded variation on [a, b].
Next, we recall a few properties of the Stieltjes integral which will be used in our considerations (cf. [7]).
Lemma 2.5 Let x 1 and x 2 be Stieltjes integrable functions on the interval
In the sequel, we will also consider the Stieltjes integrals of the form b a x(s) d s g(t, s), where g : [a, b] × [a, b] → R and the symbol d s indicates the integration with respect to the variable s. The details concerning the integral of such a type will be given later.
Existence of at least one continuous solution
Consider now the set-valued integral equation (1.1) under the following assumptions.
(i) p : is a Lipschitzian set-valued map with a nonempty compact convex subset of 2 R + . (iii) ϕ : I → I is continuous function.
(iv) f 2 : I × R → R is continuous and there exist two constants a and b such that (v) The function g i is continuous on the triangle i , for i = 1, 2, where (vi) The function s → g i (t, s) is of bounded variation on [0, t] for each t ∈ I (i = 1, 2). (vii) For any > 0 there exists δ > 0 such that, for all t 1 ; t 2 ∈ I such that t 1 < t 2 and t 2t 1 ≤ δ, the following inequality holds: It is clear that, from Theorem 2.3 and assumption (ii), the set of Lipschitz selection of F 1 is non-empty. So, the solution of the single-valued integral equation where f 1 ∈ S F 1 , is a solution of inclusion (1.1).
It must be noted that f 1 satisfies the Lipschitz selection Obviously, we will assume that g i satisfies assumptions (v)-(viii). For our purposes, we only need the following lemmas.
Further, let us observe that based on Lemma 3.3 we infer that there exists a finite positive constant K i , such that where T > 0 is arbitrarily fixed and i = 1, 2.
We now introduce some functions that will be useful in our further studies: In our considerations, we will examine the double Stieltjes integral of the form , 2) and the symbol d y indicates the integration with respect to the variable y (similarly, we define the symbol d s ). Now, let then the nonlinear functional integral equation (3.1) can be written in the form Proof Define the set Q r by where r = p * +f * 1 K 1 1-kK 1 + aK 2 1-bK 2 with kK 1 < 1, bK 2 < 1. It is clear that the set Q r is nonempty, bounded, closed and convex. Let A be any operator defined by where for u = (x, y) ∈ Q r , and from Remark 3.5 we have Then Then From the above estimate we derive the following inequality: Hence, AQ r ⊂ Q r and the class {Au}, u ∈ Q r is uniformly bounded. Now, for u = (x, y) ∈ Q r , for all > 0, δ > 0 and for each t 1 , t 2 ∈ [0, T], t 1 < t 2 , such that Further, for the operator A and u ∈ Q r we have Then This means that the class of functions Au is equi-continuous on Q r . Then by the Arzela-Ascoli theorem [11] the operator A is compact. It remains to prove the continuity of A : Q r → Q r . Let u n = (x n , y n ) is a sequence in Q r with x n → x, and y n → x and since f 1 (t, y(t)) and f 2 (t, x(t)) is continuous in C[0, T] × R then f 1 (t, y n (t)) and f 2 (t, x n (t)) converge to f 1 (t, y(t)) and f 2 (t, x(t)), thus f 2 (t, x n (ϕ(t))) converges to f 2 (t, x(ϕ(t))) (see assumption (ii)). Using assumption (iii) and applying Lebesgue dominated convergence theorem, we get
(t), A 2 x(t) = Au(t).
Since all conditions of the Schauder fixed-point theorem [12] hold, A has a fixed point u ∈ Q r , and then the system (3.3), (3.2) has at least one continuous solution u = (x, y) ∈ Q r , x; y ∈ C[0, T].
Consequently, the functional integral equation (3.1) has at least one solution x ∈ C[0, T].
Existence of a unique solution
In this section, we study the uniqueness of the solutions x ∈ C[0, T] of the functional integral inclusion (1.1). Proof Let x 1 and x 2 be two solutions of Eq. (3.1), then Using the Lipschitz condition for f 1 , we obtain Using Lipschitz condition for f 2 , we obtain Then This proves the uniqueness of the solution of the functional integral equation (3.1). Proof Let f 1 (t, x(t)) and f * 1 (t, x(t)) be two different Lipschitzian selections of F 1 (t, x(t)) such that
Continuous dependence
then for the two corresponding solutions x f 1 (t) and x f * 1 (t) of (1.1) we have Thus from last inequality, we get This proves the continuous dependence of the solution on the set S F 1 of all Lipschitzian selections of F 1 . This completes the proof.
Volterra integral inclusion of fractional order
In this section, we will consider the fractional integral inclusion, which has the form where t ∈ I = [0, T] and α ∈ (0, 1). Moreover, Γ (α) denotes the gamma function. Let us mention that (5.1) represents the so-called nonlinear Volterra integral inclusion of fractional orders. Recently, the inclusion of such a type was intensively investigated in some papers [13][14][15][16][17][18]. Now, we show that the functional integral inclusion of fractional orders (5.1) can be treated as a particular case of the set-valued functional integral equation of Volterra-Stieltjes (1.1) studied in Sect. 3.
Indeed, we can consider the functions g i (w, z) = g i : i → R (i = 1, 2) defined by the formulas Note that the functions g 1 and g 2 satisfy assumptions (v)-(viii) in Theorem 3.6; see [10,19]. Now, we can formulate the following existence results concerning with the Volterra integral inclusion of fractional order (5.1).
Existence of the maximal and minimal solutions
In this section, we establish the existence of the maximal and minimal solutions of the nonlinear Volterra integral inclusion of fractional order (5.1). It is clear that, from Theorem 2.3 and assumption (ii) of Theorem 3.6, the set of Lipschitz selections of F 1 is non-empty. So, the solution of the nonlinear functional integral equation of fractional order where f 1 ∈ S F 1 , is a solution of inclusion (5.1).
where one of them is strict.
Suppose f 1 and f 2 are monotonic nondecreasing functions in x, then Proof Let the conclusion (6.2) be false, then there exists t 1 such that From the monotonicity of the functions f 1 and f 2 in x, we get x(t 1 ) < y(t 1 ).
This contradicts the fact that x(t 1 ) = y(t 1 ). Then
x(t) < y(t).
Now, for the existence of the continuous maximal and minimal solutions of the nonlinear functional integral inclusion (6.1) we have the following theorem. To do this let x(t) be any solution of (6.1), then x(t) = p(t) + I α f 1 t, I β f 2 t, x ϕ(t) , (6.5) and also x (t) = p(t) + I α f 1 t, I β f 2 t, x ϕ(t) , x (t) = p(t) + I α f 1 t, I β f t, x ϕ(t) + I β + I α , x (t) > p(t) + I α f 1 t, I β f 2 t, x ϕ(t) . (6.6) Applying Lemma 6.2 and (6.5) and (6.6), we get From the uniqueness of the maximal solution (see [12]), it is clear that x (t) tends to m(t) uniformly in [0, T] as → ∞.
Similarly, we can prove the existence of the minimal solution. We set and thus we prove the existence of a minimal solution.
Differential inclusion
Consider now the initial-value problem of the differential inclusion (1.2) with the initial data (1.3). Proof Let y(t) = dx(t) dt , then the inclusion (1.2) will be y(t) ∈ I α F 1 t, I 1-τ y(t) . | 2,809.8 | 2020-02-07T00:00:00.000 | [
"Mathematics"
] |
A modular and divergent approach to spirocyclic pyrrolidines
An efficient three-step sequence to afford a valuable class of spirocyclic pyrrolidines is reported. A reductive cleavage/Horner–Wadsworth–Emmons cascade facilitates the spirocyclisation of a range of isoxazolines bearing a distal β-ketophosphonate. The spirocyclisation precursors are elaborated in a facile and modular fashion, via a [3 + 2]-cycloaddition followed by the condensation of a phosphonate ester, introducing multiple points of divergence. The synthetic utility of this protocol has been demonstrated in the synthesis of a broad family of 1-azaspiro[4,4]nonanes and in a concise formal synthesis of the natural product (±)-cephalotaxine.
Introduction
The pyrrolidine ring system is common to numerous pharmaceutical compounds and natural products of structural and biological importance. This is highlighted in its ranking as the 5 th most commonly occurring nitrogen-containing heterocycle in FDA-approved pharmaceuticals. 1 Furthermore, there is a widespread abundance of the pyrrolidine scaffold within the architecturally complex polycyclic ring systems of natural products. 2 In particular, the Cephalotaxus alkaloids (for example cephalotaxine, Scheme 1A) are characterised by a complex spirocyclic pyrrolidine scaffold, namely the 1-azaspiro[4,4]nonane framework, and as such have experienced a continuous wave of synthetic attention since their isolation. 3 This has been catalysed by the development of homoharringtonine, one member of the family, into an FDA-approved medication (Synribo®) for treatment of chronic myeloid leukaemia in patients resistant to the more standard tyrosine kinase inhibitors. 4 More recently, there has been a concerted effort in medicinal chemistry programmes to move towards fragment libraries comprising of more sp 3 -rich motifs due to their advantageous physicochemical attributes. 5 Furthermore, spirocyclic scaffolds possess uniquely rigid structures, not only enabling greater predictability of conformation using in silico docking but also contributing to a less severe decrease in conformational freedom, and therefore entropy, upon protein binding. 6 For these reasons, the development of efficient, broad-scope strategies for the construction of spirocyclic pyrrolidine systems, such as the 1-aza-spiro[4,4]nonane framework (Scheme 1A), has become a target of signicant importance in synthetic method development. 7 In this context, Gaunt and coworkers recently disclosed a general spirocyclisation method employing photoredox catalysis to generate a-amino radicals capable of 5-exo-trig/dig cyclisations (Scheme 1B (1)). 8 From another access point, and focussing on the role these scaffolds play in fragment-based drug discovery, Spring and co-workers reported the use of a ring-closing metathesis strategy to afford a broad class of spirocyclic structures. 9 The group went on to highlight qualitatively the molecules' favourable physicochemical properties as well as the diverse 3D chemical space these products occupy (Scheme 1B(2)). Additionally, an elegant report from Shibasaki and coworkers employed a Rh-catalysed intramolecular C-H amination of isoxazolidin-5-ones to construct a series of b-proline-derived spirocycles (Scheme 1B (3)). 10 In a continuation of our programme for the development of new approaches towards structurally complex, sp 3 -rich nitrogen-containing architectures, 7e,11 we recognised the importance of divergent and expedient access to this burgeoning chemical space. We reasoned that a modular reaction design, incorporating commercial or readily accessible building blocks, would provide multiple points of diversity, enabling an efficient and complementary route to the 1-azaspiro[4,4]nonane framework. Such a strategy could serve as the key complexitybuilding sequence in the synthesis of natural products, 12 organocatalysts, 13 or pharmaceutically relevant structures, and herein we wish to report our ndings.
Results and discussion
We envisaged exploiting an intramolecular Horner-Wadsworth-Emmons (HWE) reaction of a suitable b-ketophosphonate moiety with a pendant ketone as the key spirocyclisation reaction. 14 The high exergonicity of such a transformation could allow the formation of hindered and highly-substituted cyclopentenones. In order to construct this privileged HWE precursor, we predicted the reductive ring cleavage of a 2,3dihydroisoxazole moiety (referred to henceforth as isoxazolines) would efficiently reveal the desired ketone (Scheme 2B). This would be attractive due to the ready accessibility of such isoxazolines (2) via [3 + 2] dipolar cycloadditions between prolinederived nitrones (1) and terminal alkynesa well-documented and reliable method for the incorporation of the desired quaternary centre. 15 Furthermore, the use of proline-derived feedstocks, as well as simple terminal alkynes, would serve as a low-cost and widely available framework on which to develop and apply our methodology. The requisite b-ketophosphonate functionality would arise from the condensation of a suitable phosphonate anion with the pendant ester in 2. Notably, numerous phosphonate esters are commercially available or are otherwise rapidly accessed from Michaelis-Arbuzov reactions and alkylation of dimethyl methylphosphonate. By exploiting these simple and dependable transformations, one could afford a broad range of ketophosphonate precursors (3). However, the feasibility and generality of this cycloaddition/phosphonate condensation sequence was unknown and would need to be probed in order to assess fully the functional group tolerance of the subsequent spirocyclisation reaction. In addition, while related N-O cleavage reactions are well-documented, 16 this ambitious reductive cleavage/HWE cascade on such a densely Scheme 3 Scope of the three-step sequence with respect to (A) the alkyne, (B) the phosphonate ester and (C) the nitrone. a 6 eq. of alkyne, b reaction time ¼ 3 days, c reaction time ¼ 4 days and toluene as solvent, d reaction time ¼ 2 days e extra eq. of base used, f modification to general procedure used (see ESI †), g extra equivalent of phosphonate and reaction temperature held at À78 C for 7 h. functionalised isoxazoline is, to the best of our knowledge, unreported. In order to orchestrate the desired cascade sequence successfully, a carefully chosen reductant and set of reaction conditions would need to be established.
Nitrone 1a, derived from L-proline benzyl ester, was chosen as a model substrate and its cycloaddition with phenylacetylene was investigated. Aer a concise solvent screen (ESI, Table S1 †), effective conditions were established to afford isoxazoline 2d in 67% yield as a single regioisomer. 15c Subsequent condensation of dimethyl methylphosphonate with the ester proceeded in high yield (70%) using LiHMDS as a basea notable result given the considerable steric repulsion arising from the aquaternary centre (ESI, Table S2 †). Despite this progress, anticipating that the conjugated system in aryl-substituted isoxazolines may impair the desired reductive cleavage, ketophosphonate 3a (R ¼ tBu) was chosen as a simpler framework on which to investigate the spirocyclisation cascade (Scheme 2C). A brief scouting of precedented N-O reducing agents and electron transfer reagents resulted in no desired spirocyclisation product (Scheme 2C, entries [1][2][3][4][5]. 17 However, standard Pd/C hydrogenation conditions afforded the intermediate ringopened ketone in 31% yield (Scheme 2C, entry 1). Inspired by reported reductions of other weak bonds, 18 radical anions of aromatic compounds (e.g. naphthalene and 4,4 0 -di-tert-butylbiphenylide) were investigated (Scheme 2C, entries 6 and 7). Despite typically being employed in detosylation, dehalogenation and similar transformations, there are scattered reports of alkali metal naphthalenides effecting N-O cleavage reactions in related systems. 19 Pleasingly, sodium naphthalenide (NaNap) in THF afforded the desired spirocycle 4a 0 in 26% yield, following an alkaline work-up to trigger the HWE reaction (Scheme 2C, entry 7). Transient protection of the ketophosphonate moiety as its anion using one equivalent of LiHMDS and careful control of sodium naphthalenide equivalents (see ESI for details †) improved the yield to 63% (entry 8). Due to the inherent instability of the resulting secondary amines and their challenging purication, a di-tert-butyl dicarbonate (Boc 2 O) "quench" was successfully implemented at the end of the sequence allowing isolation of desired N-Boc-protected spirocycle 4a in 87% (entry 9).
With optimal conditions established, the modularity and the scope of the sequence with respect to the terminal alkyne, the substitution on the phosphonate, and the substitution of the pyrrolidine ring, were investigated. Variation in the terminal alkyne enabled introduction of a range of functionalities and carbon frameworks at the 5-position 20 of the isoxazoline (2a-i, Scheme 3A). Condensation of dimethyl methylphosphonate successfully afforded the corresponding ketophosphonates (3ai) in moderate to good yields (61-84%). From cycloadduct 2d, variation at the a-position of the ketophosphonate was introduced by the condensation of a-substituted phosphonates (Scheme 3B). Despite a more challenging addition, phenyl (3j), benzyl (3k), vinyl (3l), allyl (3m), thiomethyl (3n) and methyl (3o) groups were all introduced in synthetically viable yields. 21 To investigate how substitution of the pyrrolidine backbone would affect the reaction sequence, nitrones suitably substituted at the 3-, 4-and 5-positions 20 were synthesised via modied literature procedures (1b-d, ESI, Scheme S2-S4 †). Notably, Bull's directed C-H activation of L-proline was employed to arylate the 3-position of the pyrrolidine ring. 22 Utilising tert-butylacetylene as the alkyne, these substituted nitrones were submitted to the cycloaddition which proceeded in a diastereoselective fashion Scheme 3C,. For 4-and 5-substituted bicycles, the standard procedure was followed to afford ketophosphonates 3p and 3r successfully. In contrast, given the more hindered ester moiety, the 3substituted bicycle required longer reaction times and greater excess of phosphonate to improve the yield of ketophosphonate 3q.
With a broad library of ketophosphonates in hand, the scope of the spirocyclisation was investigated. Following successful spirocyclisation of 4a, substrates bearing alkyl side chains were initially submitted to the reaction conditions (Scheme 3A, 4a-c, 4g). Pleasingly the cyclised product was observed in all cases, with yields highest for bulkier alkyl side chains (e.g. tert-butyl, cyclohexyl). Aryl-substituted isoxazolines proceeded in appreciable, though somewhat reduced, yields (4d-f). This decrease in yield from alkyl substrates is rationalized by the electronaccepting properties of the isoxazoline moiety when conjugated with an aryl group, potentially leading to undesired side reactions. Unsuccessful results with strongly electronwithdrawing aryl substituents support this rationalisation. 23,24 Further varying the isoxazoline substitution, a free tertiary alcohol (3h) was successfully implemented by using an additional equivalent of base prior to NaNap addition. This enabled construction of 4h and structurally-complex spiroenone 4i derived from mestranol, an FDA-approved hormone therapy. 25 a-Substituted ketophosphonates delivered desired spiroenones 4j-4o (Scheme 3B) with the highest yields obtained for substrates having relatively small a-side chains. This demonstrates the method's applicability for the synthesis of highly congested cyclopentenones. Furthermore, incorporation of allyl and vinyl groups could facilitate downstream functionalisation at the a-position of the cyclopentenone.
Encouragingly, substitution of the pyrrolidine backbone had no detrimental effect on the efficacy of the spirocyclisation, with spiroenones 4p-4r all elaborated in synthetically practical yields (Scheme 3C). As such, the absolute stereochemical conguration of the quaternary carbon may be set to afford the spirocycles in single diastereomeric seriesa particularly pertinent result given the application of 1-azaspiro[4,4]nonane derivatives in asymmetric organocatalytic manifolds. 13 In order to demonstrate the synthetic utility of this threestep sequence, its application towards the synthesis of (AE)-cephalotaxine was investigated. The Cephalotaxus alkaloids remain valuable synthetic targets and novel strategies to this class of molecules are desirable. 26 It was envisaged that the use of trimethylsilylacetylene in the cycloaddition would introduce a substituent at the 5-position of the isoxazoline that could later be removed via protodesilylation, thereby intercepting synthetic intermediates reported by Mariano and Mori. 27 Deploying the standard conditions, spiroenone 4s was rapidly synthesized, thus conrming the facile incorporation of the desired silyl moiety (Scheme 4). Following this success, a modied acylative quench was investigated in which an acid chloride derived from homoveratric acid was employed to introduce the remaining core carbon atoms, via the pyrrolidine nitrogen. Pleasingly this modication was implemented with no issue, allowing key spirocycle 5 to be elaborated in 65% yield. This procedure could be carried out on a larger scale with only a modest depreciation in yield to 56%, affording 1.5 g of 5.
Due to low reactivity inhibiting the desired protodesilylation, 28 spirocycle 5 was reduced to the corresponding allylic alcohol, utilising Mariano's Meerwein-Ponndorf-Verley conditions, prior to amide reduction affording tertiary amine 6 in moderate yields. 27b Given the close structural similarity to Mori's Friedel-Cras cyclisation precursor and the instability of vinyl silanes to protodesilylation in acidic media, 6 was submitted to Mori's Friedel-Cras conditions. To our delight, 6 was indeed converted to the desilylated cyclisation product 7, thus completing the formal synthesis of (AE)-cephalotaxine. 27a However, this use of polyphosphoric acid provided inconsistent results and was operationally challenging. As such alternative acids and dehydrating reagents were investigated (see Table S4 †) and Eaton's reagent (7.7 wt% P 2 O 5 in MsOH) 29 emerged as the most efficient, affording 7 in 73% aer 5 h stirring at room temperature. This 7-step sequence from commercially available L-proline benzyl ester hydrochloride signicantly expedites synthetic access to Mori's tetracyclic intermediate 7. 27b,30 Conclusion A concise and divergent approach to 1-azaspiro[4,4]nonane derivatives, which features a novel sodium naphthalenidemediated reductive-HWE cascade reaction, has been developed. By variation of the alkyne, the phosphonate ester, and the pyrrolidine backbone, a large class of highly substituted and densely functionalised spirocyclic pyrrolidines was constructed, signicantly broadening the synthetic access to this chemical space. We believe the modularity of this sequence lends itself well to applications in medicinal chemistry and natural product synthesis alike. To this end, the utility of the reaction pathway was demonstrated by its successful application in a formal synthesis of (AE)-cephalotaxine, accessing Mori's tetracyclic intermediate in only 7 steps. Furthermore, this work demonstrates and opens up the underexploited paradigm of deploying isoxazolines as masked ketone equivalents in reductive cascade sequences.
Conflicts of interest
There are no conicts to declare. | 3,033.8 | 2020-08-07T00:00:00.000 | [
"Chemistry",
"Biology"
] |
Wound Healing Modulation through the Local Application of Powder Collagen-Derived Treatments in an Excisional Cutaneous Murine Model
Wound healing includes dynamic processes grouped into three overlapping phases: inflammatory, proliferative, and maturation/remodeling. Collagen is a critical component of a healing wound and, due to its properties, is of great interest in regenerative medicine. This preclinical study was designed to compare the effects of a new collagen-based hydrolysate powder on wound repair to a commercial non-hydrolysate product, in a murine model of cutaneous healing. Circular excisional defects were created on the dorsal skin of Wistar rats (n = 36). Three study groups were established according to the treatment administered. Animals were euthanized after 7 and 18 days. Morphometric and morphological studies were performed to evaluate the healing process. The new collagen treatment led to the smallest open wound area throughout most of the study. After seven days, wound morphometry, contraction, and epithelialization were similar in all groups. Treated animals showed reduced granulation tissue formation and fewer inflammatory cells, and induction of vasculature with respect to untreated animals. After 18 days, animals treated with the new collagen treatment showed accelerated wound closure, significantly increased epithelialization, and more organized repair tissue. Our findings suggest that the new collagen treatment, compared to the untreated control group, produces significantly faster wound closure and, at the same time, promotes a slight progression of the reparative process compared with the rest of the groups.
Introduction
Cutaneous wound healing is an important physiological process to restore the skin barrier after trauma. This process includes complex and dynamic pathways that are classified into three main consecutive but overlapping stages: the inflammatory, proliferative, and maturation/remodeling phases [1]. This involves the simultaneous actuation of soluble mediators, blood cells, the extracellular matrix, and epithelial and parenchymal cells [2].
The inflammatory phase occurs immediately after tissue damage; the hemorrhage triggers the coagulation cascade to restore hemostasis and prevent further blood loss. Hemostasis begins with the formation of a platelet plug, followed by a provisional fibrin matrix that will favor the migration of inflammatory cells. In addition to forming the clot, the thrombocytes secrete various mediators, growth factors and cytokines, which, together easily absorbed by the human body, allowing for immediate signaling. A study carried out in dogs [21] demonstrated that hydrolyzed collagen powder enhanced the percentage of epithelialization after seven days of treatment compared to the control.
Other commercial collagen-based products, derived from bovine cartilage in the form of powder, have been shown to be effective in the treatment of wounds by secondary intent such as pressure ulcers, venous stasis ulcers, and diabetic ulcers, as well as second-degree burns, post-radiation dermatitis, and wounds unresponsive to conventional treatments [22].
Researchers and companies have developed and marketed a wide variety of products presenting different compositions, but all focused on promoting wound healing.
Taking into account all these factors, the present preclinical study was designed to compare the effects on wound repair of two collagen-based powder products, a new hydrolyzed bovine dermal collagen powder (not yet on the market) and a non-hydrolyzed commercial collagen derived from bovine cartilage, in a murine model of cutaneous healing. The reparative process was assessed with respect to the evolution over time of the defect and inflammatory response, and the formation and maturation of new tissue.
Experimental Animals and Ethics
Female Wistar rats (n = 36) weighing around 250 g were used. The care of the animals used in this study and the experimental procedures were in accordance with current protocols on the use of animals in experimentation (European Directive 2010/63/EU, European Convention of the Council of Europe ETS123 and Spanish Royal Decree 53/2013), and the study was approved by the Animal Experimentation Ethics Committee of Universidad de Alcalá, Spain. In this study, a rat model of wound healing was developed to evaluate the effect of different collagen treatments on the tissue reparative process.
Animals were individually housed under controlled temperature and illumination conditions, with a complete diet (Harlan Laboratories, Houston, TX, USA) and water ad libitum.
Study Groups
The animals were randomly distributed into three study groups (n = 12), according to the treatment administrated: Catrix ® is a bovine cartilage collagen powder. The particles are composed of natural macromolecules (particle size 35 µm) of collagen arranged in the form of a threedimensional network.
This is a hydrolyzed bovine dermal collagen powder, mainly containing low-molecularweight (3 KDa) type I peptides. These collagen peptides contain a unique composition and a high number of essential amino acids, such as proline, hydroxyproline, and glycine.
Surgical Technique and Sample Collection
Excisional wounds on the dorsal surface in rats is one of the most commonly used and standardized wound healing models [23,24]. These wounds are generated by the surgical removal of all skin layers (epidermis, dermis, and subcutaneous tissue) from the animal. This model allows the investigation of the inflammation, granulation tissue formation, reepithelialization, angiogenesis, and remodeling process.
The animals were anesthetized in an inhalation chamber with a mixture of isoflurane (Forane ® ; AbbVieS.L.U., Madrid, Spain) at 4-5% and oxygen at a flow rate of 0.6-0.7 L/min. During the surgery, anesthesia was maintained by a face mask connected to a calibrated vaporizer, providing an inhalation dose of isoflurane of 2.5-3%. The fur on the backs of the Biomedicines 2022, 10, 960 4 of 15 rats was shaved with an electric razor, and the skin was disinfected with iodopovidone. After anesthesia, a circular (1.5 cm diameter) full-thickness defect, previously marked by a calibrated metallic punch and centered approximately 0.5 cm caudal to the scapulae, was created ( Figure 1). The skin was removed with a scalpel and surgical scissors, following the previously marked line. The animals did not require postoperative analgesia. To minimize stress caused by the surgical procedure and individual housing, the animals were in visual, auditory, and olfactory contact with other rats, and environmental enrichment was provided every week.
gical removal of all skin layers (epidermis, dermis, and subcutaneous tissue) from the animal. This model allows the investigation of the inflammation, granulation tissue formation, reepithelialization, angiogenesis, and remodeling process.
The animals were anesthetized in an inhalation chamber with a mixture of isoflurane (Forane ® ; AbbVieS.L.U., Madrid, Spain) at 4-5% and oxygen at a flow rate of 0.6-0.7 L/min. During the surgery, anesthesia was maintained by a face mask connected to a calibrated vaporizer, providing an inhalation dose of isoflurane of 2.5-3%. The fur on the backs of the rats was shaved with an electric razor, and the skin was disinfected with iodopovidone. After anesthesia, a circular (1.5 cm diameter) full-thickness defect, previously marked by a calibrated metallic punch and centered approximately 0.5 cm caudal to the scapulae, was created ( Figure 1). The skin was removed with a scalpel and surgical scissors, following the previously marked line. The animals did not require postoperative analgesia. To minimize stress caused by the surgical procedure and individual housing, the animals were in visual, auditory, and olfactory contact with other rats, and environmental enrichment was provided every week. Once the wound was created, the corresponding treatment was administered to all groups except the control group. Treatments were applied at 0, 3, 5, 7 and 9 days, covering the area with wound-protective devices ( Figure 1). The scab covering the wound was removed before each treatment application to ensure its accessibility in the open area, and no debridement of the wound was performed in any case. The same procedure was carried out in the control group.
At the end of the established study times, the animals were euthanized in a CO2 inhalation chamber. The scar tissue was photographed for morphometric evaluation and subsequently excised. The tissue samples were sectioned into two halves transverse to the body axis for histological processing.
Morphometric Studies of Wound Evolution
Following surgery, animals were regularly weighed and monitored daily to evaluate the evolution of skin scarring. Immediately following surgery, during the study and at the end of the established study times, measurements of the defect were performed. Once the wound was created, the corresponding treatment was administered to all groups except the control group. Treatments were applied at 0, 3, 5, 7 and 9 days, covering the area with wound-protective devices ( Figure 1). The scab covering the wound was removed before each treatment application to ensure its accessibility in the open area, and no debridement of the wound was performed in any case. The same procedure was carried out in the control group.
At the end of the established study times, the animals were euthanized in a CO 2 inhalation chamber. The scar tissue was photographed for morphometric evaluation and subsequently excised. The tissue samples were sectioned into two halves transverse to the body axis for histological processing.
Morphometric Studies of Wound Evolution
Following surgery, animals were regularly weighed and monitored daily to evaluate the evolution of skin scarring. Immediately following surgery, during the study and at the end of the established study times, measurements of the defect were performed.
To evaluate the wound closure after the defect was performed and at the time of euthanasia, cenital photographs, from a plane just above the experimental animals, of the defects and scar tissue were taken with a ruler for calibration. For this assessment, the initial area of the defect caused at the time of surgery (diameter: 1.5 cm), the area of the defect that remained unclosed after 7 and 18 days, and the contraction area were measured using ImageJ software (National Institutes of Health, Bethesda, MD, USA) (https://imagej.nih.gov/ij). These measurements were used to calculate the relative values of the processes of wound closure, epithelialization, and contraction ( Figure 2). euthanasia, cenital photographs, from a plane just above the experimental animals, of the defects and scar tissue were taken with a ruler for calibration. For this assessment, the initial area of the defect caused at the time of surgery (diameter: 1.5 cm), the area of the defect that remained unclosed after 7 and 18 days, and the contraction area were measured using ImageJ software (National Institutes of Health, Bethesda, MD, USA) (https://imagej.nih.gov/ij). These measurements were used to calculate the relative values of the processes of wound closure, epithelialization, and contraction ( Figure 2).
Figure 2.
Morphometrical analysis of the wound. Photographs were taken at different study times and at the time of euthanasia. Measurements of initial, non-contracted, and non-closed areas of each animal, using an image analysis program, were used to calculate the areas covered by the epithelialization process (non-contracted area-open area) and contraction process (initial area-non-contracted area) as well as the relative contribution of these process to the wound closure, expressed as a percentage over the initial area of the defect.
Morphological Studies
For light microscopy analyses, the tissue samples obtained were fixed with solution F13 (60% ethanol, 20% methanol, 7% polyethylene glycol, and 13% distilled water) and paraffin-embedded. Tissue blocks were cut with a Microm HM-325 microtome (Microm International GmbH, Walldorf, Germany) into 5-μm-thick sections and placed onto slides coated with 0.01% polylysine (Sigma-Aldrich, St. Louis, MO, USA). Finally, the sections were dewaxed, rehydrated, and stained with hematoxylin-eosin and Masson's trichrome (Goldner-Gabe variant), and Sirius red. Samples were examined under a Zeiss Axiophot light microscope (Carl Zeiss, Oberkochen, Germany). For each staining, six sections from the central zone of wound and three sections of the marginal edge were analyzed.
Both hematoxylin-eosin and Masson's trichrome staining allowed for the general observation of the repairing tissue, granulation tissue, distribution of collagen inflammatory cells, and neoformed vessels. Sirius red staining was utilized to evaluate the organization and maturation of collagen fibers in the repairing tissue. Despite the lack of complete specificity, type I collagen (mature) appears as a reddish-orange stain, while type III collagen (immature) takes on a yellowish-green stain when observed under the polarized light microscope [25]. Morphological examination was performed by two independent histologists in a blinded fashion. Photographs were taken at different study times and at the time of euthanasia. Measurements of initial, non-contracted, and non-closed areas of each animal, using an image analysis program, were used to calculate the areas covered by the epithelialization process (non-contracted area-open area) and contraction process (initial area-noncontracted area) as well as the relative contribution of these process to the wound closure, expressed as a percentage over the initial area of the defect.
Morphological Studies
For light microscopy analyses, the tissue samples obtained were fixed with solution F13 (60% ethanol, 20% methanol, 7% polyethylene glycol, and 13% distilled water) and paraffin-embedded. Tissue blocks were cut with a Microm HM-325 microtome (Microm International GmbH, Walldorf, Germany) into 5-µm-thick sections and placed onto slides coated with 0.01% polylysine (Sigma-Aldrich, St. Louis, MO, USA). Finally, the sections were dewaxed, rehydrated, and stained with hematoxylin-eosin and Masson's trichrome (Goldner-Gabe variant), and Sirius red. Samples were examined under a Zeiss Axiophot light microscope (Carl Zeiss, Oberkochen, Germany). For each staining, six sections from the central zone of wound and three sections of the marginal edge were analyzed.
Both hematoxylin-eosin and Masson's trichrome staining allowed for the general observation of the repairing tissue, granulation tissue, distribution of collagen inflammatory cells, and neoformed vessels. Sirius red staining was utilized to evaluate the organization and maturation of collagen fibers in the repairing tissue. Despite the lack of complete specificity, type I collagen (mature) appears as a reddish-orange stain, while type III collagen (immature) takes on a yellowish-green stain when observed under the polarized light microscope [25]. Morphological examination was performed by two independent histologists in a blinded fashion.
Statistical Analysis
The data were expressed as the mean ± standard deviation. To compare different study groups, the Mann-Whitney U test was used. All statistical tests were performed using the software package GraphPad Prism 5 (GraphPad Software Inc., San Diego, CA, USA). Significance was set at p < 0.05.
Postoperative Follow-Up
All animals completed the assigned study time; only one from the control group was excluded due to evidence of self-harm near the area of the defect. This animal was excluded from subsequent macro and microscopic assessments. On day 7, animals from all the groups exhibited the expected initial weight loss, although this was not significant with respect to the control group. On day 18 post-surgery, most of the groups displayed a weight gain.
Wound Closure Evolution
For this assessment, the initial mean wound surface area at the time of surgery (diameter: 1.5 cm) was slightly increased (230.9 ± 28.01 mm 2 ) compared to the theoretical area (176.7 mm 2 ) (Figure 3).
Biomedicines 2022, 10, x FOR PEER REVIEW 7 of 16 Figure 3. Evolution of the unclosed area of the defect over time. On day 3 post-surgery, statistical differences were observed between the control and T2 treatment (* p < 0.05). At days 5 and 7, no significant differences were detected between groups. At day 18, statistical differences were observed between the control and T2 groups (* p < 0.05).
Wound Contraction and Epithelialization
Seven days after surgery, the open surface had decreased with respect to the initial area, although this area was still considerable (Figure 4). The analysis of the evolution of the wound closure process revealed a greater proportional contribution of contraction over epithelialization (Figure 5a). On day 3 post-surgery, statistical differences were observed between the control and T2 treatment (* p < 0.05). At days 5 and 7, no significant differences were detected between groups. At day 18, statistical differences were observed between the control and T2 groups (* p < 0.05).
On day 3 post-surgery, animals from the control group showed a higher wound distension of area defects than the T1 and T2 groups, which showed a reduction in the open area of the wound, this being more important in T2. The control group showed a statistically significant difference (255.71 ± 69.33 mm 2 ) compared to the T2 group (182.30 ± 25.29 mm 2 ); however, no differences were shown versus the T1 group (210.60 ± 62.81 mm 2 ).
Five days after surgery, the wound distension was maintained (219.09 ± 59.04 mm 2 ) in the control group, and no significant differences were detected between the groups. On day 7, the open area had decreased and showed similar values in all groups, with no significant differences between them.
On day 18 post-surgery, all open areas from the different groups showed considerably reduced values, in all cases less than 10 mm 2 . At this time point, significant differences were also found between the control and T2 groups (5.38 ± 4.37 and 0.55 ± 0.66 mm 2 , respectively) ( Figure 3).
Wound Contraction and Epithelialization
Seven days after surgery, the open surface had decreased with respect to the initial area, although this area was still considerable (Figure 4). The analysis of the evolution of Biomedicines 2022, 10, 960 7 of 15 the wound closure process revealed a greater proportional contribution of contraction over epithelialization (Figure 5a). significant differences were detected between groups. At day 18, statistical differences were observed between the control and T2 groups (* p < 0.05).
Wound Contraction and Epithelialization
Seven days after surgery, the open surface had decreased with respect to the initial area, although this area was still considerable (Figure 4). The analysis of the evolution of the wound closure process revealed a greater proportional contribution of contraction over epithelialization (Figure 5a). Graphical representation of the percentage of wound closure expressed as mean % of the initial wound area covered ± SD by the contraction and epithelialization processes. On day 7 (a), no significant differences were observed in the percentage of epithelialization and contraction between any of the study groups evaluated. Regarding the contraction process, on day 18 (b), no significant differences were observed between groups; in the case of the epithelialization process, statistical differences emerged between the control and T2 group (* p < 0.05).
The highest contraction values were observed in the control group (42.91% vs. T1: 29.96% and T2: 39.30%) although no significant differences were observed.
Regarding the epithelialization process, no significant difference was observed between groups; mean values did not reach 25% of the initial area in all cases (Figure 5a). However, it must be taken into account that these values are in percentages and, in the groups in which less contraction occurred, there would be more surface to epithelialize.
Eighteen days after surgery and treatment, the open area was diminished with respect to after seven days. Macroscopically, in the majority of the groups, a centripetal pattern of closure was observed; the wound tended to close preferentially in the transversal axis ( Figure 4).
The analysis of the evolution of the wound closure process revealed, contrary to after seven days, higher epithelialization than contraction values. The contribution of shrinkage was very important in all study groups, with mean values above 65%, reaching almost 85% of the initial area, although no significant differences were detected between the var- 18. Graphical representation of the percentage of wound closure expressed as mean % of the initial wound area covered ± SD by the contraction and epithelialization processes. On day 7 (a), no significant differences were observed in the percentage of epithelialization and contraction between any of the study groups evaluated. Regarding the contraction process, on day 18 (b), no significant differences were observed between groups; in the case of the epithelialization process, statistical differences emerged between the control and T2 group (* p < 0.05).
The highest contraction values were observed in the control group (42.91% vs. T1: 29.96% and T2: 39.30%) although no significant differences were observed.
Regarding the epithelialization process, no significant difference was observed between groups; mean values did not reach 25% of the initial area in all cases (Figure 5a). However, it must be taken into account that these values are in percentages and, in the groups in which less contraction occurred, there would be more surface to epithelialize.
Eighteen days after surgery and treatment, the open area was diminished with respect to after seven days. Macroscopically, in the majority of the groups, a centripetal pattern of closure was observed; the wound tended to close preferentially in the transversal axis (Figure 4).
The analysis of the evolution of the wound closure process revealed, contrary to after seven days, higher epithelialization than contraction values. The contribution of shrinkage was very important in all study groups, with mean values above 65%, reaching almost 85% of the initial area, although no significant differences were detected between the various groups.
Evaluation of the Reparative Process
Seven days after the intervention, all the animals showed a large nonepithelialized defect area and an inflammatory exudate covering the superficial zones. In general, in the wound area, granulation tissue with numerous blood vessels and signs of evident inflammatory and proliferative phases of reparative process were observed in panoramic histological images ( Figure 6). The control group showed the greatest development and thickness of this tissue (Figure 6a). In groups T1 and T2, reduced granulation tissue formation with homogeneous thickness, fewer inflammatory cells, and induction of vasculature in the neoformed tissue were observed (Figure 6b,c). At a higher magnification, tissue edges adjacent to the wound showed an active and thickened epithelium with signs of cell proliferation and migration, covering neoformed granulation tissue and advancing between this and inflammatory exudate, compared to nondamaged epithelium (Figure 7). In these sections, it was possible to observe the border between the undamaged area showing dense dermal tissue and hair follicles, and the damage area with high inflammation and low collagen deposition (Figure 7). At a higher magnification, tissue edges adjacent to the wound showed an active and thickened epithelium with signs of cell proliferation and migration, covering neoformed granulation tissue and advancing between this and inflammatory exudate, compared to nondamaged epithelium (Figure 7). In these sections, it was possible to observe the border between the undamaged area showing dense dermal tissue and hair follicles, and the damage area with high inflammation and low collagen deposition (Figure 7). Animals from the control group showed accumulation of inflammatory cells and myofibroblasts in a loose and poorly organized extracellular matrix with evident immature collagen deposition, revealed by the yellow color of the Sirius Red stain (Figure 7). In animals from the T1 group, moderate signs of inflammation were observed in the superficial layer of neoformed tissue. In five animals, some remains of the treatment immersed in the repair tissue were observed, around which inflammatory-type cells were also located (Figure 7). Repair tissue appeared more organized and the synthesis of new matrix with collagen type I deposition was observed in deeper areas. The superficial layer of granulation tissue in the T2 group showed a slightly increased number of inflammatory cells. Beneath this layer, the repair tissue exhibited good organization and a large deposit of immature collagen (type III) (Figure 7).
Eighteen days after surgery, samples exhibited a significant approximation of the wound edges due to tissue contraction, as well as features of the initial stages of maturation and remodeling. Almost all the animals showed completely epithelialized wound areas; however, the control group displayed a higher mean open surface wound area (Figure 8). Panoramic histological images also showed some open area in the T1 group ( Figure 8b). Animals from the control group showed accumulation of inflammatory cells and myofibroblasts in a loose and poorly organized extracellular matrix with evident immature collagen deposition, revealed by the yellow color of the Sirius Red stain (Figure 7). In animals from the T1 group, moderate signs of inflammation were observed in the superficial layer of neoformed tissue. In five animals, some remains of the treatment immersed in the repair tissue were observed, around which inflammatory-type cells were also located (Figure 7). Repair tissue appeared more organized and the synthesis of new matrix with collagen type I deposition was observed in deeper areas. The superficial layer of granulation tissue in the T2 group showed a slightly increased number of inflammatory cells. Beneath this layer, the repair tissue exhibited good organization and a large deposit of immature collagen (type III) (Figure 7).
Eighteen days after surgery, samples exhibited a significant approximation of the wound edges due to tissue contraction, as well as features of the initial stages of maturation and remodeling. Almost all the animals showed completely epithelialized wound areas; however, the control group displayed a higher mean open surface wound area (Figure 8). Panoramic histological images also showed some open area in the T1 group (Figure 8b). In general, all study groups showed an advanced stage of inflammatory remission as well as progressive tissue maturation with reduced cellularity versus after seven days due to vascular retraction, decreased inflammation, and a low number of fibroblasts. Repaired tissue appeared more organized than after seven days, showing a significantly higher density of collagen with a more homogeneous distribution in the neoformed tissue. In some zones, fibroblasts and a fibrillar matrix were observed to be laid down parallel to the wound area, sealing the wound edges. In the tissue edges, adjacent to the uninjured area, enhanced collagenization of the neoformed connective tissue was observed ( Figures 8 and 9).
The control group presented a thicker neoepithelium than unwounded zones, where nontypical epidermal ridges invading the dermis were observed (Figure 9). The presence of inflammatory zones could be observed in the central portion of the wound that had not yet healed, where accumulations of inflammatory cells were present in the superficial and basal bands of the neodermis (Figures 8a and 9). In general, all study groups showed an advanced stage of inflammatory remission as well as progressive tissue maturation with reduced cellularity versus after seven days due to vascular retraction, decreased inflammation, and a low number of fibroblasts. Repaired tissue appeared more organized than after seven days, showing a significantly higher density of collagen with a more homogeneous distribution in the neoformed tissue. In some zones, fibroblasts and a fibrillar matrix were observed to be laid down parallel to the wound area, sealing the wound edges. In the tissue edges, adjacent to the uninjured area, enhanced collagenization of the neoformed connective tissue was observed (Figures 8 and 9). T1 was the group where epithelial development was the most delayed. Some nonepithelialized zones in the central region of the defect or with the absence of any epidermal layer could be observed; however, the dermis exhibited good collagenization and some inflammatory cells that looked similar to those of previous groups (Figures 8b and 9).
The greatest wound closure and epithelialization areas were observed in the T2 group. The epidermis was completely differentiated and stratified, including stratum corneum and active epithelial desquamation (Figure 9). The development of typical epidermal ridges alternating with well-collagenized dermal papillae could be observed. The dermis was dense and rich in collagen type I, and exhibited organized connective tissue with abundant fibroblasts and scarce inflammatory components (Figures 8c and 9).
Discussion
The main objective of the wide range of devices or treatments developed to promote wound healing is that this physiological process promotes wound closure as soon as possible and results in a functionally and aesthetically satisfactory scar. For this purpose, the organism must be able to reduce cutaneous discontinuity by generating new granulation tissue in the wounded area. An epithelial barrier must be able to develop on this tissue, The control group presented a thicker neoepithelium than unwounded zones, where nontypical epidermal ridges invading the dermis were observed (Figure 9). The presence of inflammatory zones could be observed in the central portion of the wound that had not yet healed, where accumulations of inflammatory cells were present in the superficial and basal bands of the neodermis (Figures 8a and 9).
T1 was the group where epithelial development was the most delayed. Some nonepithelialized zones in the central region of the defect or with the absence of any epidermal layer could be observed; however, the dermis exhibited good collagenization and some inflammatory cells that looked similar to those of previous groups (Figures 8b and 9).
The greatest wound closure and epithelialization areas were observed in the T2 group. The epidermis was completely differentiated and stratified, including stratum corneum and active epithelial desquamation (Figure 9). The development of typical epidermal ridges alternating with well-collagenized dermal papillae could be observed. The dermis was dense and rich in collagen type I, and exhibited organized connective tissue with abundant fibroblasts and scarce inflammatory components (Figures 8c and 9).
Discussion
The main objective of the wide range of devices or treatments developed to promote wound healing is that this physiological process promotes wound closure as soon as possible and results in a functionally and aesthetically satisfactory scar. For this purpose, the organism must be able to reduce cutaneous discontinuity by generating new granulation tissue in the wounded area. An epithelial barrier must be able to develop on this tissue, reaching continuity and avoiding re-openings through the formation of a resistant extracellular matrix. This results from cell proliferation, synthesis, and maturation of the extracellular matrix [3].
In general, the wound repair process occurs in almost all tissues after exposure to a destructive stimulus. It must be taken into consideration that, in this study, the physiological healing process was not compromised in any way. All the animals used were healthy animals without pathologies that might have affected healing; therefore, the model showed a standard healing pattern, with characteristics of the three phases of the skin repair process (inflammation, proliferation, and remodeling/maturation) appearing in a progressive and orderly manner until the achievement of newly formed connective tissue with reduced cellularity and a dense and organized extracellular matrix. With these requirements in mind, the treatments' ability to improve wound closure were evaluated.
Although the study and knowledge of wound healing in humans have advanced considerably, the difficulties and limitations make the use of experimental models necessary for research in this field. For decades, the development of multiple animal models has been used, including dorsal wounds in small rodents, rabbit ear defects or porcine models [26,27]. However, the most used models for the study of skin repair are based on rats and mice due to their easy handling and maintenance. It must be taken into account that the use of these animals has some limitations due to differences in healing mechanisms compared to humans. One of them is the greater proportional contribution of the contraction process over epithelialization that occurs in rodents [27]. Excisional wounds are one of the most commonly used wound healing models because they allow the investigation of inflammation, granulation tissue formation, re-epithelialization, angiogenesis and the remodeling process [28].
Based on our group's previous experience [29], we developed a murine model to evaluate both the initial phases of healing (7 days) and tissue remodeling (18 days) by using an excisional model. In the present experimental study, evidence for collagen powder as an adjunctive therapy in wound healing has been provided. We used one collagenbased product available on the market and obtained from the bovine tracheal cartilage. Catrix ® was approved by the FDA in 1998, and is frequently used in wound care. Several studies have indicated that this collagen powder could be an optimal agent to stimulate the wound healing process [22]. Clinical evidence from a prospective multicenter study treating resistant pressure ulcers showed a statistically significant improvement in complete healing using this product compared to untreated controls [30].
For these reasons, in this study we chose Catrix ® as a reference product to assess the performance of a new collagen-based product in a rat wound healing model.
In the classic stages of wound repair, inflammatory phase occurs immediately after tissue damage with the development of a platelet plug and a provisional fibrin scaffold that promotes migration of inflammatory cells [3,4]. New tissue formation occurs 2-10 days after injury and is characterized by cellular proliferation and the migration of different cell types [3]. The first event is the migration of keratinocytes to the injured dermis, as we observed in our study groups after seven days. Tissue edges adjacent to the wound showed an active and thickened epithelium, with signs of cell proliferation and migration; neoformed granulation tissue showed signs of inflammatory and proliferative phases of the reparative process and important angiogenesis. The most important positive regulators of angiogenesis are vascular endothelial growth factor A (VEGFA) and fibroblast growth factor 2 (FGF2; also known as bFGF) [4].
The results of our study revealed, after seven days, some differences between the groups. In our T1 and T2 groups, reduced granulation tissue formation with homogeneous thickness, fewer inflammatory cells, and the induction of vasculature in the neoformed tissue were observed with respect to untreated animals, which exhibited more inflammation and less organized repair tissue. Some other findings indicate that the application of pure undiluted bio-collagen extracted from bovine skin dramatically improved wound healing in rats after seven days in terms of collagen production, wound filling, and the migration and differentiation of keratinocytes, being three times more effective than the commercial Catrix ® [31].
In the later part of this stage, fibroblasts, which are attracted from the edge of the wound, are stimulated by macrophages, and some differentiate into myofibroblasts [32], contractile cells that, over time, bring the edges of a wound together and are responsible for the process of wound contraction. Some authors, trying to minimize the important process of wound contraction that occurs in rodents to replicate human physiology, have described models of wound healing utilizing wound splinting, verifying a greater deposition of granulation tissue and not being affected by the rate of re-epithelialization [33]. In our study, the evolution of the wound closure process revealed a greater proportional contribution of contraction over epithelialization. The highest contraction values in our study were observed in the control group compared to the rest of the groups, although no significant differences were observed between them.
Fibroblasts and myofibroblasts interact and produce an extracellular matrix, mainly in the form of collagen, which ultimately forms the bulk of the mature scar [34]. Evident immature collagen type III deposition was observed in the repair tissue in our model as the main component of the fibrillar matrix.
The third stage of wound repair-remodeling-begins 2-3 weeks after an injury and lasts for a year or more. Eighteen days after surgery, samples exhibited a significant approximation of the wound edges due to tissue contraction. At the end of the study, all groups showed similar macroscopic closures, except animals treated with the new collagen (T2 group), which showed accelerated wound closure compared to the untreated group. Despite making a limited contribution to total closure in the present model, epithelialization showed an improvement in this group of animals.
The effects of bovine collagen-derived powder treatments, similar to those applied in our T2 group, were evaluated on the healing of open wounds in healthy dogs [21] and, according to our results, improved wound epithelialization.
During this last stage of healing, all the processes that had been activated after injury progressively decrease their activity until they are finished. Most of the macrophages and myofibroblasts undergo apoptosis or exit from the wound, leaving a mass that contains few cells [35].
The morphological observations allowed us to verify that repaired tissue appeared more organized than after 7 days, showing higher density of mature collagen type I. Histological assessment of the group of animals treated with hydrolyzed collagen (T2 group) showed a newly stratified epidermis, with dense and organized connective tissue presenting a great number of fibroblasts and few inflammatory cells.
Other preclinical trials using modified collagen gel treatments versus untreated wounds have shown acute inflammatory cell and fibroblast recruitment, collagen I deposition, increased endothelial cells, upregulated vascular endothelial growth factor, and improved blood flow [36,37]. In accordance with our results, clinical studies [19] report that collagen-treated wounds displayed increased neoangiogenesis, less inflammatory granulation tissue, and more organized and well-formed collagen bundles compared to the primary closure of punch biopsies with nonabsorbable sutures.
Conclusions
Taking into account all the morphometric and histological results of this research, we can conclude that the new collagen treatment, compared to the untreated control group, produces significantly faster wound closure and, at the same time, promotes a slight progression of the reparative process, showing more mature and organized-and, therefore, higher-quality-repair tissue, compared with the rest of the groups.
However, additional studies are needed to confirm the effects of this collagen-based product in terms of a compromised healing process. | 8,435 | 2022-04-21T00:00:00.000 | [
"Biology",
"Medicine"
] |
Dosimetry for FLASH Radiotherapy: A Review of Tools and the Role of Radioluminescence and Cherenkov Emission
While spatial dose conformity delivered to a target volume has been pushed to its practical limits with advanced treatment planning and delivery, investigations in novel temporal dose delivery are unfolding new mechanisms. Recent advances in ultra-high dose radiotherapy, abbreviated as FLASH, indicate the potential for reduction in healthy tissue damage while preserving tumor control. FLASH therapy relies on very high dose rate of>40Gy/sec with sub-second temporal beam modulation, taking a seemingly opposite direction from the conventional paradigm of fractionated therapy. FLASH brings unique challenges to dosimetry, beam control, and verification, as well as complexity of radiobiological effective dose through altered tissue response. In this review, we compare the dosimetric methods capable of operating under high dose rate environments. Due to excellent dose-rate independence, superior spatial (~<1 mm) and temporal (~ns) resolution achievable with Cherenkov and scintillation-based detectors, we show that luminescent detectors have a key role to play in the development of FLASH-RT, as the field rapidly progresses towards clinical adaptation. Additionally, we show that the unique ability of certain luminescence-based methods to provide tumor oxygenation maps in real-time with submillimeter resolution can elucidate the radiobiological mechanisms behind the FLASH effect. In particular, such techniques will be crucial for understanding the role of oxygen in mediating the FLASH effect.
Introduction:
) a) Typical dose-response curves in radiotherapy; maximum tumor control with minimal normal tissue complication is desired. b) Over last the few decades this has been possible due to spatial modulation of the beam, leading to an increasing use of small fields (<10 mm). Unfortunately, this led to complex dosimetric issues unique to small fields that standard dosimeters were not suitable for. The x-axis in the in b) denotes the time-line of major advancements that have happened in the field of external beam radiotherapy. Left scale denotes how typical field sizes have varied with these advancements and right scale shows the number of peer-reviewed publications per year based on a PubMed search of the phrase "Small Field Dosimetry". c) The effect of ultra-high dose rates on cell survival curves. Data adapted from Hornsey et al 1 , where decreased cell killing was seen in mouse intestine at high dose-rates versus low dose-rates. The decrease in cell-killing was attributed to rapid depletion of oxygen, which is required to 'fix' DNA damage, at high dose-rates.
Decades of research in radiation therapy has been focused towards increasing the therapeutic ratio 2 (Figure 1a), and many techniques such as inverse treatment planning optimization, intensity modulated radiation therapy, or on-board imaging guidance have achieved this goal primarily via higher spatial modulation of the primary beam ( Figure 1b). Temporal modulation of dose has also been widely exploited in its relation to repair of sublethal damage and the cell cycle, and this has been widely adopted in almost all clinical treatments through fractionated treatment plans. Yet, the role of higher dose-rate effects has been largely undeveloped in clinical treatment 3 . Interesting early studies 4-8 (1960-1980) had observed peculiar effects of reduced cell killing at ultra-high dose-rates, such as the study by Hornsey et al 1 , who illustrated reduced cell killing in mouse intestine at high dose rates( Figure 1c). However, a recent study (2014) by Favaudon et al 9 has sparked explosive interest in ultra-high dose rates again. Contrary to conventional radiotherapy techniques, which employ mean dose-rates of ~0.03 Gy/s with doses of ~2 Gy delivered over 10-30 fractions, the authors used an ultra-high mean-dose rate of 40 Gy/s with total irradiation time <500 ms to achieve an improved differential response between tumor and normal tissue. They reported less normal tissue damage with high dose-rate irradiation when compared to 'conventional' radiotherapy conditions, while observing similar anti-tumor response in both modalities. The authors termed this phenomenon as the 'FLASH' effect. Multiple groups have now replicated the FLASH effect in different murine organs 10,11 and in superficial treatments in animal models such as mini-pigs, cats 12 and zebrafish 13 . It has been shown that the FLASH effect can be triggered using electrons 9,14,15 , xrays 16,17 and protons 18,19 . Recently, the first human patient was treated with FLASH-RT 18 , with promising clearance of the lymphoma lesion with lower skin toxicity than has been seen in previous irradiations. The field of FLASH-RT has experienced wide interest and growth and could be a critical area of development for better normal tissue sparing in radiotherapy.
The calibration and quality assurance tools for dosimetry also need to be adapted accordingly to keep up with the ever-changing nature of radiation dose delivery Historically, spatial modulation in dose delivery has led to increased use of small radiation fields or beamlets for which accurate dosimetric characterization was found to be non-trivial. These problems associated with small fields are well documented 20 . This warranted a need for high-resolution detectors which most vendors now typically provide for the measurement of cumulative dose distributions 21 . With the emergence of FLASH-RT and other promising high dose-rate modalities, such as Microbeam Radiation Therapy (MRT) 22 and Synchrotron stereotactic radiotherapy 23 , it is expected that new dosimetric challenges will arise. Spatiotemporal dosimetry for small radiation beamlets delivered dynamically under high dose-rate conditions can be difficult. For successful translation of FLASH-RT to a clinical setting, dosimetry must be performed accurately and rigorously keeping in mind limitations of various detectors at high dose-rates and non-standard nature of the various FLASH irradiation platforms. The issue of dosimetric uncertainty in preclinical radiobiological studies and its effect on reproducibility and eventual translation to clinics has been highlighted by multiple authors 24,25 . The National Institute of Standards and Technology (NIST) has recommendations regarding accurate measurement and reporting of dose in preclinical radiobiological studies 26 . In a multi-institution audit of irradiator output 25 , it was found that only one facility was able to deliver a dose within 5% of the prescribed limit; other facilities had errors ranging from 12% to 42%. The issue is primarily because of the non-standard nature of irradiation platforms used in pre-clinical studies and a lack of protocols and guidance. To tackle this issue, the American Association of Physicists in Medicine (AAPM) has initiated Task Group No. 319 "Guidelines for accurate dosimetry in radiation biology experiments''. One of the major aims of the task group is to standardize dosimetry and review uncertainties associated with non-clinical units used in preclinical radiobiological studies. Therefore, this review is motivated by the need to assess the various different uncertainties associated with performing accurate dosimetry under FLASH and high dose-rate irradiation conditions.
To this end, dosimetric problems unique to high dose-rate environment such as FLASH will be identified in this review. Common radiation dosimeters based on different physical principles, mainly, luminescent, charged based and chemical detectors will be discussed in light of the dosimetric issues identified earlier. We hypothesize that based on the underlying physical mechanisms, luminescent based detectors can be used to perform accurate, realtime dosimetry, with predetermined corrections under reference conditions to account for non-ideal characteristics (e.g. quenching of scintillator response for high LET particles and energy dependence of Cherenkov radiation). It can be argued that the most important dosimetric aspects of FLASH-RT (discussed in detail in section 2) are dose-rate independence, spatial resolution, and temporal resolution of a detector. Comparing these three characteristics of the detectors, their typical usage, and the underlying physics, it can be predicted that luminescent detectors offer unparalleled spatial-temporal resolution and dose-rate independence ( Figure 2). Values in Figure 2 are based on typical usage and exceptions to these do exist; these exceptions will be noted, where appropriate, in the text. Another purpose of this review is to provide an overview of tools that have been used in high-dose rate conditions. Therefore, instances of different dosimetric tools used in FLASH-RT and other high dose-rate modalities will be consolidated and summarized. Finally, the mediating role of oxygen tension in FLASH-RT will be discussed briefly. The superior ability of luminescent based methods to sense oxygen tension and measure dose and LET (simultaneously, specifically for particle therapy) in real-time will be described for its potential in understanding the radiobiological mechanisms underlying the protective effect of FLASH on normal tissue. Figure 2) Spider plot comparing the three different categories of detector. The axes denote major dosimetric issues associated with FLASH-RT. It can be seen that luminescent detectors can provide ~ns time-resolution, with sub-millimeter resolution and dose-rate independence up to a dose rate of 10 5 Gy/s. It is instructive to first compare temporal structure of FLASH and conventional radiotherapy beams in order to understand problems unique to high dose-rate conditions. Typically, radiation sources used in radiotherapy are not continuous but rather pulsatile in nature. The repetition rate and duty cycle depend on the type of particle acceleration used. For instance, most clinical linear accelerators (linac) typically have a pulse duration of 3-5 μs with repetition rate of ~200-400 Hz. For cyclotron based proton beams, the beam can be considered to be quasi-continuous due to short pulse duration and repetition rates on the order of a few nanoseconds. Additionally, modern radiation techniques typically deliver dose in fractionated manner over a period of few days. Therefore, dose-rate can be either the defined over the course of a whole treatment, one fraction or within a single pulse. The different interpretations of dose-rate are illustrated in Figure 3a. Table in Figure 3b presents a side by side comparison of temporal beam characteristics of FLASH and CONV radiation therapy; the dose per fraction is denoted as the mean dose-rate, Ḋm. The dose-rate in a pulse or the instantaneous dose-rate is denoted as Ḋp, which is the ratio of dose delivered in a pulse (Dp) divided by the pulse duration. This is an important distinction because instantaneous dose-rates in conventional radiotherapy can be comparable or even higher when compared to the mean dose-rate (Ḋm) of 40 Gy/s used to trigger the FLASH effect in current preclinical studies. Note that the conventional beam characteristics are based on typical linac based clinical beams and the values for FLASH are based on prototype electron linacs that have been successfully used to elicit the FLASH effect 27 . Multiple studies have now rigorously explored the beam parameters needed to trigger a reproducible FLASH effect and it has been shown that the Ḋp and Dp play a critical role 27,28 . For FLASH-RT, these quantities can be orders of magnitude higher as shown in 3b), leading to issues of saturation, and non-linear response of standard dosimeters at large doses. Figure 4) Spatial distribution of dose-rate within a pencil beam scanned proton beam illustrating the role of scatter contribution from adjacent spots, which leads to an inhomogeneous distribution of dose-rate. A point detector or an imaging array with coarse detector spacing will be unable to measure dose-rate distribution with accuracy. Figure Many preclinical FLASH studies have been performed on small animal models and tumor volumes. As mentioned earlier, performing accurate dosimetry with small beamlets is notoriously difficult. If the size of the sensitive volume of the detector is comparable to the radiation field size, dose averaging effects can lead to erroneous measurement of absorbed dose and artificial broadening of the field penumbra. Multiple preclinical studies have been performed using the experimental Oriatron eRT6-6MeV and Kinetron 4.5 MeV linear accelerator (PMB-Alcen, Peynier, France). Design limitations of these linacs result in only small field openings. Small applicators used to confine the radiation field to the organs of interest can also introduce additional complex problems 30 . Additionally, clinical linacs modified to deliver flash dose rates 31 tends to utilize significantly shorter source to target distance to further increase the dose rate. Close proximity to source renders only small field sizes viable. Varian's Clinac 21EX was modified by Schüler et al 31 and it was found that mean dose-rate of 900 Gy/s was achievable near the transmission ionization chamber in the gantry head, where 90% of the field diameter was measured to be ~12 mm. For the beam energy used in the study (20 MeV), this falls in the realm of small field dosimetry, due to the lateral electronic disequilibrium. While small field dosimetry issues are not specific to FLASH, they nonetheless contribute to dosimetric complexity. A related issue specific to FLASH is that of spatial averaging of dose-rate. Accurate determination of fully spatiallyresolved dose-rate is critical for FLASH accuracy. This is even more important currently, since one of the main goals is to optimize the beam parameters so that a FLASH effect can be elicited in the clinical setting. A simulation study conducted by Marlen et al 29 explored the idea of spatial dose-rate distribution within a broad beam and its effect on FLASH response. Their study was based on pencil beam scanning with protons, but the results can be generalized to other modalities. Large field sizes are obtainable with protons when spot scanning techniques are used. However as reported in the study, dose rate in one spot will be affected by the low dose-rate scatter contributed by the adjacent spots (shown in Figure 4). They reported that only 40 % of the dose is delivered at FLASH rates for a spot peak dose rate at the center of 100 Gy/s. When the peak dose rate is increased to 360Gy/s, the contribution to FLASH dose rates increases to 75%. Rahman et al. also measured, using a scintillating sheet, as much as 41% standard deviation in maximum dose rate distribution for spot spacing as large as 10 mm (shown in figure 10 a) 32 . This will also hold true for broad electron beams, synchrotron produced x-ray beams and passively scattered proton beams, where scatter can contribute to an inhomogeneous distribution of dose-rates across the irradiated volume. This phenomena was also studied by Van de Water et al 33 where they investigated the possibility of achieving FLASH dose-rates with conventional proton pencil beam scanning and intensity modulated proton therapy techniques. They proposed a new metric to quantify spatially varying dose-rate in three dimensions, dose averaged dose-rate (DADR) and doseweighted average of instantaneous dose-rate of all spots. It remains to be seen if such a criterion has any potential clinical value, but it points towards the need of a high resolution imaging detector. Additionally, as FLASH-RT continues to move towards clinical translation, large field sizes will need to be delivered at high dose-rates. Spatial distribution of dose and dose-rate can only be reliably measured using imaging techniques or detector arrays. Thus, a high resolution detector array with small inter-detector spacing is needed, so that a 2D spatial distribution of dose-rate can be measured accurately, avoiding any volume averaging effects. A 3D distribution of dose-rate within patient geometry would be the ideal case.
Time Resolution
Another aspect of dosimetry under FLASH conditions is that of real-time dose monitoring vs passive dose monitoring. While dose-rate independence is an important requirement for FLASH-RT, the ability to verify machine output, dose delivered per pulse, and dose-rate in real-time is of considerable interest. For high dose-rates and dose per pulse conditions, real-time dose monitoring is non-trivial. Not only dose-rate independent dosimeters are required, but dosimeters with a high enough temporal resolution and high bandwidth read-out methods are needed. For instance, some excellent dose-rate independent detectors like radiochromic film, alanine and TLD's etc. can only provide passive dose monitoring. Additionally, even though certain dosimeters can provide online dose-monitoring, they encounter other issues at ultra-high dose-rates which limits their capability. For example, clinical linear accelerators employ a monitor chamber in the gantry head which records machine output in real-time and serves as a beam-off signal when the recorded dose matches the required dose. However, most commercial ionization chambers start to show saturation or decreased ion-collection efficiency at high dose per pulse conditions rendering online monitoring of dose problematic at FLASH dose-rates. It has been shown by Jorge et al 34 that despite correcting for ion-recombination effects at high doses per pulse, ICs can show deviations up to 15%.
Dynamic Range
Dynamic range of a dosimeter is another issue pertinent to dosimetry in FLASH-RT. It has been shown that the oxygen depletion effect is highly dependent on the total dose delivered [35][36][37] , with oxygen being depleted rapidly at high doses. If the oxygen depletion effect is indeed the underlying mechanism of the normal-tissue protection, it would be expected that preclinical studies will move towards hypo-or single-fractionated regimen with high doses per fraction to increase the therapeutic ratio. Currently, FLASH studies have been performed where total doses up to 40 Gy have been delivered in a short duration. For in-vivo dose verification, this implies that a dosimeter is required which does not suffer from saturation effects and maintains a linear or otherwise predictable behavior in response to dose. Radiation damage can also potentially be an issue for sensitive detectors such as MOSFET, however in most cases, the damage threshold for dosimeters is orders of magnitude higher compared to the dose threshold where dosimeters start showing saturation and non-linearity.
Other Ideal Characteristics
While, the previous few sections were primarily focused on dosimetric aspects unique to FLASH-RT, there are still other characteristics that make certain radiation dosimeters ideal for dosimetry. These characteristics go hand in hand with the ones discussed previously because they will eventually lead to erroneous measurement of dose and subsequently dose-rate. Mainly, an ideal dosimeter should be tissue-equivalent and not perturb the radiation field so that it can serve as an accurate surrogate for dose measurement inside the patient. A parameter closely tied to tissue equivalence is the energy dependence of the dosimeter. Ideally, a dosimeter that is energy-independent is required, because one would want the detector to respond uniformly to irradiation irrespective of the radiation quality. For example, certain non-tissue equivalent detector can over respond when there is significant low energy scatter present in the radiation field leading to erroneous measurements. These issues are amplified at small fields and can cause significant errors. In general, correction factors for small fields can be divided in two categories: 1) volume averaging, 2) other non-ideal characteristics. While volume averaging is a purely geometrical concept and can be minimized by using dosimeters which are small in volume, non-ideal characteristics are harder to correct for. The volume averaging correction factor on the central axis is within 1-2% if the size of the detector is 1/4 th of the diameter of the incoming beam 38 . For example, the unshielded Stereotactic Field Diode (SFD) diode (0.6 mm diameter) by IBA has a volume averaging correction factor of 1.003 for a 5 mm circular beam 39 . However, the overall correction factor for the diode has been typically found to be < 1 in literature, which implies that the diode tends to over respond to radiation 40 . This over-response is explained by the presence of high-Z silicon in the diode which offers an increased photoelectric absorption coefficient at lower energies. In FLASH-RT, these non-ideal characteristics can be critical because of the use of small fields as discussed above. FLASH beams. Advanced Markus Chamber IC response (charge collection efficiency) was calculated for three different bias voltages and the only charge-based detector to be tested in FLASH dose rates 41 . Model for diamond detector response (charge collection efficiency) and diode detector response (sensitivity) were only tested at conventional dose rates 42,43 . Most charged based dosimeters are based on the principle of creation of ion pairs or charges which can be collected and correlated to dose. In Ionization Chambers (IC), the collection of ion-pairs is facilitated by an application of an external bias across the electrodes. The voltage applied is typically high enough, such that all liberated charges are collected. However, at high doses per pulse, ion-pairs can recombine before they are swept across the E field and collected by the electrodes. This can lead to a decrease in sensitivity with increasing dose per pulse. While ionrecombination effects can be accounted for at moderate doses per pulse using Boag's model 44 , at high dose per pulse the model breaks down. However, it should be noted that certain commercial IC chambers, such as the Advanced Markus IC by PTW can efficiently collect ions at dose-rates as high as ~ 300 Gy/s and a dose per pulse of ~5 mGy. This is sufficient for conventional flattening filter free beams and pencil beam-scanned proton beams. Nevertheless, at high instantaneous dose-rates and doses per pulse typically used in FLASH, a correction factor would be needed to correct for ion-recombination. A study performed by Petersson et al 41 , looked into ion-recombination effects of the Advanced Markus IC with their FLASH setup. The authors came up with a model to account for ion-recombination and polarity effect for the aforementioned chamber ( Figure 5). It can be seen that the ion-collection efficiency has a strong dependence on dose per pulse above 1 Gy. With the correction factor applied, they concluded that the chamber could be used for FLASH dosimetry. In particular, they were able to measure a dose-per-pulse of 10 Gy. Interestingly, FLASH studies conducted with protons have employed ICs for dosimetric verification but without application of any correction factor. For example, Patriarca et al 18 employed a IC in their proton FLASH setup which used a cyclotronbased 230 MeV (IBA) proton beam as the radiation source. With cyclotron-based sources, high pulse repetition rates of around ~100 MHz can be achieved with pulse duration of around ~ 2ns. In this case, one can assume a 100% duty cycle, which drastically reduces the instantaneous dose rate or dose delivered in a pulse. Using Boag's method 44 , the authors found that the total recombination of ions was around 1% at a maximum mean dose-rate of 80 Gy/s . Another study conducted by Beyreuther et al 13 used a 224 MeV proton beam at a dose-rate of 100 Gy/s and concluded that the pulse duration (~100 ms) was an order of magnitude higher than the ion-collection time for the Advanced Markus chamber (~10 us).
Charge Based Dosimeters
Solid-state detectors such as diamonds and diodes have been used extensively for dosimetry in modern radiotherapy techniques due to their high sensitivity and small size. The operation of silicon diodes and diamond detectors is similar to IC chambers in that radiation produces electron-hole pairs which can be collected. Essentially, they can be thought of as solid-state ICs. Whereas, direct recombination is the dominant process in IC, charge recombination in solid state detectors is a more complex process dominated by indirect recombination, because of the presence of RG (recombination-generation) centers and impurities that can act as trap centers. In general, dose-rate dependence of solid-state detectors can be modeled as σ~Dr ∆ , where σ is the electrical conductivity, Dr represents the dose-rate and ∆ is a fitting parameter which describes the dose-rate dependency of the detector 45 . Multiple studies have investigated the dose-rate dependence of diodes and diamonds. Ade et al 46 found the fitting parameter ∆ for some diamond detectors to decrease by as much as 9% when the dose-rate was increased from 2.25 Gy/min to 3.07 Gy/min. Interestingly, diamond detectors have been have been reported to show increased or decreased sensitivity with increasing dose per pulse depending on their construction i.e. pure crystals, Chemical Vapor Deposition (CVD), high-pressure high-temperature HPHT 46 . While diamond detectors have not been extensively used for ultra-high doserate experiments, a microDiamond detector type 60019 (PTW) was used for one proof-of-concept proton-based FLASH study by Patriarca et al 18 . The sensitivity of the aforementioned diamond detector is shown in Figure 5. Recombination in diodes is also a complex physical process; whereas sensitivity of IC decreases with increasing dose per pulse, diodes are known to over respond at high dose per pulse [47][48][49] . The physical basis of this dependence is due to insufficient number of RG centers available for the excess minority carriers to recombine at high dose-rates and doses per pulse. Therefore, a larger fraction of charges is left behind and can be collected by the electrode, leading to increased sensitivity of the diode. To the best of our knowledge, no FLASH study has used diodes for dosimetric verification. Kinetic modelling of the recombination process in solid-state detectors has been carried out by multiple groups and we point the reader towards those references for a deeper understanding 42,[48][49][50] .
The time resolution of charged based dosimeters is mainly limited by the ion-drift velocity, mobilities of the different charge carriers present and other fundamental parameters such as the transit time (time taken for a charge to be completely collected) and minority carrier lifetime etc. For ICs with a typical external bias of 300 V, the temporal resolution usually ranges from a few ms to hundreds of ms 51 . For indirect band gap semiconductors, such as silicon, the time-resolution can be on the order of a few ms 52 . Pure diamond detectors, due to their superior electron and hole mobilities, can offer time resolution on the order of a few ns 53 , whereas most synthetic diamonds ( i.e. CVD based) have a minority carrier lifetime of a few us 54 . Therefore, real-time dose monitoring is indeed possible with diodes, diamonds, and IC. However, it should be reiterated that the main limitation for such devices is the change in sensitivity, non-linearity, and saturation due to charge recombination at high instantaneous dose-rates relevant in FLASH-RT. For pre-clinical FLASH studies, multiple authors 19,55 have circumvented this issue by using a Faraday cup; a conductive metal cup which accumulates charge when put in the beam's path. The main advantage of using a Faraday cup is that saturation, ion-collection, and recombination effects can be avoided. Additionally, Faraday Cups have been used with nano-second time resolution in studies conducted with high energy and highly pulsed charged particle beams [56][57][58] .
Typically, measurements performed with charge based detector are point (1D) or planar measurements (2D). The advantage of solid-state charge based detectors over gas-filled ICs is that they offer superior spatial resolution because of their increased sensitivity to radiation. For instance, the PTW microDiamond detector, if used in edge-on configuration (i.e. smallest dimension normal to the incident beam), exhibits a resolution of 1 µm. Another high resolution charge-based dosimeter of interest is the silicon single-strip detector (SSD) which has been touted as a potential dosimetric tool for Microbeam Radiation Therapy. The SSD was used at the European Synchrotron Radiation Facility and has been shown to demonstrate very high spatial resolution (~10 µm resolution) and high dynamic range. 59 Despite their high spatial resolution, one caveat of typical charge-based dosimeters is their energy dependence and tissue non-equivalence. This limits their usefulness in small field dosimetry which can lead to erroneous measurement of dose-rate in pre-clinical animal FLASH studies employing small beams.
Chemical Dosimeters
Certain materials that undergo structural changes, produce radicals, or change color when irradiated can be classified as chemical dosimeters. For example, when a solution of ferrous sulfate (Fricke) is irradiated, ferrous ions Fe 2+ are oxidized to ferric ions, Fe 3+ . The number of ferric ions produced is proportional to dose delivered and can be quantified by measuring the optical density of the solution. Fricke dosimeter was used by Hendry et al 55 in their study on effects of high dose-rate on oxygen concentration. However, diffusion of ferrous ions over time makes this technique sensitive to low dose-rates 60 . Similar in nature to the Fricke dosimeter, methyl viologen is another tool that was used by Favaudon et al 30 in their FLASH setup for online monitoring of dose. Dosimetry was performed by optical detection of the MV .+ radical at 603 nm. The authors were able to monitor dose synchronously with the electron-pulses, but a decay in the MV .+ radical with time (on the scale of a few minutes) was observed that ultimately led to loss in absorption; an issue which can play a major role at low dose per pulse/dose-rate conditions. Therefore, while, certain chemical dosimeters can provide absolute dosimetry and real-time detection of dose with ~ns resolution, the radiation induced species in these materials are generally not stable and can either diffuse spatially (Fe 3+ ) or decay with time(MV .+ ) which makes such setups unsuitable for real-time dose monitoring in FLASH-RT.
Fortunately, chemical dosimeters that produce stable radiation-induced species are available. One such dosimeter is Alanine and has been extensively used in preclinical FLASH studies 12,34 . Alanine is an amino acid, which forms a stable free radical upon irradiation. The concentration of the free radical is proportional to the absorbed dose, which can be probed using an electron paramagnetic resonance (EPR) spectrometer. Alanine dosimeters exhibit a linear response over a large dynamic range (2Gy-150kGy) and are therefore routinely used in industrial facilities. Although, at doses below 2 Gy, alanine can show considerable relative uncertainty of ~1.5% 61 . However, this might not be an issue for FLASH dosimetry because generally high doses are needed to elicit the FLASH effect. The real value of alanine dosimeters for FLASH dosimetry is in their excellent dose-rate independence (up to ~3 x 10 10 Gy/s 62 ). Recently, alanine was used at the European Synchrotron Radiation Facility (ESRF), which is capable of producing really high dose-rates (~ 10kGy/s) 63 ; the response of alanine was found to agree well with the PTW PinPoint IC, when the latter was corrected for ion-recombination effects.
Perhaps the major advantage of certain chemical based detectors in their inherent ability to provide planar or 3D measurements. One really popular and perhaps the major workhorse of the radiation dosimetry world, is the polydiacetylene based self-developing radiochromic film. Upon irradiation, the film undergoes color change by polymerization. The change in color is typically quantified in terms of the optical density as measured by a densitometer or in some cases via microscopy. Radiochromic films can be considered to be the ideal dosimeter, in that they are energy independent, tissue equivalent, and demonstrate really high spatial resolution (sub-micron) limited only by the digitizing method. Additionally, dose-rate independence of radiochromic films is well established in literature and they have been found to be independent of any dose-rate effects up to a dose-rate of 15x 10 9 Gy/s 64 . In fact, EBT3 Gafchromic Film (Ashland, Wilmington, DE) was used by Patriarca et al 18 to evaluate dose-rate independence of other dosimeters used in their FLASH setup. A detailed study was conducted by Jaccard et al 65(p3) on the suitability of radiochromic film for high dose-rate FLASH dosimetry with the Oriatron eRT6 electron linear accelerator (PMB-Alcen, Peynier, France) and they concluded that film was independent up to a Ḋp of 8 x 10 6 Gy/s. Additionally, radiochromic film has been used to measure dose homogeneity and verify field sizes in various different FLASH studies [9][10][11] and was also used to verify dose (along with alanine) for the human patient treated with FLASH-RT. One of the major drawbacks of radiochromic film is that measurements are performed offline, typically 24 hours post exposure to account for the fact that polymerization does not stop immediately after irradiation. In theory, realtime read-out of film can be performed with ms time resolution, since it has been reported in literature that polymerization is largely 'complete' within 2 ms of a 50 ns pulse 66 ; however, as started earlier, polymerization still continues post-exposure which can act as a confounding variable for near real-time dosimetry. Nonetheless, attempts have been made at real-time readout of radiochromic film 67,68 .
Certain chemical dosimeters can provide true 3D spatial dose distribution at high resolution and in patient geometry. This has been facilitated by the recent advent of gelatin-based polymers, which avoid the problem of ion diffusion encountered in Fricke dosimeters. However, diffusion in polymer gels can still occur in the first hour post irradiation and at high dose-gradients and high doses 69 . Essentially, polymer gels act as 3D radiochromic film, except that the change in optical density is probed in 3D as opposed to a single 2D plane. Multiple methods have been used to probe these radiation sensitive gels, such as MRI, x-ray CT, ultrasound and optical projection tomography (OPT) 69 . However, OPT has stood out as the more popular read-out method, because of the high spatial resolution it offers. Unfortunately, like all chemical dosimeters, some change in signal is expected with time. Change in signal postexposure coupled with the fact that complicated read-out machinery is needed to probe the response to radiation, renders this technique unsuitable for real-time dose measurement. More importantly, in FLASH-RT context, polymer based gels have been known to show dose-rate dependence, which might be attributable to competing radiation induced chemical reactions in the gel and the dose-rate dependence of water radiolysis products 70 . The dose-rate dependence seems to be a function of concentration of oxygen scavengers in the matrix, with less dose-rate dependence seen at high concentrations of O2 scavengers 71 . Dose-rate dependence is also a function of the type of monomer unit of the gel 70 . For a more detailed analysis on the origin of dose-rate dependence in polymer gels, we refer the reader to a comprehensive review by De Deene et al 70 .
Luminescent Dosimeters
In this text, luminescence refers to any technique which utilizes generation of optical photons in response to radiation as a surrogate for dose. This generally includes thermoluminescent detectors (TLD), optically stimulated luminescence detectors (OSLD), organic/inorganic scintillators and Cherenkov radiation. Physical properties that enable luminescent detectors to be of value in FLASH-RT will be discussed. When impurities are added to certain crystals, charge trapping occurs due to the added energy levels in the conduction-valence band gap. These additional energy levels act as traps for electrons and holes. Application of an external stimulus allows the trapped electrons and holes to escape allowing recombination at luminescent centers. It is this recombination process which leads to luminescence ( Figure 6). Depending on the external stimulus, the dosimeters can be classified as TLD (thermo-luminescent dosimeter) or OSLD (optically stimulated luminescent dosimeter). The luminescence is considered to be delayed because the electrons and holes can remain trapped over long periods of time (sometimes up to thousands of years) and can only be read-out after stimulation. The practical implication of this is that real-time dosimetry is not feasible using TLD or OSLDs. In some cases however, certain materials such as europium-doped alkali halides, may exhibit short trap emptying (~25 ms) and luminescent decay times (~1 μs) which may render real-time dose monitoring possible 72 .
3.3.1) TLD and OSLD
Despite the lack of real-time readout, TLDs and OSLDs are of great importance in high dose-rate dosimetry because of their excellent dose-rate independence. In fact, one of the earliest studies of dose rate effect on skin toxicity in mice (1980) by Inada et al 73 used a lithium borate TLD to verify dose. They confirmed the lithium borate TLD to be independent up to a dose rate of 1.5 x 10 9 Gy/s. Dose-rate dependency of TLDs has been investigated by multiple authors over the last few decades. Karzmarck et al 74 found LiF TLD to be dose-rate independent up to 2 x 10 6 Gy/s. Tochilin and Goldstien 75 found the same TLD to be dose-rate independent up to 1.7 x 10 8 Gy/s. More recently, Karsch et al 64 compared dose-rate independence of various detectors including TLDs and OSLDs. They found TLD and OSLD to be dose-rate independent up to 4 x 10 9 Gy/s within 2%. In the context of FLASH, Jorge et al 34 compared LiF-100 TLD (Thermo Fisher, USA) against two dose-rate independent dosimeters, alanine and radiochromic film. The Oriatron eRT6 linac was used for this study with dose-rates ranging from 0.078 Gy/s up to 1500 Gy/s and the results are presented in Figure 7. Alanine, film, and the LiF-100 TLD were found to agree within 3% at all dose-rates. In addition to dose-rate independence, TLD can be manufactured to be small and in powdered form. This is beneficial for small field dosimetry where high resolution is required. Additionally, the small form factor coupled with the ability to read out dose post irradiation renders TLDs as a viable tool for in-vivo dose verification. Indeed, for one FLASH study on mice whole brain irradiation 15 , dosimetric verification was performed in-vivo using 3 x 3 x 1 mm 3 TLD chips embedded inside the brain of a mouse cadaver at different points. A total dose of 10 Gy was either delivered in a single 1.8 us pulse or at conventional dose-rates (0.1Gy/s). The placement of the TLDs and the dose verification are shown in Figure 8. The dose measured by the TLDs agreed well with the prescribed dose of 10 Gy. Additionally, no dose rate effect was seen between measurements performed at 0.1 Gy/s compared to dose delivery in a single 1.8 us pulse (5 x 10 7 G/s). This was one of the first non-superficial, in-vivo measurements performed at FLASH dose-rates. The measured dose at different points. The black markers represent 10 Gy delivered in a single pulse of 1.8 us (FLASH-RT). The light gray markers represent the dose delivered at a dose-rate of 0.1Gy/s (i.e. conventional dose-rate). Error bars represent the relative uncertainty in the absorbed dose measurements (+8.2% in each case) 15 . Reprinted from "Irradiation in a flash: Unique sparing of memory in mice after whole brain irradiation with dose rates above 100Gy/s", Vol 124 / Issue 3, Figure 1, Copyright (2017), with permission from Elsevier.
One caveat of TLDs and OSLDs is that measurements are usually limited to a point. To overcome this, a few investigators have studied the possibility of using planar arrays of TLDs and OSLDs to measure spatial distribution of dose. One such TLD array was designed and tested at the European Synchrotron Radiation Facility (ESRF) by Ptaszkiewicz et al 76 . The study was primarily aimed at Microbeam Radiation Therapy (MRT); a novel external beam radiotherapy technique in which quasi-parallel beams with widths around 25-50 μm separated by 100-400 μm are delivered at ultra-high dose rates. The results are relevant to FLASH because dose-rates in MRT can reach up to a few kGy/s 77 . The TLD array consisted of LiF:Mg,Cu,P 10 x 10 x 0.3 mm 3 foils with different grain sizes (up to 150 μm). Dose read-out was performed using a 12-bit CCD camera with sub-millimeter resolution. Therefore, dose-rate independence and sub-millimeter resolution make this setup an attractive choice for FLASH-RT. Reusable 2D OSLD arrays have also been constructed with submillimeter resolution and large dynamic dose range 78,79 .
Another passive luminescent detector of note is the Fluorescent Nuclear Track Detector (FNTD) 80 . FNTDs employ a single crystal of Aluminum Oxide doped with Magnesium and Carbon (Al2O3:C, Mg) with additional oxygen vacancy defects. The underlying physics is similar to OSLDs but with minor differences. Specifically, exposure to radiation produces new recombination or color centers which can then be probed non-destructively using microscopy techniques. In contrast, new recombination centers are not formed in OSLDs when exposed to radiation. This technique has been used for dosimetry in MRT 81 where spatial resolution of 1 μm was achieved. Additionally, FNTDs have also been tested to be dose-rate independent up to 10 8 Gy/s and are capable of measuring dose over a large dynamic range ( 3 mGy to 100 Gy) 80 . Therefore, FNTDs are an attractive choice for dosimetry in FLASH-RT. Scintillation is the phenomena by which an interaction of certain material (scintillator) with high energy photon or charged particle results in emission of optical photons. Scintillators can be broadly divided into two different categories 1) organic and 2) inorganic 83,84 , with underlying physical mechanisms depicted in Figure 9. The process of scintillation in both material types follows a general mechanism composed of conversion, transport (migration), and luminescence. Organic scintillators are typically aromatic hydrocarbon compounds, that produce excited states by ionizing radiation, and subsequently luminesce due to allowed π electron transitions between excited singlet state S10 to various different vibrational sub-levels of the ground singlet state. When electronic transitions occur between singlet states, the emission and decay of luminescence is on the order of a few ns. However, most organic solutions do not exhibit high scintillation efficiency and are therefore used in conjunction with a solute. In this case, energy transfer occurs mainly via solvent-solvent interactions, and ultimately to the solute by dipole-dipole energy transfer 85 . In the case of polar solvent-based scintillators (e.g. quinine solution in water), ions and radicals are formed rather than excited states, and therefore ion recombination must occur prior solute excitation 86 .
3.3.2) Scintillators
Inorganic scintillators typically consist of single or poly-crystalline materials, often doped with impurities that can act as luminescent centers. In the initial conversion phase, a large number of excited electrons and holes is created upon interaction of high energy photon or charged particle with the scintillator matrix, followed by thermalization and transport of created excited states to a luminescent center. Unlike in OSLD, an external stimulus to facilitate the release and recombination of electrons and holes is not needed due to the presence of an allowed transition at luminescent centers, and minimal number of charge traps. Similar to organic scintillators, rise and decay times for inorganic scintillators can also be on the order of a few ns.
Owing to their excellent tissue-equivalence and the ability to be miniaturized, multiple investigators have recommended the use of organic scintillators for small field dosimetry 21,39,87 . In particular, organic scintillators, such as the commercially available Exradin W1 (Standard Imaging) can be used as reference detectors for small fields against which correction factors for other detectors can be derived 39 . In contrast to organic type, inorganic scintillators are usually made with high-Z materials and are therefore not tissue-equivalent; a scenario which need to be accounted for in radiation dosimetry. Nonetheless, they have a role to play in FLASH-RT. Fast rise and decay times, radiation hardness and high detection efficiency due to increased photoelectric cross section for x-rays, makes inorganic scintillators an ideal tool for applications where superior time-resolution is required.
Typically, measurements performed with scintillators can be either point, planar 2D or 3D measurements. For point measurements, the setup usually consists of a small scintillator coupled to an optical fiber and a photodetector. Recently, Archer et al 88 demonstrated the use of miniature BG400 plastic scintillator (10 μm thick) coupled to a fiber optic and a SiPM for dosimetry at the Imaging and Medical Beam-Line at the Australian Synchrotron with an average dose-rate of 4435 Gy/s, resolving beams of 50 μm width. In case of planar or 3D measurements, the setup typically consists of a scintillating volume imaged remotely at high spatial resolution (sub millimeter) with a CCD or a CMOS camera. The prompt emission of light, coupled with high frame-rate imaging capabilities of modern imaging sensors, make this technique suitable for online monitoring of Figure 10. Scintillating sheet imaged dose rates from a proton pencil beam scanning system with modulated beam parameters in the treatment plans. a. Maximum dose rate distribution with 10 mm spot-spacing b. Cumulative dose rate histogram for varying minimum spot weight of a treated layer 32 . machine output and dose delivery under FLASH irradiation conditions. Optical imaging of scintillation using cameras during external beam radiotherapy has already been widely implemented 84,89,90 . In a recent study 91 , a time-gated intensified CMOS camera was used to image complex stereotactic radiosurgery (SRS) plans at high dose-rates in a radioluminescent phantom. The authors were able to resolve complex and highly modulated dose distributions spatially and temporally. Due to its high spatio-temporal resolution, optical imaging has also been used for quality assurance purposes in pencil beam scanning (PBS) proton therapy; a technique which also utilizes high dose rates (up to 200 Gy/s near the Bragg Peak). For example, Vigdor et al 92 used a xenon gas scintillator coupled to large PMTs for monitoring 2D beam characteristics in real-time for pulsed and pencil beam scanning proton radiotherapy treatments. The authors demonstrated a spatial resolution of a few hundred microns. Additionally, they noted that the gas scintillator was able to measure up to a dose rate of 350 Gy/s, whereas an ionization chamber started exhibiting ionrecombination effects at much smaller dose-rates. In another study, Darne et al 93,94 used three CMOS cameras to image proton pencil beam scanning inside a phantom filled with a liquid scintillator. The authors were able to image at 91 frames per second with sub millimeter resolution. More recently, Rahman et al 32 were able to resolve spatiotemporal (10 ms and 1 mm resolution) dose-rate dynamics up to 26 Gy/s for proton PBS using a scintillating sheet and a CMOS camera. As shown in Figure 10, the imaging technique visualized the proton beam parameters that modulated dose rate distributions and introduced cumulative dose rate histograms that can potentially be used for optimizing dose rate distributions for patient planning in FLASH-RT. One FLASH study by Favaudon et al 30 , used a 2D scintillating array coupled to a CCD camera ( Lynx ® , IBA) for monitoring beam profiles. The scintillating screen was a 0.5 mm thick gadolinium based plastic material with an active area of 300 x 300 mm 2 and a spatial resolution of 0.5 mm. The detector was primarily used to assess field size, field homogeneity and dose linearity of the system and exhibited excellent linearity with increasing dose. The field homogeneity and FWHM values measured by the scintillating detector were within 1% at high and low dose rates. Peak dose-rate used in the study was around 2.4-3.5 x 10 6 Gy/s. The Lynx ® detector was also used by Beyreuther et al 13 for measuring field homogeneity in their proton FLASH setup. Multiple investigators have now used cameras and scintillation to reconstruct dose in 3D [95][96][97][98] . In the context of FLASH, this implies that a dose-rate independent detector that can measure dose in 3D with high spatial and temporal resolution ought to be able to measure dose-rate distribution in 3D in real-time. This information can potentially be used to predict the spatial distribution of the protective effect of FLASH in patient geometry. Figure 11. a) Huygens representation of Cherenkov radiation mechanism in dielectric medium. Light is generated in a cone at caustic angle θc around the trajectory of charged particle. b) Energy dependence of Cherenkov radiation for different materials (adapted from Glaser et al 99 ).
3.3.3) Cherenkov Radiation
Cherenkov radiation is the emission of optical photons in a dielectric medium when a charged particle travels at a phase velocity that exceeds the phase velocity of light in the medium. Electromagnetic fields associated with a charged particle can polarize the medium. If the particle moves slowly (relative to the speed of light in the medium), the relaxing dipoles experience a net destructive interference, and no light is emitted. However, if the particle's phase velocity exceeds that of light, asymmetric polarization can occur along the particle's trajectory, and the relaxing dipoles radiate energy with a net constructive interference, observed as visible Cherenkov radiation (Figure 11a). In contrast to scintillation, the Cherenkov light is not emitted isotropically, but rather in a cone with its axis aligned with particle trajectory. It has been shown by multiple groups that above the threshold for Cherenkov (261 keV in water) generation, the intensity of light emitted is proportional to dose 99-101 , albeit with prominent energy dependence for particles below ~1 MeV. Importantly, Cherenkov light is created instantaneously 102 (~10 -12 s) upon interaction of the charged particle with the dielectric medium; this is faster than what most scintillators are capable of because of the various non-radiative mechanisms specific to the process of scintillation. Multiple investigators have now made use of Cherenkov radiation as time-of-flight PET detectors 102-104 due to its fast time response. Cherenkov radiation has also found use in pulse radiolysis studies with pico 105 and femtosecond 106 time resolution. Additionally, Cherenkov radiation has been imaged in real-time 107(p),108 during multiple clinical radiotherapy treatments. The prompt nature of light emission, along with dose linearity makes Cherenkov emission an ideal tool for real-time dose monitoring. The general experimental setup for Cherenkov based dosimetric imaging is similar to the ones discussed earlier for scintillation dosimetry; an undoped optical fiber (i.e. production of Cherenkov and no scintillation) coupled to a photodetector or a volume capable of producing Cherenkov radiation imaged remotely with a camera. In the latter case, if a water phantom is subjected to radiation, Cherenkov emission can then be considered to be a waterequivalent dosimeter. However, due to inherent threshold below which no Cherenkov photons are generated, Cherenkov based detectors are expected to be energy dependent; a scenario which is not ideal for radiation dosimetry (Figure 11b).
In the context of FLASH, a Cherenkov probe was used for online monitoring of dose by Favaudon et al 30 . A number of tests were performed to confirm the efficacy of the Cherenkov detector. In one of the tests, a single 1 μs pulse of 3.9 or 5.0 MeV electrons were delivered to the probe. The area under the signal detected by the PMT (voltage against time) was found to be proportional to the energy of the beam. In another test, single pulses were delivered with increasing pulse widths (0.1 to 2.2 μs), which essentially translates into changing dose. The authors noted that the integral Cherenkov emission increased with beam energy, pulse duration and dose, without any saturation effects. Based on these results, the authors concluded that Cherenkov radiation has potential to be a useful tool for online-dose monitoring under high and low dose-rate conditions. A complete overview of the radiobiological underpinning of the FLASH effect is outside the scope of this study. However, the role of oxygen depletion will be briefly discussed here, since it is considered to be one of the major factors mediating the FLASH effect. Presence of molecular oxygen is known to make cells more susceptible to damage by radiation, as shown in the radiosensitivity curve in Figure 12a. This can be defined in terms of the oxygen enhancement ratio (OER), which is the ratio of dose needed to achieve the same biological effect under hypoxic and normoxic conditions. At ultra-high dose rates, it is hypothesized that transient hypoxia occurs which confers a protective effect on normal tissue. The improved differential response between tumor and normal tissue arises because the microenvironment surrounding solid tumors is already hypoxic 112 and remains largely unaffected by the depletion of oxygen. The time-scales over which oxygen depletion and reoxygenation occurs is important, since the underlying assumption is that oxygen is depleted at a rate faster than it can diffuse back into the normal tissue. Adrian et al 109 in vitro study supports the model and their results indicated there was no difference between cell death in hypoxic (5% oxygen concentration) cells, however cells under normoxic (20% oxygen concentration) oxygen conditions, showed increased survival from FLASH compared to conventional irradiation 109 . This can be attributed to larger gradient in radiosensitivity at normoxic oxygen concentration, thus a more prominent FLASH effect.
Biological Effects and Dosimetry: OER and LET
Luminescence imaging, in addition to dose and dose rate, can measure oxygen concentration and play an important role in testing the hypothesis in vivo. There are many indirect methods of estimating oxygen concentration in tissue including quantifying vascular parameters (intercapillary distance, distance from tumor cells to nearest vessel), perfusion, gene expression, protein levels, metabolism, DNA damage 113 . However, there are only a few direct methods of measuring oxygen concentration or tension directly, including the standard procedure of using a polarographic needle electrode system [114][115][116][117] . Collingridge et al 118 compared the standard polarographic method to an oxygen sensing system based on a time-resolved luminescence optical probe. The method relied on measuring the lifetime of the luminescence molecules from oxygen-quenching in the tip of the optical fiber and relating it to oxygen concentration. The authors confirmed that the time-resolved luminescence probe had the same degree of accuracy as a polarographic electrode system in measuring oxygen concentration. However, probes are invasive, measure at single points, and are scanned across the tissue to provide a histogram of oxygen concentration. Imaging techniques provide methods of quantifying oxygen concentration distributions. Positron emission tomography (PET) with F 18 labelled markers has been used to image hypoxia, but scans may take 2-4 hours, which is much longer than the time scale of FLASH effects 113 . Alternatively, F 19 based oximetry and magnetic resonance imaging can provide oxygen concentration at the multiple pixel level 119 . Electron paramagnetic resonance spectroscopy and imaging with oxygen sensitive particulate or water-soluble probes can also provide direct measurement of pO2 or [O2] with sub-millimeter spatial resolution and temporal resolution <<1s 120,121 . More recently, Cherenkov excited luminescent imaging (CELI) provided in vivo oxygen pressure maps based on lifetime imaging of fluorophore platinum(II)-G4 (PtG4) 122,123 . The Stern-Volmer equation was used to relate the decay of PtG4 to oxygen tension in tumors and normal tissue (pre and post euthanasia) ( Figure 13). The imaging technique achieved submillimeter resolution of 2 across the surface and near sub-surface of tissue and can potentially image 2 post irradiation from a FLASH beam. This could be used to help relate the dose rate distribution to the oxygen depletion distribution in the tissue and effects on the OER.
The ability of certain luminescent based detectors to quantify linear energy transfer (LET) can also play a crucial role for FLASH-RT. The OER is dependent on particle type and LET as shown in Figure 12b 110 . Heavy charged particles such as protons and alpha particles can have higher LET than photons or electrons and can reduce the effects of the OER. Thus, the oxygen depletion hypothesis brings into question whether heavy charged particles will provide the same degree of FLASH effects as electrons and photon FLASH beams. Furthermore, LET distribution of heavy charged particles are not homogeneous and have a drastic increase at the Bragg peak of the beam. So, quantifying spatial distribution of the LET may be important in describing the differences in FLASH effects of heavy particles and photons/electron beams. Currently, Monte Carlo methods or analytical methods are often used to determine LET distribution for treatment plans 124,125 . However, only certain detectors can measure LET of charged particle beams and majority of them are based on luminescent techniques. Fluorescence nuclear track detectors (FNTD) have been used to measure LET of individual proton tracks. 80,126 . However, FNTDs have a limited range of LET it can detect (5MeV/mm-1000MeV/mm), which does not include the range of LET distribution of proton beams. Alternatively, OSLD/TLD response dependency on LET can be utilized to determine both LET and dose distribution 127,128 . Nonetheless, these methods are passive and do not quantify dose or LET distribution in real time. Alsanea et al 129 showed that variable LET scintillation quenching in two different tissue equivalent organic scintillators can be utilized to determine dose and LET distribution in real time 129 . This method relies on Birk's law of scintillation quenching, requires a large difference in the quenching parameter, and requires the scintillators to be made of the same material (i.e. electron density). To note, silicon detectors have also been used to determine the mean LET of the proton and show its dependency on clinical proton beam energies (1-194MeV) 130 . A comprehensive list of dosimeters used in FLASH studies and other high dose-rate modalities is given in Table 1. The different columns represent some of the major issues identified in section 2. The values are based on typical values and usage encountered in literature. Exception to these values do exist; for example, radiochromic film is categorized as a passive detector, but attempts are real-time dosimetry with film has been made in the past 67,68 . The 'Measurement Type' column has bold entries in it, which indicate the way those dosimeters were employed in FLASH-RT studies. The time resolution values are based on the underlying physics of the dosimeters, as explained previously. This does not take in to account the available bandwidth of the read-out method. Of course, the dead-time of the read-out electronics should be considered while dealing with such dose-rates. While some of these issues are not necessarily unique to FLASH-RT, they nonetheless contribute to the overall dosimetric uncertainty.
Based on the unique temporal beam characteristics of FLASH-RT, dose-rate dependence, spatial resolution (in particular, accurate measurement of spatial distribution of dose-rate in a broad-beam), and time-resolution parameters shall be emphasized. To compare detectors based on these parameters (and the number of dimensions it has measured up to in literature), a spider chart in presented in Figure 14, where a), b) and c) refer to the three different categories of detectors. It can be seen that all luminescence based dosimeters exhibit excellent dose-rate independence. For chemical dosimeters, radiochromic film and alanine dosimeters also show dose-rate independence up to 10 9 Gy/s. Even though methyl viologen is shown to be dose-rate independent up to a high dose-rate in Figure 14, they tend to be dose-rate dependent at really low dose-rates or low doses per pulse conditions because of the diffusion/decay of radiation induced species with time. Therefore, such chemical dosimeters, while promising at high dose-rates, might not be suitable if they are to quantify an in-homogenous distribution of dose-rate. Charged based dosimeters tend to have a complex dependence on dose rate and are highly dependent on the temporal characteristics of the beam.
For measurement of dose in real-time, it can be seen that, luminescent detectors again tend to be superior when compared to chemical and charge based detectors. Most scintillator based detectors provide ~ns resolution, whereas Cherenkov radiation in this regard provides the best theoretical time-resolution (~ps); a fact which makes it an ideal candidate for online-monitoring of machine output without suffering from issues such as saturation or doserate dependence. Based on results presented by Favaudon et al 30 , it can be argued that Cherenkov detectors can play a role similar to that of monitor chambers in conventional radiotherapy. Other luminescent detectors, such as TLD/OSLD and FNTD are suitable for passive measurements. Nonetheless, they still have a role to play in FLASH-RT, because of their dose-rate independence. Most charge-based dosimeters also offer decent temporal resolution; however, they are limited by their dependence on dose per pulse/dose-rate. Chemical based dosimeters tend to be feasible only for offline measurements, due to cumbersome read-out methods and the general instability (temporal and spatial) of radiation induced species.
Due to an increase in use of small radiation fields, detectors have been miniaturized to the extent, such that a resolution of ~1 mm is achievable with most modern detectors. It is important to distinguish between spatial resolution of point detectors from that of imaging detectors. For point detector, the spatial resolution is defined in terms of the spatial extent of the sensitive volume. For imaging detectors, the inter-detector spacing is perhaps a more suitable measure of spatial resolution. While, point solid state detectors, such as diamonds and diodes can indeed be constructed to be small, imaging arrays based on these detectors typically tend to exhibit an inter-detector spacing of 3-5 mm. Dose-rate dependence coupled with sparse detector spacing, makes these imaging arrays unsuitable for FLASH purposes. Additionally, solid-state devices tend to be non-tissue equivalent and energy dependent which can further complicate dosimetry due to small field issues. Comparing chemical and luminescent detectors, it can be seen that radiochromic film provide the best possible spatial resolution. High spatial resolution, tissue equivalence and dose-rate independence make radiochromic films an ideal tool for measuring spatial distribution of dose-rate in FLASH-RT. However, luminescent based detectors based on optical imaging techniques can provide the aforementioned qualities of radiochromic film, with the added advantage of high temporal resolution which makes real-time dose monitoring possible.
In addition to traditional dosimetry, the bio-chemical dose response of FLASH-RT was also discussed. In particular, it was shown that luminescent techniques can sense oxygen tension in real-time and can also measure dose and LET simultaneously for particle therapy. These parameters are crucial to understanding the underlying radiobiological mechanisms of the protective effect of FLASH. Questions such as how the FLASH effect varies with LET, oxygen concentration etc. can be answered using these techniques. In conclusion luminescence was presented as a tool which can play a diverse role in the performing dosimetry and understanding the FLASH effect caused by ultrahigh dose-rates. | 14,117.6 | 2020-06-06T00:00:00.000 | [
"Physics",
"Medicine"
] |
Control of Q-switched laser radiation characteristics via changing the gain distribution
The advantages of simultaneous side and end diode pumping of Nd:YAG laser to enhance the spatio-temporal characteristics in the Q-switched mode of operation has been demonstrated. It has been shown that using a hybrid pump geometry in a short linear resonator provides a superior combination of output beam optical quality, energy, and pulse duration in contrast to the solitary use of an end-pump or a side-pump scheme at similar levels of average output power. We have demonstrated a compact single active rod Nd:YAG laser design in Q-switching mode with a pulse duration of 18 ns, pulse energy up to 3mJ, a repetition rate of 8 kHz, and М <2. © 2017 Optical Society of America under the terms of the OSA Open Access Publishing Agreement OCIS codes: (140.3580) Lasers, solid-state; (140.3540) Lasers, Q-switched; (140.4780) Optical resonators. References and links 1. R. Iffländer, Solid-State Lasers for Materials Processing: Fundamental Relations and Technical Realizations (Springer Series in Optical Sciences, 2001), Vol. 77. 2. M. N. Zervas and C. A. Codemard, “High Power Fiber Lasers: A Review,” STQE 20(5), 0904123 (2014). 3. J. Speiser, “History, principles and prospects of thin-disk lasers” (German Aerospace Center Institute of Technical Physics, 2014), http://elib.dlr.de/94106/1/Speiser_thin_disk_hist_scaling.pdf. 4. S. G. Grechin and P. P. Nikolaev, “Diode-side-pumped laser heads for solid-state lasers,” Quantum Electron. 39(1), 1–17 (2009). 5. B. Oreshkov, K. Georgiev, S. Gagarskiy, V. Rusov, N. Belashenkov, A. Trifonov, and I. Buchvarov, “High Energy, High Repetition Rate, Q-Switched Diode Pumped Nd:YAG Laser Using an Unstable Resonator with Variable Reflectivity Mirror”, (CLEO EUROPE, 2017), Book of Abstracts, p.17. 6. N. N. Du Keming, J. Xu, J. Giesekius, P. Loosen, and R. Poprawe, “Partially end-pumped Nd:YAG laser with a hybrid resonator,” Opt. Lett. 23(5), 370–372 (1998). 7. W. A. Clarkson, N. S. Felgate, and D. C. Hanna, “Simple method for reducing the depolarization loss resulting from thermally induced birefringence in solid-state lasers,” Opt. Lett. 24(12), 820–822 (1999). 8. S. Mondal, S. P. Singh, K. Hussain, A. Choubey, B. N. Upadhyay, and P. K. Datta, “Efficient depolarizationloss-compensation of solid state lasers using only a Glan-Taylor polarizer,” Opt. Laser Technol. 45, 154–159 (2013). 9. J. Sherman, “Thermal compensation of CW-pumped Nd:YAG laser,” Appl. Opt. 37(33), 7789–7796 (1998). 10. J. J. Morehead, “Compensation of laser thermal depolarization using free space,” STQE 13(3), 498–501 (2007). 11. W. C. Scott and M. de Wit, “Birefringence compensation and TEM00 mode enhancement in a Nd:YAG laser,” Appl. Phys. Lett. 18(1), 3–4 (1971). 12. M. Ouhayoun, M. Boucher, O. Musset, and J. P. Boquillon, “A Nd:YAG laser with a vectorial phase conjugate mirror in the gain medium”, in Conference on Lasers and Electro-Optics Europe, France, 10–15 Sept. 2000, paper CTuN3. 13. P. K. Mukhopadhyay, K. Ranganathan, S. K. Sharma, P. K. Gupta, and T. P. S. Nathan, “Experimental study of simultaneous end-pumping to a diode-side-pumped intracavity frequency doubled Q-switched Nd:YAG laser,” Opt. Commun. 256(1-3), 139–148 (2005). 14. H. Kogelnik and T. Li, “Laser Beams and Resonators,” Appl. Opt. 5(10), 1550–1567 (1966). 15. W. Koehner, Solid-State Lasers (Springer Verlag US, 2008). 16. S. De Silvestri, P. Laporta, and V. Magni, “Novel stability diagrams for continuous-wave solid-state laser resonators,” Opt. Lett. 11(8), 513–515 (1986). 17. O. Svelto, Principels of Lasers (Springer Verlag US, 2010), Chap.5. Vol. 25, No. 26 | 25 Dec 2017 | OPTICS EXPRESS 32457 #307970 https://doi.org/10.1364/OE.25.032457 Journal © 2017 Received 26 Sep 2017; revised 20 Nov 2017; accepted 20 Nov 2017; published 13 Dec 2017 18. Y. V. Shen, T. M. Huang, C. F. Kao, C. L. Wang, and C. S. Wang, “Optimization in scaling fibercoupled laserdiode end-pumped lasers to higher power: Influence of thermal effect,” STQE 33(8), 1424–1429 (1997). 19. J. K. Jabczynski, K. Kopczynski, and A. Szczesniak, “Thermal lensing and thermal aberration investigations in diode-pumped lasers,” Opt. Eng. 35(12), 3572–3578 (1996). 20. A. Y. Vinokhodov, M. S. Krivokorytov, Y. V. Sidelnikov, V. M. Krivtsun, V. V. Medvedev, and K. N. Koshelev, “High brightness EUV sources based on laser plasma at using droplet liquid metal target,” Quantum Electron. 46(5), 473–480 (2016).
Introduction
Q-switched lasers are widely used for many research and technological applications such as precision surface treatment, engraving, micro-and nano-scale surface structuring/patterning, laser peening etc. Various demanding applications requires high average power and high peak power (on the order of hundreds of kilowatts up to several megawatts [1]), as well as a excellent optical beam quality.Many different solutions for selective and non-selective pumping of laser active elements have been suggested and realized to date to fulfill simultaneously the above criteria.The most impressive progress in developing sources with the output power on the order of several watts, up to tens of kilowatts and good output beam quality has been achieved in fiber lasers with a diode pumping [2].However, achieving a high peak power in compact and low-cost Q-switched all-optical-fiber laser system is practically impossible.High pulse energy and peak power are limited by non-linear processes (in particular, the cascade frequency conversion via Raman scattering) and laser-induced damage of surfaces and the fiber material itself.
An average output power of tens of kilowatts level is achieved also in diode-pumped solid-state lasers (DPSSL) based on disc active elements [3].However, the peak power of these lasers in Q-switched mode is limited by the amplified spontaneous emission (ASE) in direction perpendicular to the lasing axis.
State of the art systems lasing with both high peak and high average power rely on master oscillator (semiconductor, microchip-or minilaser with short rezonator) operating at high repetition rate with multistage and multipass amplification stages (master-oscillator power amplifier (MOPA) systems).This direction leads to rather complex and expensive laser systems.Besides, the amplification is usually accompanied by the beam quality deterioration due to thermo-optical distortions in the laser amplifier.Moreover, amplification-based systems could benefit by use of maximum possible pulse energy at the input of amplifying stages.So using of powerful master oscillator allows to decrease the number of the amplifying stages and therefore to simplify the MOPA system.
In order to achieve a low beam divergence, and consequently, a high intensity of the output radiation in short-base lasers, various techniques are used in addition to the traditional intra-cavity telescopes.There are a number of different methods for addressing gain coefficient distributions in transverse direction [4,5], or implementation of hybrid resonator setups [6].The heat transfer problems accompanying generation of a medium to high output powers are solved by various techniques for compensation of thermo-optical distortions in active elements [7][8][9][10][11][12].
When laser pulse repetition rates exceed the inverse fluorescence lifetime of the laser transition, master oscillators with continuous pumping (continues wave (CW) pump mode) are usually used.For a number of applications the laser should operate in a high-frequency pulsed-periodical mode when the maximum achievable pulse energy at the shortest possible pulse duration is required.To achieve this goal in Q-switched operation mode, the gain factor of the active medium should be well above the lasing threshold at the moment when the generated pulse starts and the cavity roundtrip time (i.e.cavity optical length) should be as short as possible.The first requirement can be achieved either by increasing the pump level per unit of resonator length (which is limited by thermal fracture threshold) or by increasing the active medium length (e.g. by adding more pumped active elements into the resonator).However, it leads to the increase of the resonator's optical length with following increase of the cavity round trip time and consequently of the pulse duration.
An alternative method to achieve simultaneously high average and peak power with relatively short pulse duration and high beam brightness is to combine transverse-and longitudinal diode pumping of the active rod.This approach benefits for getting advantages of the both pump laser technologies.In this scenario, a local zone is created along the axis of the active rod where the pump density and corresponding gain coefficient are much higher than when averaged over the active element's total cross-section.When the intracavity modulator is opened, the pulse generation starts exactly within this region, and since the gain coefficient there significantly exceeds the threshold value, the pulse duration should be rather short.Due to the specific design of the resonator, this pulse expands transversely over the total crosssection of the active medium until the saturation energy is reached.The peripheral regions of the active medium should stay lower a steady state lasing threshold.Under this condition the transverse distribution of the output radiation is expected to correlate with that of the seed pulse formed in the paraxial region.A similar approach was used in a Q-switched Nd:YAG green laser using intra-cavity frequency doubling in KTP [13].However, this experimental study was focused mostly on the enhancement of the green output power by adding properly adjusted longitudinal pump power in the end-pump channel.Despite the apparent simplicity of the suggested approach, its feasibility is not obvious because of non-homogeneous distribution of the imaginary part of the complex refraction index in the laser's active element in the presence of several competing processes, namely, pumping, absorption saturation and lasing.Hence, in order to achieve optimum parameters of the laser output in Q-switched mode of operation with simultaneous transverse-and longitudinal diode pumping of an active element a more comprehensive analysis is required.
Here, we report the results of numerical simulation of the suggested approach and the experimental realization with commercially available components is presented as well.
Analysis of the stability regions for a resonator with hybrid pumping
The "hybrid pumping" assumes that the following two zones can be distinguished within the active element when end-pump and side-pump are used together: -A paraxial zone corresponding to the diameter of the beam of the longitudinal pump, where the action of both the end-pump and side-pump is combined; -A peripheral zone, where only the side-pump is present.
In the simplest case of an empty linear resonator its geometry is determined by the optical base L b and the radii of curvature of the rear mirror and output coupler (R1 and R2, respectively) [14].In the absence of the pump radiation, the positions of the initial points in the stability diagram are given by the parameters g1 0 = 1 -L b / R1 and g2 0 = 1 -L b / R2.The approaches necessary to analyze resonators that contain additional intracavity components with optical power were discussed in [14][15][16][17].The homogeneous side-pump creates a thick lens extending over the total active element.It can be approximated to a reasonable accuracy by a thin parabolic lens with the principal planes located in the vicinity of the active rod center [14,15].A one-sided end-pump creates a region with optical pass distance difference in radial direction [18][19].Usually, it is located near the pumped end of the active element and extends only over a fraction of its aperture.
In the case of simultaneous diode-end-pumping and diode-side-pumped active rod the conditions for the formation of the output radiation caustic are different for the paraxial and the peripheral zones.The conditions are determined both by the combination of the parameters g1 0 , g2 0 and the parameters of thermally induced lenses which are result of the transverse distribution of the field in end and side pump respectively.
The conditions are determined both by the combination of the parameters g1 0 , g2 0 and the parameters of two or more thermally induced lenses with different optical pass distance distributions.
The analysis of the stability of the resonators with the ray transfer matrix technique shows that the values of the modified resonator parameters g1 and g2 depend on the refracting power of the lenses caused by the end-and side-pump (D e and D s respectively) and the distances between the lenses and the resonator mirrors.The focal length of these thermal lenses F e = D e −1 , F s = D s −1 and the localization of the principal planes are determined by the distribution of the absorbed pump power in the active elements as well as by thermo-physical and thermo-optic characteristics of the laser medium and the peculiarities of the design of the particular cooling system.Using the matrix approach [17] the parameters g1 and g2 can be estimated with the following expressions: ( ) where D e and D s are the optical powers of the lenses induced by the action of the end-pump and side-pump, respectively; z E is the distance from the back mirror with a radius of curvature R1 to the first principal plane of the thermal lens D E , z D is the distance between the second principal plane of the thermal lens D e and the first principal plane of the thermal lens D S , z S is the distance between the second principal plane of the lens D S and the output coupler with a radius of curvature R2.
The optical length of the resonator is represented as L b ≡ z e + z D + z s if both of the thermal lenses are induced.The parameters g1 and g2 are determined by the complete expressions (1a, 1b) for the paraxial zone where the influences of both of the end-and side-pump are considered.For the peripheral zone the g-parameters are practically determined only by the optical power of the D s lens (D e ≈0 in this zone).
We have analyzed several resonator configurations using the matrix approach to find out the hybrid pump power ranges corresponding to different position of the resonator in the stability diagram.
In the simulations we have used the parameters of the laser system studied in our physical experiments given in Section II.
The results of the simulation are presented in Fig. 1 as a 3D g-diagram showing the change in the position of the resonator in the stability diagram in terms of g1 and g2 (horizontal and vertical axes, respectively) against different values of product g1 0 *g2 0 .Parameter g1 0 *g2 0 characterizes the resonator when the pump sources are off and the refractive power of thermally induced lenses is zero (D e = D s = 0).The values of this parameter are presented along the axis normal to plane formed by axes 1 g and 2 g .The position of the corresponding plane of the g-diagram is determined by the values of g1 0 and g2 0 .They depend only on the geometrical parameters of the resonator.For a particular case of a simple two-mirror cavity configuration it depends on the mirrors curvatures and the resonator's length.The diagrams similar to one shown in Fig. 1 are rather useful in making preliminary predictions for the changes in the state of the resonator with a given geometry upon changes in the absorbed pump power.
Low-frequency pulsed pump (f<1 kHz)
At intended laser energy operation at low repetition rate (repetition rates of less than 1 kHz) leads to relatively low average pump power, and therefore the thermally induced lenses have low refractive power.In this case, the corresponding focal length of the lenses F e and F s is much longer than the resonator length L b .The experimental setup is shown in Fig. 2. Fig. 2. [EP] -end-pump source; M1 -dichroic rear mirror (HR@1064 nm, HT@808 nm); [SP] -side-pump diode module with Nd:YAG active rod; TFP -thin film polarizer; M2output coupler (R oc = 78%); W1, W2 -optical wedges; FF -optical filters ; SM -spherical mirror; PWM -power meter, PD -high-speed pin-photodiode with oscilloscope.
In the experimental setup the laser crystal was 60 mm long, 3 mm in diameter, 0.65% doped Nd:YAG rod, AR coated at 808 and 1064 nm.The crystal was end-pumped with a 70-W peak power Jenoptik (JOLD 70 QPXF 1L) multimode fiber-coupled laser diode operating at 808 nm.The radiation of the end-pumped system [EP] was introduced into the central part of the active rod along its axis by a focusing system through the dichroic rear mirror M1.Active rod of the laser was placed into standard side pumped laser chamber [SP] CEO 301C1 (4) equipped with 9 diode arrays (LDM) using 3 fold geometry of pumping (Fig. 3(a) with maximum total CW output power of 180 W. The resonator used plane rear mirror M1 and plane output coupler M2.The position of the focusing plane of the pump radiation was optimized using the criteria of the maximum averaged output power and maximum quality of the output beam with the hybrid pump.Thin film polarizer (TFP) and BBO crystal-based electro-optic switch deliver the Q-switching mode.The parameters of the resonator output emission were recorded with a power meter, a high-speed photodetector and oscilloscope.The thermal lens effect and the output beam quality were measured with a beam analyzer WinCam-D, an auxiliary He-Ne laser and spherical mirror SM.
Figure 3(а) shows the characteristics of the end-pump [EP] and side-pump [SP] modules used in the experiment.Figure 3(b) shows the output beam intensity distributions in the focal plane of the outer spherical mirror (11) when either the side-pump alone or the hybrid pump is used.In the case of the hybrid pump the measured М 2 parameter decreases more than two times compared to the side-pump configuration and it was about 1.15 at the maximum energy of the pump pulses.
The diagrams for the generated pulse's energy and duration on the end-pump module driving current for different levels of side pump peak power are presented in Fig. 3(c), 3(d).They were obtained in a pulsed-periodical pump mode with a pulse duration of 125 µs and a repetition rate of 100 Hz.The results show that in the absence of an end-pump, the maximum generated pulse energy is 1.5 mJ with pulse duration of 35 ns.The introduction of an additional pump into the paraxial zone of the active rod leads to the decrease of the pulse duration by 35-40% whereas the pulse energy increases up to 2.6 mJ.A further increase of the output pulse energy was limited by the laser-induced damage of the antireflection-coated surfaces of the used intracavity optical components.In the absence of side-pump there was lasing at the fundamental (TEM 00 ) mode.The emission pulse width was close to the minimum achieved in the experiments.The output pulses energy was slightly above 0.5 mJ at the maximum current.The combination of the both pump sources provides larger gain factor in the paraxial zone of the active rod, which facilitates a more rapid lasing build-up at the lowest transverse mode.As it has been suggested earlier, the radiation in the fundamental mode acts as a seed for the further development of lasing and essentially determines both the pulse length and the intensity distribution of the output pulses.
Continuous wave pump operation mode
To increase the repetition rate and the average laser power, the hybrid pump scheme was used in a CW mode.In this case, strong effects of thermally-induced lenses and birefringence occur in the laser media.
The experimental setup is shown in Fig. 4. It is similar to that shown in Fig. 2. The difference is in the type of the end-pump source and the optical Q-switch, as well as in the choice of radii of curvature of the resonator mirrors.The latter was necessary due to the formation of two combined thermally-induced lenses with focal lengths comparable with the resonator length.The resonator configuration with a schematic representation of the thermally induced lenses is shown in Fig. 3.
The end-pump source was a fiber coupled module MU55-808 (SvetWheel LLC) with the central generation wavelength of 808 nm.This module has a threshold current of I thr = 2A, maximum current of I max = 8A, and a slope equal to 8W/A.The output optical fiber diameter was 200 µm and fiber numerical aperture was NA = 0.22.We used a focusing objective with the same magnification (≈1).Quartz glass-based acousto-optical switch (AOS) with the carrier frequency of 50 MHz and an operating frequency up to 10 kHz was used for the Qswitch mode.The modulation depth was 35%.The level of losses introduced by the modulator was sufficient for the complete resonator switch-off between the pulses.AOSs of this type exhibit low sensitivity to thermally induced active rod birefringence.The dissipative losses in the resonator due to the radiation depolarization decreased from 35% (electro-optic modulator) to less than 1%.
The measured lasing build-up time in the paraxial zone of the active rod exceeded 650 ns at the maximum pump level.Thus, the change of an electro-optic switch for the acoustooptical modulator has no essential effect on the output pulse duration.In the optimal operation mode of the investigated laser with a hybrid pump, the peripheral zone of the resonator is in the unstable region (magnification coefficient М ≈1.45, see Fig. 5(b)) close to the stability region's boundary (point А0 in Fig. 5(а)).Thus, the gain coefficient in the peripheral zone is nearly own lasing threshold.At the same time, the paraxial zone of the resonator is within the stability region for endpump power in the range of W EP = 2.5-28 W (D = 1.7-5.5 m −1 , see points А1-A2 in Fig. 5(a)).Under the given conditions, the generation starts in the paraxial zone of the active rod with the maximum gain coefficient.Due to the spatial distribution of the gain coefficient and the smallness of the Fresnel number, the quality of the beam is high.Then as the generation pulse develops, the radiation starts filling up the peripheral zone due to diffraction and configuration of unstable resonator.Thus the "seed" radiation from the paraxial zone is further amplified in the peripheral one.The quality of the beam gets worse in the course but remains high enough as compared to that of a typical side-pump (Fig. 6).The output characteristics of the laser generator with a continuous hybrid pump for the chosen combination of the resonator parameters and pump power are given in Fig. 7.
Conclusion
It has been shown that using a combined diode end-pump and side-pump in a compact solidstate Q-switched lasers is beneficial for achieving both a high average power, which is typical for side-pumped lasers, and a high emission brightness and peak power, pertaining to endpumped lasers.The stability diagram of the resonator with two cases of thermal lenses has been investigated.A laser oscillator with a single Nd:YAG 3x60 mm active rod using simultaneously an end-and a side-pump has been shown to produce pulses with energies of up to 3 mJ, pulse duration below 20 ns, repetition rate up to 8 kHz at M 2 <2, and average power exceeding 20 W. The laser is intended as seed generator in XUV radiation sources based on laser plasma using the pre-pulse technique [20].
Fig. 1 .
Fig. 1.Stability diagram of the plane-spherical resonator with two thermally induced lenses with variable optical powers D e , and D s .The output coupler's curvature radius is R2 = −500 mm (а), R2 = ∞ (b), R2 = 500 mm (c), resonator length is L b = 135 mm.Zone I represents the stable region, zones II show the unstable regions for the resonator.When the thermal lenses refractive power D e and D s changes (see destination of corresponding axes in the plane (a)), the current values of parameters g1 and g2 of the resonator will fall inside the limiting region.An example of the parallelogram-like regions formed by points a, a', b' and b is shown in the plane (b) of Fig. 1.The regions boundaries depend on the induced thermal lenses parameters and the position of the initial point "a".The latter is given by the pair of parameters g1 0 and g2 0 .The 3D-diagram shown in Fig. 1 is plotted for resonators with a flat rear mirror (R1 −1 = 0) and an output coupler with various radii of curvature, R2.The position and the shape of each region a-a'-b'-b are determined by the powers of the thermal lenses and the values of g1 0 *g2 0 .For illustrative purposes, the figure shows the planes corresponding to the unstable (a), flat (b), and stable (c) empty resonators.The optical power of the thermally induced lens corresponds to a color temperature whose scale is shown on the right.The color temperature within the closed a-a'b'-b regions is introduced according to the total optical power supposed to be in a first assumption D ≈De + Ds.The lines inside the a-a'-b'-b regions correspond to the same values of total optical power of thermal lens (isochoric lines).The diagrams similar to one shown in Fig.1are rather useful in making preliminary predictions for the changes in the state of the resonator with a given geometry upon changes in the absorbed pump power.
Fig. 3 .
Fig. 3. а) Side pump scheme, transversal distribution of fluorescence in the active rod and power parameters of [SP] and [EP] systems (I SP , I EP are the side-and end pump systems working current values respectively; b) output beam intensity distributions for the side-pump (1) and hybrid pump (2) operation modes; c, d) pulse energy and duration vs end-pump peak power P EP for different levels of side pump peak power P SP.
Fig. 4 .
Fig. 4. Scheme of the laser with continuous hybrid pump.TH E and TH S -thermally induced lenses caused by end-and side pump radiation.The radii of curvature for the output coupler were chosen to compensate the bifocal thermal lenses in the active rod (0.65%Nd:YAG, [100]).The optical power of the thermally induced lenses in the paraxial and the peripheral zones of the laser element can be managed via a change of the driving currents I EP and I SP , variation of the laser diodes temperature and the position of the active rod.All these parameters affect the radiation's caustic formation.Figures 5(a) and 5(b) show the calculated boundaries of the g -parameters in the paraxial region of the resonator corresponding to the change in the end-pump power W EP in the range 0-50W.The lines a-a', b-b' correspond to the side-pump power values of W SP = 0 and 180 W, respectively.The lines a-b, a'-b' correspond to the limiting values of the end-pump of 0 and 50 W.The diagram corresponds to one of the planes in a 3D-diagram in Fig. 1 for the given radius of curvature of the convex output coupler mirror R2 = −500 mm used in the experiments.In the absence of the side-pump (W SP = 0), the paraxial zone of the resonator becomes stable at W EP = 12 W (D ≈1.5 m −1 , I EP = 2.2 A), then the resonator shifts to an unstable region at W EP = 39 W (D ≈5.5 m −1 , I EP = 6 A) (see line a-a' in Fig.5(a)).The peripheral zone of the resonator becomes stable upon increase of the side-pump (see the line a-b in Fig.6(a) only at W SP = 90 W (D ≈1.7 m −1 , I SP = 18 A).In the optimal operation mode of the investigated laser with a hybrid pump, the peripheral zone of the resonator is in the unstable region (magnification coefficient М ≈1.45, see Fig.5(b)) close to the stability region's boundary (point А0 in Fig.5(а)).Thus, the gain coefficient in the peripheral zone is nearly own lasing threshold.
Fig. 5 .
Fig. 5. Calculated dependences for the resonator with an output coupler radius of curvature R2 = −500 mm: a) g1g2 stability diagram; b) Magnification coefficient M for the resonator in the unstable configuration region.The area without isolines corresponds to the stable generation regime (M = 1).
Fig. 7 .
Fig. 7. Experiment using CW end and side pumping -Stage II.Results at Q-switched repetition rate f = 8 KHz, a) Output pulse energy vs end-pump power P EP (P EP = W EP in the case of CW pumping); b) pulse energy vs side-pump power P SP at P EP = 32 W corresponding to the max output energy in Fig. 7(a). | 6,407.8 | 2017-12-25T00:00:00.000 | [
"Physics",
"Engineering"
] |
AC Loss in REBCO Coil Windings Wound With Various Cables: Effect of Current Distribution Among the Cable Strands
To construct large-scale superconducting devices, it is critical to enhance the current carrying capability of superconducting coils. One practical approach is to utilise assembled cables, composed of multiple strands, for the winding. There have only been a few investigations of the dependence of current distribution among the strands on the AC losses of the cables and coils wound with these cables. In this work, we studied three types of cables; 1) 8/2 (eight 2 mm-wide strands) Roebel cable, 8/2 Roebel; 2) two parallel stacks (TPS) which have the same geometrical dimensions as the Roebel cable, 8/2 TPS; and 3) an equivalent four-conductor stack (ES) comprising four 4 mm-wide conductors, 4/4 ES. We proposed a new numerical approach that can achieve equal current sharing and free current sharing among the strands. We examined the loss behaviour of all three types of straight cables and two coil assemblies comprising two, and eight, stacks of double pancakes (DPCs) wound with these cables, respectively. 2D FEM analysis was carried out in COMSOL Multiphysics using the $H$ – formulation. The stacks were modelled in parallel connection, with the same electric field applied to all strands, so that current is distributed between the conductors. Simulated transport AC loss results in the straight 8/2 Roebel cable, 8/2 TPS and 4/4 ES were compared with previously measured results as well as each other. The numerical AC loss results in two coil windings 2-DPC and 8-DPC wound with the 8/2 Roebel cables were compared with the results in coil windings wound with the 8/2 TPS and 4/4 ES. No transposition was introduced at the connection between double pancakes, in order that current can be shared among the strands in the 8/2 TPS and 4/4 ES. The results indicate that AC loss in the straight 8/2 TPS and 4/4 ES is larger than that in the 8/2 Roebel cable. Current is more concentrated in the outer strands for the straight 8/2 TPS and 4/4 ES than the 8/2 Roebel cable and causes greater AC loss than the 8/2 Roebel cable. The 8-DPC coil winding wound with the 8/2 Roebel cable has the smallest loss and the coil winding wound with the 8/2 TPS has the greatest loss at two current amplitudes investigated. At 113 A, the AC loss value in the 8-DPC coil winding wound with the 8/2 TPS is 2.2 times of that wound with the 8/2 Roebel cable.
I. INTRODUCTION
Large scale high temperature superconducting (HTS) power applications present promising solutions for the electrical The associate editor coordinating the review of this manuscript and approving it for publication was Su Yan .systems within the transportation and energy sectors.Thanks to the remarkable capabilities enabled by HTS technology, HTS applications encompass high efficiency and power density, compact size, and lightweight design, etc.Many R&D projects and studies have been carried out on superconducting applications in the transportation and energy sectors, including superconducting transformers [1], [2], [3], [4], power transmission cables [5], [6], rotating machines [7], [8], fault current limiters [9], [10], Superconducting Magnetic Energy Storage (SMES) [11], [12], magnets [13], [14], etc.The integration of HTS-based devices in these sectors and systems enables us to tackle decarbonation and energy challenges more effectively and efficiently.
To achieve high power rating and high-power density, it is essential for the HTS devices to be wound with superconducting cables comprised of multiple tapes to carry high current [15], [16].One example of increasing current-carrying capability is Roebel cable, which is a fully transposed cable assembled by multiple punched ReBCO strands [15], [17], [18], [19], [20].Another technique is to assemble ReBCO tapes into a simply stacked cable with or without transposition of the tapes [21], [22], [23], [24], [25].In a transposed cable, the current in each strand is equally distributed.Whereas, in a simple stack without transposition, current distributes unevenly in each tape/strand, for example, with greater current flowing in outer tapes and less current in the inner tapes [26], [27], [28].This leads to an undesirable AC loss from the cables.In a coil winding wound with a simple stack, the unequal current sharing between the tapes will be more severe due to the difference in inductance for each tape and hence leads to large AC loss in the coil winding [29].It is important to note that these losses are closely linked with the device efficiency and lead to a thermal load that must be removed by the cooling systems, which has a high cooling penalty [30].Thus, for high current AC applications, the unequal current sharing between the tapes in an assembled cable is a critical issue concerning losses that must be studied and solved.
In our previous research, we explored the impact of unequal current distribution in non-inductive bifilar coils, wound with Roebel cable and simple stacks, for superconducting fault current limiter [31].We experimentally compared the measured transport AC losses of Roebel cables and simple stacks with, and without, transposition.However, there is limited work published on the AC loss in superconducting inductive coil windings comprised of double pancake coils wound with transposed, and non-transposed, cables and the impact of unequal current sharing on AC loss in the coil windings.
In this work, three types of cables are studied; i) 8/2 (eight 2 mm-wide strands) Roebel cable, 8/2 Roebel; ii) two parallel stack which have the same geometrical dimensions as the Roebel cable, 8/2 TPS; iii) an equivalent four-conductor stack comprising four 4 mm-wide conductors, 4/4 ES. Figure 1(a) shows the schematic of the three cables.Firstly, the simulated transport AC loss results in the straight 8/2 Roebel cable were compared with measured results as well as 8/2 TPS and 4/4 ES.Next, we evaluated the loss behaviour of two coil windings comprising two, and eight stacks of double pancakes (DPCs) wound with these cables, respectively.The numerical AC loss results in the two coil windings wound with the 8/2 Roebel cables were compared with the results in the coil windings wound with 8/2 TPS and 4/4 ES.
This paper is structured as follows.Section I introduces the importance and motivation of the work.Section II explains the numerical methods and specifications of the studied cables.Section III validates the numerical model through comparison with experimental results.Section IV shows the results and the analysis of straight 8/2 Roebel cable, 8/2 TPS and 4/4 ES.Section V reports the results and the analysis on superconducting coils wound with the 8/2 Roebel, 8/2 TPS and 4/4 ES.Conclusion is presented in section VI.
II. NUMERICAL APPROACHES A. SIMULATION FOR STRAIGHT CONDUCTOR AND COILS
A two-dimensional (2D) FEM model employing Hformulation was built in COMSOL to study the AC loss characteristics of the straight 8/2 Roebel cable, 8/2 TPS, 4/4 ES, as well as stacks of coil windings wound with the three cables [32], [33], [34], [35].Two state variables were implemented, H = [H x , H y ] T , where H x and H y are magnetic fields parallel and perpendicular to the tape wide-surface.A power law relation was used to represent the highly nonlinear relationship of local electric field E and local current density J in the HTS layer: where E 0 = 1 µV/cm, n = 30 is the power index derived from V -I characteristic; J c (B) is the critical current density dependence on local magnetic field and lateral position.In this work, a modified Kim model was adopted for J c (B) profile, as expressed in (2).k, α, and B 0 are curve fitting parameters and the values used in this study are 0.3, 0.7, and 42.6 mT, respectively.
The governing equation ( 3) is derived by combining Faraday's law, Ampere's law, Ohm's law, and constitutive law.Equation ( 4) is obtained by substituting H x and H y into (3).µ 0 is the magnetic permeability of the free space.µ r is the relative magnetic permeability.Here, µ r = 1 holds true for all the cases studied in this work.
For simplification, 2D axisymmetric model was employed to simulate the electromagnetic behavior in coil windings, with two state variables H = [H r , H z ] T [36].The governing equation is shown in (5), [16], [17], [18].Table 1 lists the specifications of the cables.Figure 2 illustrates the modelled cross-sections of the three cables and the symmetry boundary conditions applied in COMSOL software.Figure 2(a) represents the modelling of 8/2 Roebel cable and 8/2 TPS since they share the same cross-section, while Figure 2(b) simulates the 4/4 ES.Due to the symmetry, only a quarter model is simulated to reduce computing time.To consider the symmetry, H x = 0 is applied on x-axis, and H y = 0 is applied on y-axis.
Two coil windings wound with these three cables, namely 8/2 Roebel, 8/2 TPS, and 8/2 ES, were investigated, including a coil winding comprised of a stack of two double pancake coils (2-DPC), and eight double pancake coils (8-DPC), respectively.Table 2 lists the specifications of the coil windings wound with the three cables.
Figures 3(a)-(c) show the schematic of current distribution in a 1-DPC wound with the 8/2 Roebel, 8/2 TPS and 4/4 ES.In figure 3(a), current is equally distributed in each strand of the 8/2 Roebel cable due to its fully -transposed nature.Pointwise constraint is applied in the partial differential equation (PDE) module for each strand.In figure 3(b), free current-sharing is defined in the 8/2 TPS cable, where the strand carrying the same amount of current is denoted using the same colour.This case is similar to the ''coupled at end'' case in [25].The current-sharing pattern among all strands is repeated for each turn due to the structure of the coil winding.Similarly, in figure 3(c), free current-sharing is defined for the 4/4 ES cable.The different colour in each strand of the 4/4 ES denotes free current sharing.In a 1-DPC level, the current sharing pattern is linked and repeated for each turn for this case as well.
III. MODEL VERIFICATION
Figure 4 shows the simulated transport AC loss in the straight 8/2 Roebel cable at 59 Hz, plotted as a function of the Roebel cable current and compared with the measured AC losses of the cable [36].In the figure, the theoretical transport AC losses estimated by Norris-ellipse (N-e) and Norris-strip The simulated AC losses fall between N-e and N-s in the low current region and agrees well with N-s at high current amplitudes.The simulated losses of the 8/2 Roebel cable have a good agreement with the measured ones although slightly smaller.
Figure 5 shows the simulated AC loss in a 2-DPC coil winding compared with measured results.The modelling method for the coil is identical to what has been presented in Chapter II.The 2-DPC coil winding was wound with 4 mmwide SuperPower wires.The measurement was carried out at 24.63 Hz.Calculation reasonably reproduces the measured values, and the results validate the simulation method for coil windings, which has been used to produce the results for 2-DPC and 8-DPC wound with the three cables in Chapter V. the same current for the whole current range in the 8/2 Roebel cable.In the 8/2 TPS, with increasing I t, turn , current in both the 'inner conductor' and 'outer conductor' increases linearly only until I t, turn /I c0, cable reaches 0.5.The 'inner conductor' in the 8/2 TPS carries much less current than the 'outer conductor' and the difference between the current values is the largest when I t, turn /I c0, cable is around 0.5.The current in the 'outer conductor' in the 8/2 TPS saturates from I t, turn /I c0, cable approximately 0.7 and becomes slightly smaller with increasing I t, turn .On the other hand, current in the 'inner conductor' in the 8/2 TPS becomes larger more rapidly from I t, turn /I c0, cable > 0.5.This clearly indicates current redistribution between the two conductors through the terminals of the 8/2 TPS and the behaviour could be explained using a circuit model described in figure 7.At I t, turn /I c0, cable = 0. 6) and ( 7), we get the ratio of I 1 /I 2,
IV. RESULTS ANALYSIS: STRAIGHT ROEBEL CABLE VS. STACKS
When I is small, R 1 and R 2 are nearly zero; Figures 8(a) and (c) compare the current distribution J /J c in the 8/2 Roebel cable and the 8/2 TPS at I t /I c0 = 0.8, respectively.As shown in figure 8(a), we observe nearly the same pattern of J /J c distribution in ''Outer conductor'' and ''Inner conductor'' at I t /I c0 = 0.8, and both conductors are not yet fully penetrated, that both conductors have a portion of sub-critical area.In contrast, the pattern of J /J c distribution in figure 8(c) is remarkably different.|J /J c | value of the ''Outer conductor'' in the 8/2 TPS is greater than 1 across the conductor width whereas there is 1/3 of the conductor width has less than 1 for the ''Inner conductor''.
Loss power density distribution in the 8/2 Roebel cable and the 8/2 TPS at I t /I c0 = 0.8 are plotted and compared in figures 8(b) and (d).Both ''Outer conductor'' and ''Inner conductor'' have a portion of the conductor width that does not generate AC loss, as seen in figure 8(b), and this is aligned with the fact that both conductors have an area where | J/J c | < 1.However, in figure 8(d), ''Outer conductor'' in the 8/2 TPS generates AC loss across the conductor width, and ''Inner conductor'' has a big area that does not generate AC loss at all due to the low J /J c values in the area.This shows that each strand in the 8/2 Roebel cable behaves broadly equally in generating AC loss; whereas each strand in non-transposed cables behaves unequally in loss generation due to unequal current-sharing.
Figure 9 compares the transport AC loss values in the straight 8/2 Roebel cable, 8/2 TPS and 4/4 ES at 59 Hz.It is observed that for a given cable current, 4/4 ES has the greatest loss while Roebel cable has the smallest loss among the three cable types due to the equal current sharing as explained in figure 6 and figure 8.The difference in the AC loss values between the cables becomes greater with increasing the amplitude of the cable current.
At I t, peak = 151 A and 302 A, AC loss in the 8/2 TPS is 1.06 times and 1.14 times of that in Roebel, respectively.AC loss values in the 8/2 TPS is smaller than 4/4 ES, due to the existence of the horizontal gap [26], [37].At I t, peak = 151 A and 302 A, AC loss in 4/4 ES is 1.47 times and 1.76 times of that in the 8/2 Roebel cable, respectively.
V. RESULTS ANALYSIS: 2-DPC COIL WINDING A. AC LOSS IN 2-DPC COIL WINDING
Figure 10 presents a comparison of the simulated AC loss values in the 2-DPC coil windings wound with the 8/2 Roebel cable, 8/2 TPS, and 4/4 ES and plotted as a function of the coil current amplitude.There is nearly no difference in AC loss values in the coil windings wound with different cables when the coil current is less than 60 A. The difference in the AC loss values between the coil windings wound with the 8/2 Robel cable and those wound with the 8/2 TPS and 4/4 ES becomes greater at high current amplitudes.We attribute the lower AC loss of the Roebel coil winding to the equal current sharing capability.In contrast, the 2-DPC coil winding wound with the 8/2 TPS generates the highest AC loss for all current amplitudes.The AC loss value in the 2-DPC winding wound with the 8/2 TPS is 2.3 times of that wound with the 8/2 Roebel cable at 151 A.
Figures 11 shows the normalised current density distribution, J /J c in the 2-DPC coil windings wound with the 8/2 Roebel cable and 8/2 TPS, respectively, at I t = 113.4 A. As a result of the equal current sharing between strands, the 2-DPC coil winding wound with the 8/2 Roebel cable is equivalent to a 4-DPC coil winding wound with 2 mm strands.In Figure 11(a), we could observe shielding currents (magnetization currents) in the two end coil discs that shield the radial magnetic field component in the end part of the coil winding [19], [36].The fully penetrated region where |J /J c | > 1 is the greatest for the outermost disc and the smallest for the innermost disc.There is un-penetrated or subcritical region even in the outermost disc.In contrast, current in the end pancake coil of the 8/2 TPS is mostly concentrated in the outermost disc and there is almost no shielding current.The induced shielding current for the bottom half of the pancake coil (blue colour) is almost the same as the transport current (red colour), which implies the net current for the bottom half of the PC is almost zero.The inner PC shows similar behaviour as the upper PC although its upper half of the PC does not contain any shielding current.These observations clearly indicate highly unequal current distribution of the coil winding wound with the 8/2 TPS.This behaviour causes high AC loss and hence the 8/2 TPS is not a favourable option for high-current applications.Figure 16 compares the AC loss values in the 8-DPC coil windings wound with the 8/2 Roebel, 8/2 TPS and 4/4 ES at I t = 56.7A and 113.4 A. The 8-DPC coil winding wound with the 8/2 Roebel cable has the smallest loss and the coil winding wound with the 8/2 TPS has the greatest loss at both current amplitudes.At 113 A, loss value in the 8-DPC coil winding wound with the 8/2 TPS is 2.2 times of that wound with the 8/2 Roebel cable.
VI. CONCLUSION
A new numerical approach using the widely used Hformulation in COMSOL Multiphysics is both proposed and demonstrated that can achieve equal current sharing and free current sharing among multiple strands both in straight cables and coil windings.This was done using 2D FEM on three cable case studies, i) 8/2 (eight 2 mm-wide strands) Roebel cable, 8/2 Roebel ii) two parallel stacks (8/2 TPS) which have the same geometrical dimensions as the 8/2 Roebel cable, 8/2 TPS iii) an equivalent four-conductor stack comprising of four 4 mm-wide conductors, 4/4 ES, as well as the windings wound by them, namely the two, and eight stacks of double pancakes (DPCs) wound with these cables.The numerical model was validated by comparing the simulated transport AC loss results in the straight 8/2 Roebel cable with previously measured results.
An electric circuit has been used to explain the current sharing among different strands in the straight 8/2 Roebel cable and 8/2 TPS.The analytical formula can explain the unequal current-sharing among different strands, consistent with the results from the numerical model.
On the straight cable level, we observe nearly the same pattern of J /J c distribution in ''Outer conductor'' and ''Inner conductor'' at I t /I c0 = 0.8 in Roebel cable, and both conductors are not fully penetrated having a portion of sub-critical area.In contrast, the pattern of J /J c distribution in TPS is remarkably different.|J/J c | value of the ''Outer conductor'' in TPS is greater than 1 across the conductor width whereas there is 1/3 of the conductor width has less than 1 for the ''Inner conductor''.In terms of AC losses, it is observed that for a given cable current, the straight 4/4 ES has the greatest loss while the 8/2 Roebel cable has the smallest loss among the three cable types due to the equal current sharing behaviour.The difference in the AC loss values between the cables becomes greater with increasing the amplitude of the cable current.
On the coil winding level, transport current flows at the outer edge of each disc for the 8-DPC winding wound with the 8/2 Reobel cable, while magnetization current is observed in each disc excluding the central disc.The transport current is completely concentrated in the outer disc of each PC in the 8-DPC coil winding wound with the 8/2 TPS, and the inner disc of each PC carries no net current similar to the 2-DPC coil winding case.The result shows that strong unequal current-sharing occurs in the entire 8-DPC coil winding wound with the 8/2 TPS and this, in turn, leads to high loss generation.AC loss value in the end disc wound with the 8/2 TPS is nearly 5 times of that wound with the 8/2 Roebel cable.A zig-zag shaped loss distribution was found for the coil winding wound with the 8/2 TPS due to strong unequal current distribution in the 8/2 TPS strands: high AC loss values correspond to high current concentration and low loss values correspond to low net current flow.
FIGURE 1 .
FIGURE 1.(a) Schematic of the three studied cables (b) Cross-section of the three cables.
FIGURE 2 .
FIGURE 2. Schematic of the numerical model for the three studied cable (a) 8/2 Roebel cable and 8/2 TPS (b) 4/4 ES (only a quarter model was simulated considering symmetry).
FIGURE 3 .
FIGURE 3. Schematic of a 1-DPC wound with the three cables (a) wound with the 8/2 Roebel cable, where equal current flowing in each strand (b) wound with 8/2 TPS, which has the same geometry as the 8/2 Roebel cable, but allowing free current distribution among strands (c) wound with 4/4 ES, which has the identical valid conductor cross-section area but allowing free current distribution among strands.
(N-s) models using the measured cable I c value 164 A are plotted together.The modelling method of the 8/2 Roebel is identical to what we described in Chapter II.
FIGURE 4 .
FIGURE 4. Comparison of measured and simulated AC loss values in the 8/2 Roebel cable.
FIGURE 5 .
FIGURE 5.The simulated results for a 2-DPC coil winding agree with the measured AC loss results.
Figure 6
Figure 6 shows the simulated current distributions in the 'inner conductor' and 'outer conductor' of the 8/2 Roebel cable and 8/2 TPS defined in figure 1 at various I t, turn /I c0, cable values when ωt = 3π/2, where I t, turn is the total current amplitude for the 8/2 Roebel cable and 8/2 TPS.It is worth noting that 'inner conductor' and 'outer conductor' could represent the other three pairs of conductors in the 8/2 Roebel cable and 8/2 TPS due to the symmetry.The 'inner conductor' and 'outer conductor' carry exactly
FIGURE 7 .
FIGURE 7. Schematic of magnetic field coupling among strands (only half of the cable is considered).
3 and 0.7, the current amplitudes of the 'surface conductor' and 'inner conductor' in the TPS are 4.97 A, 23.40 A, and 23.8 A, 42.38 A, respectively.Considering symmetry, the circuit illustrates half cable, i. e. 4 strands in the 8/2 Roebel cable and 8/2 TPS: the 'outer conductor' and 'inner conductor' in the upper right half; the 'outer conductor' and 'inner conductor' in the bottom right half which carry current with the same direction as their upper counterpart conductors.Branch 1 refers to the upper 'outer conductor', branch 2 refers to the upper 'inner conductor', branch 3 refers to the bottom 'inner conductor', branch 4 refers to the bottom 'outer conductor'.Considering current direction in each conductor, the equations for the circuit for the 'outer conductor' and 'inner conductor' can be written as follows, V = (R 1 + jωL 1 ) I 1 + jωM 12 I 2 + jωM 13 I 3 + jωM 14 I 4 (6) V = (R 2 + jωL 2 ) I 2 + jωM 21 I 1 + jωM 23 I 3 + jωM 24 I 4 (7) where R 1 and R 2 are resistance values defined derived from the E-J relationship of the conductors; L 1 , L 2 are self-inductance values of the two conductors, I 1 , I 2 , I 3 , and I 4 are current values in each branch; I is total current for the two branches; and M 12 , M 13 , M 14 , M 21 , M 23 , M 24 are the mutual inductance of branches defined as in figure 7. Due to the symmetry in the cable level, I 1 + I 2 = I 3 + I 4 ; I 1 = I 4 and I 2 = I 3 ; M 13 = M 24 .In addition, L 1 = L 2 = L due to the same geometry and material of the conductors.Substituting the equivalent parameters in equations (
Figures 12
Figures 12 plots and compares the radial magnetic field component, B r , distribution and the flux streamlines around the 2-DPC coil windings wound with the 8/2 Roebel cable and 8/2 TPS, respectively, at I t = 113.4 A. The area filled with large B r in the outermost disc of the 2 DPC coil winding wound with the 8/2 TPS is much greater than that of the 8/2 Roebel cable, while the area filled with large B r in the second disc of the 2-DPC coil winding is smaller than that of the 8/2
Figures 14 (
Figures 14(a) and (b) show the current density distribution in the 8-DPC coil windings wound with the 8/2 Roebel cable and 8/2 TPS, respectively, at I t = 113.4 A. As shown in
figure 15 (
figure 15(a), transport current flows at the outer edge of each disc for the 8-DPC winding wound with the 8/2 Reobel cable, while magnetization current is observed in each disc excluding the central disc due to symmetry to shield the radial magnetic field component in the end part of the coil winding, similar to figure 11(a).Unlike the 8-DPC coil winding wound with the 8/2 Roebel cable, the transport current is completely concentrated in the outer disc of each PC in the 8-DPC coil winding wound with the 8/2 TPS, and the inner disc of each PC carries no net current similar to the 2-DPC coil winding
TABLE 1 .
Specifications of different cables.
TABLE 2 .
Specifications of modelled coil windings. | 6,076 | 2023-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Thermal stress tensor correlators near lightcone and holography
We consider thermal stress-tensor two-point functions in holographic theories in the near-lightcone regime and analyse them using the operator product expansion (OPE). In the limit we consider only the leading-twist multi-stress tensors contribute and the correlators depend on a particular combination of lightcone momenta. We argue that such correlators are described by three universal functions, which can be holographically computed in Einstein gravity; higher-derivative terms in the gravitational Lagrangian enter the arguments of these functions via the cubic stress-tensor couplings and the thermal stress-tensor expectation value in the dual CFT. We compute the retarded correlators and observe that in addition to the perturbative OPE, which contributes to the real part, there is a non-perturbative contribution to the imaginary part.
Introduction
Understanding the non-perturbative structure of quantum field theories (QFTs) at finite temperature is an important challenge of theoretical physics.In particular, thermal fluctuations cannot be ignored when studying real-world, out-of-equilibrium phenomena in, e.g., the quark-gluon plasma or strongly coupled condensed matter systems.Conformal field theories (CFTs) provide a natural starting point for investigating QFTs more generally.The additional symmetries in conformal theories impose powerful constraints on physical observables, even at strong coupling.Achieving a better understanding of general thermal field theories should therefore begin with an investigation of CFTs at finite temperature.
A basic probe in any local field theory is the stress tensor, T µν .The importance of the stress tensor, and correlators thereof, to CFT is hard to overstate.The stresstensor sector is completely universal in two-dimensional CFT, where the infinitedimensional Virasoro algebra allows determining physical observables based on symmetries.In higher dimensions, although the structure of conformal correlators is less constrained and in general they are determined via case-by-case computations, the conformal invariance still fixes two-and three-point stress-tensor correlators uniquely up to a few constants [1].In particular, the two-point stress-tensor correlator at zero temperature is fixed up to an overall coefficient, the central charge C T .
When considering the theory at finite temperature, the stress-tensor two-point functions are no longer fixed by one constant.Instead, these thermal T T correlators are generally theory dependent: they depend on, for instance, the coefficients appearing in the zero-temperature three-point correlators of stress tensors.It is desirable to identify physical limits that make some universal aspects of the thermal correlators manifest, and then devote future efforts to computing non-universal corrections to the correlators.
In this paper, using the AdS/CFT correspondence [2][3][4], we analyze thermal T T correlators at large central charge in a class of holographic CFTs in four dimensions, focusing on a certain universal, near-lightcone regime.Our analysis of the near-lightcone T T correlators is in part motivated by the large body of recent work on thermal scalar correlators and their near-lightcone behavior in spacetime dimensions greater than two .In the context of AdS 3 /CFT 2 such correlators have been well-studied in the literature, e.g., [38][39][40][41][42][43][44][45][46][47][48][49][50][51][52].In d = 4 holographic CFTs, heavyheavy-light-light (HHLL) correlators were compared to thermal two-point functions in [5] and several operator product expansion (OPE) coefficients of multi-stress tensor exchanges were computed in [6] which also observed that the scalar correlator in the near-lightcone limit is unaffected by higher-derivative interactions if one assumes a minimally coupled scalar in the bulk.Corrections to such universality due to non-minimally coupled interactions were discussed in [20].In [10,13,17] the bootstrap procedure for computing HHLL correlators was developed.Subsequently, it was pointed out [27,32] that higher-dimensional scalar correlators near the lightcone share certain similarities with the two-dimensional Virasoro vacuum blocks.Although the underlying mechanism responsible for this remains to be better understood, the time is ripe for investigation of a parallel story for the thermal correlators of stress tensors.
An initial step towards this direction was made in [53], which computed the thermal two-point correlators of stress tensors in d = 4 holographic CFTs dual to Einstein gravity and read off conformal data beyond the leading order in the large C T expansion.It was also observed that some OPE coefficients cannot be determined in the near-boundary analysis of the bulk equations of motion but these coefficients do not affect the near-lightcone T T correlators.Subsequently, ref. [54] included the Gauss-Bonnet (GB) higher-derivative term in the gravitational action in AdS 5 to study what happens when the conformal collider bounds [55] in the dual CFT are saturated. 1It was shown that the thermal stress-tensor correlators near the lightcone take the vacuum form when this happens.
In this paper we point out that the stress-tensor correlators computed in Einstein-Gauss-Bonnet gravity suggest a certain near-lightcone universality, which does not require ANEC saturation.We shall elaborate on this observation in more detail, but the main message is the following: thermal T T correlators near the lightcone are completely determined by three universal functions (depending on polarization).The Gauss-Bonnet term in the bulk Lagrangian only affects the arguments of these functions via corrections to the cubic stress-tensor couplings and the thermal stresstensor one-point function.We hypothesize that this remains the case in more general higher-derivative gravitational theories. 2he analysis in this paper involves correlators in momentum space, and we mostly focus on retarded correlators in the near-lightcone regime.To show that they have a universal structure, we identity a suitable limit in momentum space and show that the equations of motion of gravitational fluctuations in this limit take the same form as those in Einstein gravity.(These reduced bulk equations of motion isolate the contributions of the leading-twist operators to the thermal T T correlators in the dual CFTs.)The near-lightcone T T correlators depend on a single parameter, α ∼ q + (q − ) 3 /T 4 , where q ± are the lightcone momenta and T is the temperature.The expansion in the inverse powers of α is essentially an OPE (where [55]: where the central charge can be expressed as In this paper we are interested in thermal stress-tensor two-point functions on spatial R3 which involve three independent polarizations, mapping separately to the three different conformal collider bounds.The single-stress-tensor exchange contribution to thermal T T correlators near the lightcone, as shown by [58] using the stress-tensor OPE, are directly proportional to the same combinations of ( a, b, c) appearing in the conformal collider bounds. 3At higher orders in the OPE, in large-C T CFTs, the thermal T T correlators receive contributions from multi-stress-tensor exchanges.In [53], the OPE limit of the thermal T T correlators in holographic CFTs dual to Einstein gravity was studied using the thermal conformal blocks, including double-stress tensors ∼ [T 2 ] J with dimension 8 + O(1/C T ) and the corresponding CFT data was read off by comparison with a bulk computation.
It is interesting to ask what happens when one includes higher-derivative corrections to Einstein-gravity which modifies, in particular, the stress-tensor OPE coefficients ( a, b, c).In the context of holographic CFTs dual to Einstein-Gauss-Bonnet gravity, the thermal T T correlators were examined in [54] in position space.It was observed that, in the near-lightcone regime, the saturation of a CCB, i.e., ANEC saturation, implies that the corresponding correlator takes the vacuum form, independent of temperature.To gain a broader understanding of the near-lightcone dynamics of the thermal correlators and their possible universal behavior, in this paper we study these thermal correlators away from ANEC saturation. 4efore proceeding, let us set up some notations.In any CFT, the thermal onepoint function of an operator O ∆,J on S 1 β × R 3 with dimension ∆ and spin J is fixed, see e.g.[61,62] where β is the inverse temperature and e µ is a unit vector on the thermal circle S 1 β .For our discussion of the near-lightcone correlators, it will be further useful to define the following quantities: where C scalar,shear,sound are defined in (2.1) -(2.3).
We will study the stress-tensor correlators integrated over the xy-plane: dxdy ⟨T µν (t, z, x, y)T ρσ (0)⟩ β . (2.7) The stress-tensor correlators can be classified into three independent channels, see, e.g., [63].In the lightcone coordinates x ± = t ± z in Lorentzian signature, we consider the limit x − → 0, with fixed x − (x + ) 3 β −4 .In any four-dimensional CFT with no operators with twist less than or equal to two other than the stress-tensor, the thermal T T correlators in the near-lightcone limit are given by The leading and subleading terms in (2.8) -(2.10) are universal, but the higher-order terms a priori are model-dependent.These higher-order terms contain contributions from the operators with larger dimensions -in holographic CFTs, they include multistress tensor operators denoted as [T k ] J .
Position-Space Correlators in Holographic Einstein-Gauss-Bonnet Theory
Here we make an observation based on the thermal T T correlators obtained holographically using Einstein-Gauss-Bonnet gravity [54].Denote the dimensionless Gauss-Bonnet coupling as λ GB . 6We introduce a parameter which will help simplify expressions.The limit κ → 1 recovers Einstein gravity.The coefficients ( a, b, c) can be related to the Gauss-Bonnet coupling: ) . (2.12) The conformal collider bounds, (2.1) -(2.3) translate to, respectively, Now we make the following observation: the near-lightcone T T correlators in holographic CFTs dual to Einstein-Gauss-Bonnet gravity computed in [54] can be recast in the following way: where (2.17) While the first and second term are fixed by conformal symmetry as was shown in (2.8) -(2.10), one can see that, even at O(α 2 ), the dependence on the Gauss-Bonnet coupling can be absorbed into the parameters α, which depend on the thermal onepoint function of the stress tensor (∼ b T β −4 ) and ( a, b, c) through the particular combinations appearing in the conformal collider bounds. 7This observation hints at the following intriguing possibility: multi-stress tensor contributions to the nearlightcone T T correlators in holographic CFTs might be fixed by (the k-th power of ) the single-stress-tensor contribution.Exploring such a possibility and its consequences is the underlying motivation of the present work.
To see this universality in position space we note that near the lightcone the corresponding reduced EoMs in Einstein-Gauss-Bonnet gravity obtained in Sec.3.2 of [54] are identical to the ones in Einstein gravity, after performing suitable rescalings of the coordinates.This is easy to see in the scalar and shear channels, while the sound channel is technically more complicated when analyzed in position space.In the next section we shall analyze the EoMs in momentum space and show the universality in all channels.
Before moving to the momentum-space analysis, let us conclude this section with a technical remark: the structure of the higher-order terms, denoted by dots in (2.14) -(2.16), in fact slightly differs from the first three terms we listed -besides the corresponding dependence of α scalar , α shear and α sound , the higher-order terms are multiplied by a log(−x + x − ) piece.These arise for two different reasons, one is a contribution due to the anomalous dimensions of the multi-stress tensor operators and the other is because we consider the integrated correlator.We discuss this more in Appendix A where we perform the Fourier transformation of the position-space correlators.
Momentum-Space Correlators
In this section, we will study momentum-space thermal T T correlators and show that the near-lightcone correlators computed in the holographic Einstein-Gauss-Bonnet theory are universal: they are determined by three universal functions, depending on three polarizations.We will hypothesize that this might be the case for all holographic theories.In subsequent sections, we will compute these functions.
Let us give a brief review on Einstein-Gauss-Bonnet gravity.The action in five dimensions is given by The theory admits a black-hole solution [64,65]: where f (r) and f ∞ are The parameter r + is the location of the black-hole horizon.We focus on a planar horizon.In the following we set the AdS radius To study the equations of motion of gravitational fluctuations, we consider the metric perturbation h µν = h µν (r)e −iωt+iqz , with the momentum along the z-direction, and adopt gauge-invariant quantities following the recipe in [63].The radial gauge h rµ = 0 is used.The fluctuations are classified into three independent channels: scalar, shear, and sound.In Einstein-Gauss-Bonnet gravity, the corresponding gauge invariants are given by [66]: where (w, q) = 1 2πT (ω, q) are the dimensionless frequency and momentum; T is the Hawking temperature.The linearized equations of motion of gravitational fluctuations in Einstein-Gauss-Bonnet gravity were worked out in [66].(See [67][68][69][70][71][72][73][74][75][76][77][78][79][80][81][82][83] for more recent applications of Einstein-Gauss-Bonnet holographic gravity.)In this work, we are interested in the near-lightcone regime of the correlators.
Defining a coordinate u = r 2 + /r 2 , the equation of motion in each of the three channels can be written as a second-order differential equation: The channel-dependent coefficients A and B are given in Appendix C.
Equations of motion in the near-lightcone limit
In this work, we are interested in the near-lightcone limit.Denote q ± = w ± q.We consider the following limit: This limit isolates contributions from the leading twist operators, which corresponds to zooming in on the near-boundary region in the bulk.For the position-space stresstensor correlators, the corresponding near-lightcone limit in the bulk was discussed in [54].
We next show that, in the limit (3.8), the equations of motion in Einstein-Gauss-Bonnet gravity reduce to those in Einstein gravity.
Scalar Channel: First we derive the reduced equation of motion in the scalar channel.In the limit (3.8), the equation of motion at leading order can be written as The observation is that, after rescaling the variables the equation of motion (3.9) becomes Since this equation is completely independent of κ, we conclude it is identical to the equation of motion in Einstein gravity in the same limit.
Shear Channel: The equation of motion in the limit (3.8) is After performing the rescalings we find which is identical to the equation of motion in Einstein gravity in the same limit.
Sound Channel: In the limit (3.8), the sound-channel equation becomes Performing the rescalings we obtain the same equation as in Einstein gravity: In the Einstein gravity case, κ = 1, one can verify that u = u r , α = α r in all channels.
In summary, the momentum-space reduced equations of motion in the three different channels can be written as where K(α r , u r ) is channel-dependent: Hence, we have observed that the reduced equations of motion in the holographic Einstein-Gauss-Bonnet theory take the same form as the ones obtained in Einstein gravity.
We will next analyze the action and show that the near-lightcone T T correlators are determined by three universal functions which correspond to the three independent polarizations.
Thermal correlators from holography
Here we compute holographic thermal T T correlators G µν,ρλ in four spacetime dimensions.The symmetries of the theory imply that the momentum-space retarded correlator has the following form [63]: where G scalar , G shear and G sound are three independent scalar functions of momenta and the tensor structures L µν,ρλ , S µν,ρλ and Q µν,ρλ are fixed by the symmetries, see, e.g., [63].If not stated otherwise, we use Minkowski signature.
Following the Lorentzian AdS/CFT dictionary [84][85][86][87][88] we impose incoming boundary conditions at the horizon, u → ∞.To compute the correlators we need the O(Z 2 ) on-shell action in all three channels, which is given by [66] where C T is the central charge of the holographic Einstein-Gauss-Bonnet theory.
The correlators are given by where A and B are the coefficients in the near-boundary expansion:9 Let us adopt a new variable where we used (3.10), (3.13), (3.16) and the relation We can now rewrite the equations of motion (3.18) as Note that x → 0 is the boundary limit while x → ∞ corresponds to the black hole horizon.
Before solving the equation (3.28), let us consider a formal analysis of the nearboundary structure of the correlators.The near-boundary expansion up to quadratic order is the same in all channels and reads where a and b are functions of α r .The coefficients A and B can be related to a, b in the following way: We can now write the correlator in e.g. the scalar channel as where we have ignored terms analytic in momenta. 10Note that the ratio f = b a only depends on q ± through α r .The function f scalar can be obtained via after we solve for Z(x) from the corresponding equation of motion.
Analogously, in the other channels we obtain The functions f shear (α r ) and f sound (α r ) are again defined as the ratios of the corresponding coefficients in the near-boundary expansion of Z(x).
In the next two sections, we will compute the functions f scalar,shear,sound (α r ) first perturbatively in 1/α r and then numerically.A few comments are in order: • We again emphasize that the near-lightcone T T correlators in the holographic Einstein-Gauss-Bonnet theory are expressed in terms of the same functions (there are three of them, which corresponds to the three independent polarizations) as the ones that appear in pure Einstein gravity.It is possible that this universality holds true more generally, going beyond the holographic Einstein-Gauss-Bonnet theory.We rephrase it in the language of the stress-tensor threepoint couplings below.
• The function f has a perturbative expansion in 1/α which is basically an OPE, and also non-perturbative terms of the type e −α 1 4 .The non-perturbative terms correspond to tunneling under the potential barrier in the Schrödinger equation which can be obtained from (3.28) and are sensitive to the boundary conditions at the horizon (x→∞), as we explain in Section 5.
• The perturbative expansion is not sensitive to the horizon boundary conditions -it is equivalent to the OPE which can also be performed in position space.
In Appendix A, we explicitly match several terms between the position-and momentum-space expansions.
Let us now point out that the first term in the perturbative expansion of f is a non-physical (cutoff-dependent) number, while the second term (proportional to β −4 ) is fixed by the T T T three-point couplings and the coefficient b T [58] where b T is defined by The ratio of (3.35) to the corresponding term in Einstein gravity is given by where the zero in the subscript indicates the corresponding value for Einstein gravity.
The first equality in (3.37) follows from (3.35) and to get the second equality we have used expressions for a, b, c from (2.12).We also used11 This statement means that the near-lightcone correlator for all holographic CFTs is completely fixed in terms of basic conformal data (such as a, b, c, b T ) and the function f, which we compute in the next two sections.For the other channels, similar logic implies Note that the combinations of a, b, c in the numerators are proportional to the corresponding ANECs.Hence, one obtains the vacuum result once an ANEC gets saturated, reproducing the results of [54].
Perturbative Analysis
Here we shall focus on computing the near-lightcone thermal T T correlators assuming Einstein gravity in the bulk, setting α r = α.We focus on the scalar channel where the perturbative expansion reads The computation in the other two channels is analogous, so we will simply list the corresponding results.
Leading order (vacuum correlators)
We start with the O(1/α 0 ) term, f scalar , in the large α expansion, which corresponds to the vacuum solution.As α → ∞ the reduced equation of motion becomes and admits an analytic solution in terms of the Bessel functions: with two coefficients c 1 and c 2 .Regularity in the bulk requires c 1 = 0, while c 2 remains arbitrary.This remaining coefficient corresponds to the norm of Z (0) scalar which does not affect the value of the correlator; without loss of generality we require a = 1 in (3.30), thus c 2 = 1.The near-boundary expansion is then given by where γ is Euler's constant.From (4.4) we find the ratio f A similar calculation in the shear and sound channels yields the same result: Note that in all channels f (0) corresponds to the contact term.
Subleading order
To proceed with perturbative expansion for the scalar channel, it will be useful to convert the corresponding reduced equation of motion (3.28) into the following Schrödinger form: The expansion in 1/α reads where scalar (x) was computed before.Expanding the equation (4.7) to O(1/α) gives The solution can be written in terms of the MeijerG functions: Expanding the solution near the horizon, regularity restricts the coefficient c 3 = i 10 .Setting a = 1 in the expansion (3.30) leads to c 4 = 0, which completely fixes the O(1/α) solution.This gives the following contribution to the correlator Following the same path, we also obtain the results in the shear and sound channels.
Extracting the functions f scalar , f shear and f sound , we find sound = 1 60 .(4.12)
Higher orders
The general equation satisfied by the higher-order terms is given by The homogeneous solution is given by while a particular solution can be expressed via the Green's function method described in, e.g., [89], as Although it is not easy to perform the integrals in (4.15) explicitly, one can examine the near-horizon behaviour of the particular solution which will be used to impose regularity at the horizon In the same way, we can obtain the results in the remaining two channels.After extracting the function f, we find the following results: The same method in principle allows one to work out higher-order terms.In Appendix A we also verify the above results using the position-space approach.
Radius of convergence: Let us now estimate the radius of convergence of the perturbative expansion.We focus on the scalar channel.Define We plot r n (n) in Fig. 3 in Appendix A. 12 The radius of convergence is defined as lim n→∞ r n and we find that it seems to be zero.
Non-Perturbative Behavior
In this section we analyze the stress-tensor correlators in all channels by solving the reduced equations of motion (3.28) numerically.Note that the retarded correlators in general have an imaginary part, which represents a purely non-perturbative contribution.Using a WKB approximation we analyse this contribution explicitly and show that all three channels decay exponentially at the same rate.
Numerical solution
In what follows, we again focus on space-like momenta where α (and thus x) is positive.In general, the solution in the limit x → ∞ is a regular function multiplied by a superposition of incoming and outgoing waves.The natural choice is to pick the incoming-wave condition as discussed in [84][85][86][87].With this choice one obtains the retarded correlators in the dual CFT.
Consider the scalar channel.Near the horizon the corresponding reduced equation (3.28) reduces to where Z scalar (x → ∞) denotes the solution deep in the bulk.This equation can be solved analytically in terms of the (differentiated) Airy functions.Expanding the Re( scalar ) Re( sound ) Re( shear ) The bottom (dashed, blue) line corresponds to the shear channel.At large α, all three lines approach the expected value, 3 4 − γ ≈ 0.173, where γ is Euler's constant.The additional smaller figures show the local minimum and maximum that appear in the shear and sound channels, respectively, in a small α region.solution for large x and picking the incoming wave, one finds We use this expression to numerically solve equation (3.28) starting from large values of x all the way to the boundary at x = 0.Then, using (3.32) we compute the function f scalar from this numerical solution.We compute f shear and f sound in a similar way.
We next present the numerical results of both the real and imaginary parts of the correlators, for all three channels.
Real part:
We plot the real part of the function f(α) in Fig. 1.For large α, in all channels the numerical solutions quickly converge to the value 3 4 − γ, i.e., the leading order in the perturbative expansion.We have also verified that, for every n ∈ N, there exists a value α n , such that for all α > α n the n-th order perturbative expansion approximates the numerical solution better than any expansion with n − 1 (or less) terms.
Imaginary part: We present the numerical solutions of the imaginary part of the function f(α) in Fig. 2. The imaginary part of the correlators is purely non- perturbative in the 1/α expansion.Its concrete form is sensitive to the boundary condition at the horizon.If one instead imposes the outgoing-wave condition in the bulk and computes an advanced correlator, one finds that the correlator has the same real part but the imaginary part differs by a sign (this also follows from the general properties of Green's functions).
Imaginary part of correlators from WKB
An interesting question is how to estimate the non-perturbative behavior of the thermal correlators calculated numerically above.Let us answer this question by calculating the decay rate of Im G R using the WKB analysis.To do so, we shall transform the reduced equations of motion (3.28) to the following form: Compared to the previous Schrödinger-like equation (4.7), here the V (ξ) term is independent of the expansion parameter, allowing us to perform a standard WKB analysis.Starting from (3.28), we first rescale x → √ αy to have B(x) = B(y)α −1/2 where B(y) is independent of α.We next introduce ξ(y) which satisfies the relation (5.4) The equations of motion can be written as (5.5) The parameter α plays the role of ℏ; the precise identification is α − 1 4 = ℏ.For simplicity we omit the channel index in ξ and Z.Note that the potential in (5.5) is positive in the region y ∈ (0, 2) which corresponds to a classically forbidden region.
It may be useful to list explicit expressions relating y and ξ for different channels: 13Scalar : y = ξ , ( Shear : Sound : y = 12 + ξ 1 3 . (5.8) In (5.7) we use plus for ξ > 0 and minus for ξ < 0. We omit the channel index in ξ.Transformations (5.6)-(5.8)map the conformal boundary to the points 0, −16 and −1728 in the scalar, shear, and sound channel, respectively.In all three channels the horizon corresponds to ξ = ∞.The transformed equations of motion have the form (5.3), with the identifications ℏ = α − 1 4 and , − ±1 where in V shear we use the plus signs for ξ > 0 and the minus ones for ξ < 0. In all channels the potential forms a barrier (V > 0) in the near-boundary region: Shear : ξ ∈ (−16, 0) , Sound : ξ ∈ (−1728, −512) . (5.12) The potential becomes negative for large ξ.In addition, in the sound channel we find a singularity in the classically allowed region at ξ = 0.
One may now follow the standard WKB analysis.Considering the ansatz of the form and plugging this into the transformed equations, one determines the functions W i (ξ).The leading term is In the classically forbidden region we pick the sign corresponding to the decaying exponential, while deep in the bulk we select the oscillating solution that satisfies the incoming-wave condition near the horizon ξ = ∞.Validity of WKB is restricted to the region where One cannot use the WKB ansatz close to the boundary and around the turning point -in these regions one has to solve the equations of motion and connect solutions inside and outside the barrier.
The imaginary part of the correlator, using (3.24), can be expressed as the following tunnelling probability14 ( In all three channels, the imaginary part of the correlator decays exponentially at the same rate 15Im f ∼ exp −2α It would be interesting to see if this behavior holds more generally.Note that the exponent in the retarded thermal correlator of a scalar field in the large momentum limit was computed in [35,85].Although the near-lightcone limit we consider here is different, the power of momenta in the exponent is the same, i.e. α 1/4 ∼ q, while the multiplicative constants in the exponent differ.
Discussion
In this paper we point out that the near-lightcone thermal correlators of stress tensors in holographic Einstein-Gauss-Bonnet gravity take the same form as those in Einstein gravity. 16More precisely, we observe that the thermal two-point correlators of stress tensors are rather constrained in the near-lightcone limit: they are given by three universal functions (f scalar , f shear , f sound ) whose arguments involve the combination of α ∼ q + (q − ) 3 /T 4 and the three coefficients a, b, c which determine the stress-tensor three-point functions.The correlator in a given channel takes the vacuum form when the corresponding ANEC is saturated, as already noticed in [54].
The correlators admit a perturbative expansion in powers of 1/α.This is essentially the OPE combined with the near-lightcone limit, where only the leading-twist multi-stress tensors contribute.One can read off the OPE coefficients of the two stress tensors and multi-stress tensors.The momentum space approach might be more convenient than the one which employs the near-boundary ansatz in position space and substitutes it into the equations of motion, e.g., [53,54].Note that the power series in momentum space seems to have zero radius of convergence, i.e., it's an asymptotic series. 17epending on whether we want to compute retarded or advanced correlators, we need to impose appropriate boundary conditions at the horizon (which in our variables corresponds to the behavior at large x).Perturbatively, the correlator is completely determined by the OPE (as we explain in Section 3 and Appendix A).However the boundary conditions at the horizon affect the solution non-perturbatively in α.This is because a general solution decays exponentially under the barrier, as discussed in Section 5.2.It would be interesting to understand the significance of such non-perturbative terms. 18n the near-lightcone limit, because of the universality of T T -correlators discussed above, we can simply focus on the analysis based on pure Einstein gravity.Note that for the transverse polarization of the stress tensor, the equation of motion for the metric fluctuation is the same as that for a minimally coupled scalar.Hence the two point functions must be identical.Naively one may find this surprising, given that the OPE coefficients for a scalar contain poles at integer values of the scalar's conformal dimension [6].This corresponds to the mixing with the double-trace operators and fixes the residue of the OPE coefficient of the latter, which ensures the divergence is cancelled.How does this work in the stress-tensor correlator case and how is this reflected in momentum space?The answer to this question is that the logarithmic terms in position space which are produced by the cancellation of the poles at ∆ = 4 get Fourier transformed to the rational functions of momenta (or, in our limit, α) in momentum space.Indeed, in Appendix B, we verify explicitly that the OPE coefficients of the two scalars and multi-stress tensors, together with the thermal expectation values of the latter, multiplied by the corresponding conformal blocks in momentum space reproduce the perturbative expansion of the transverse T T correlator.
It is useful to examine the large-N counting of thermal T T correlators (where N ∼ √ C T ).Consider a finite temperature connected T T -correlator on a sphere above the confinement-deconfinement phase transition.The disconnected component scales like N 4 and this behavior is entirely due to the double-stress tensor operators [T µν ] 2 , as explained in [53].Indeed, the MFT OPE coefficients λ Tµν T αβ [Tµν ] 2 ∼ 1 while ⟨[T µν ] 2 ⟩ ∼ N 4 .The subleading corrections to the OPE coefficients and to the anomalous dimensions of [T µν ] 2 contribute to the connected correlator [53].It is easy to extract the large-N behavior of the OPE coefficients with the k-stress tensors and convince oneself that they all contribute to the connected correlator with the expected N 2 scaling.
On the other hand, in the low-temperature phase ⟨[T µν ] k ⟩ scales like N 0 , while the leading large-N behavior of the T T correlator scales like N 2 .In holographic theories such correlators are simply given by the sum of the vacuum correlators over the thermal images.One can immediately see how this is reproduced by multi-stress tensors.The only contributions that survive in addition to the identity are the double-stress tensors.
In this work, we speculate that the universality observed in the holographic Einstein-Gauss-Bonnet theory remains valid in more general holographic theories.It would be interesting to study the near-lightcone T T correlators using different gravity models to see if this universality persists.On the other hand, understanding this directly from the CFT point of view would be a more ambitious but very interesting goal.In this spirit a possible step is to study the lightcone limit of heavy-heavy-lightlight correlators, with the light operators being stress tensors, from the bootstrap point of view.This would extend recent progress for scalar correlators in e.g.[13,14] and related works.In the latter, the Lorentzian inversion formula was used to get the OPE data for multi-stress tensors in the scalar case and it would be interesting to generalize this to stress tensor correlators, or spinning correlators more generally.A CFT bootstrap approach would likely shed light on the regime of universality beyond the cases explored in this paper and is therefore of great interest.
Performing this limit on the equations of motion, one gets the reduced equations of motion for the bulk-to-boundary propagators Z which can be solved by the ansatz: 20 where Z AdS is the bulk-to-boundary propagator in pure AdS, which in the scalar channel is 2r 2 πw 6 . 21Using the above ansatz and the scalar-channel reduced equation of motion (in the position space) obtained in [54], one finds that the coefficients a n,3 (for all n ≥ 3) in (A.4) are undetermined.This reflects the fact that, by performing the limit (A.1) one loses the information deep in the bulk.However, one can check that these undetermined coefficients are always suppressed in the lightcone limit.
We next compute the holographic stress-tensor correlators perturbatively in the µ expansion.In doing so, we note that in the lightcone limit (i.e., x − → 0 where x ± = t ± z = −it E ± z) the correlator is fully determined by the coefficients a nn .The first few terms of the near-lightcone correlator G scalar in position space are Using the same method, one can generalize the position-space computation to other two channels.However, due to the computational complexity in position space, in this paper we analyze the correlators in other channels in momentum space.
The correlator (A.5) depends on the combination x − (x + ) 3 , consistent with the Fourier transformed results using the variable α which we discuss next. 20We rewrite the ansatz so it looks different from the one in [54]. 21The full solution Z is connected to Z by , where Z is the boundary value of the invariant Z.For other channels and more details on the ansatz, see [53] and [54] A.2 Fourier transform to momentum space We here transform the position-space correlator (A.5) to momentum space, where the conjugate variables to (x + , x − ) are (q + , q − ) = − 1 2 (q − , q + ).Fourier transform of the zeroth-order contribution diverges and thus needs to be regularized.We use the dimensional regularization, where instead of (x − x + ) −3 we consider (x − x + ) −3−ϵ and then take ϵ → 0. The result is where γ is the Euler's constant.Terms O(ϵ) can be neglected, while the pole can be eliminated by counterterms.The regulator-independent (physical) log(−q + q − ) term is We have verified that this result exactly matches the leading term in (4.5) computed in momentum space.
The O(µ 1 ) and O(µ 2 ) terms can be directly Fourier transformed: Applying derivatives with respect to x + and x − gives
B Momentum-Space Thermal Conformal Blocks
In this appendix, we use the momentum-space conformal blocks to examine the scalar channel in Einstein gravity, whose EoM and action are equivalent to the ones for a massless scalar field.Note, however, that this equivalence do not persist when considering higher-derivative terms, such as Einstein-Gauss-Bonnet gravity. 22 The thermal conformal blocks in momentum space were computed in [91].Expanded in thermal conformal blocks, the scalar correlator can be written as where ω n = 2πn β is the Matsubara frequency and a O ∆,J are thermal coefficients.(The explicit form of G ∆ O ∆,J (ω n , q) is given by Eq. (2.11) in [91].)Minimal-twist operators are the ones that dominate in the near-lightcone limit that we are interested in. 22More precisely, while the OPE coefficients for minimal-twist stress tensors in the minimally coupled scalar case do not depend on higher-derivative terms such as the Gauss-Bonnet coupling [6] except through the temperature, the scalar wave equation in Einstein-Gauss-Bonnet gravity is different from the scalar-channel EoM of metric perturbations.We have used µ = (π/β) 4 and normalized the correlator to agree with the stresstensor correlator in the scalar channel, ⟨T xy T xy ⟩.The stress-tensor coefficient is fixed by Ward identities and the stress-tensor one-point function, while the [T 2 ] 4 and [T 3 ] 6 were computed in [6,13].Here we have further obtained the coefficient for the [T 4 ] 8 operator with dimension 16 and spin 8 based on the method of [13].
Let us first discuss operators [T k ] J=2k with k = 0, 1, 2, 3.For the identity contribution, the block has a simple pole at ∆ O = 4 with a residue that is purely a contact term.Removing the contact term gives After Wick-rotating ω n → −iω and taking the lightcone limit we reproduce (4.5).
Likewise, for the stress-tensor exchange we find which is in agreement with (4.12), where we remind the reader that G xy,xy = 1 2 G scalar .Moreover, we have verified that the [T 2 ] 4 and [T 3 ] 6 contributions reproduce the coefficients listed in (A.12).
C Equations of Motion for Einstein-Gauss-Bonnet Gravity
In Section 3, we showed that the equations of motion of metric fluctuations in Einstein-Gauss-Bonnet gravity in the limit (3.8) reduce to the equations of motion in Einstein gravity.Here we list the coefficients A, B in the Einstein-Gauss-Bonnet equations of motion using the notation adopted in this paper. 23calar Channel: where U (u) = √ κ 2 − κ 2 u 2 + u 2 .
− 5 7 a + 2 b − c 14 a − 2 b − 5
first equality in(3.37) was derived from the Ward identity.Eq. (3.37) is consistent with the relation between α and α r in (3.10), as it should.It is tempting to propose that in any holographic theory the function that enters the correlator in the scalar channel is f scalar (α r ) = f scalar
Figure 1 :
Figure1: The real part of f(α).The upper (solid, red) line corresponds to the scalar channel.The middle (dotted, green) line corresponds to the sound channel.The bottom (dashed, blue) line corresponds to the shear channel.At large α, all three lines approach the expected value,3 4 − γ ≈ 0.173, where γ is Euler's constant.The additional smaller figures show the local minimum and maximum that appear in the shear and sound channels, respectively, in a small α region.
Figure 2 :
Figure 2: The imaginary part of f(α).The upper (dashed, blue) line corresponds to the shear channel.The middle (dotted, green) line corresponds to the sound channel.The bottom (solid, red) line corresponds to the scalar channel. )
Figure 3 :
Figure 3: Estimation of the radius of convergence in the scalar channel | 9,219 | 2023-06-01T00:00:00.000 | [
"Physics"
] |
Periodic measures are dense in invariant measures for residually finite amenable group actions with specification
We prove that for certain actions of a discrete countable residually finite amenable group acting on a compact metric space with specification property, periodic measures are dense in the set of invariant measures.
1.
Introduction. Let G be a discrete countable residually finite amenable group acting on a compact metric space X. Denote by M(X, G) the set of G-invariant measures and M e (X, G) the set of ergodic G-invariant measures. For a point x ∈ X, we call x a periodic point if |orb(x )| < ∞. Define the periodic measure µ x as a probability measure with mass |orb(x)| −1 at each point of orb(x) and we denote by M P (X) the set of all such periodic measures.
Specification property, introduced by Bowen [3] in 70's for Z−actions, is a basic property used in smooth and topological dynamical systems to obtain maximal entropy measure, exponential growth of periodic orbits, density of periodic or ergodic measures, multifractal analysis etc. Specification property seems a very strong property, but there are many examples of dynamical systems satisfying this property, including subshifts of finite type, sofic shifts, the restriction of an Axiom A diffeomorphism to its non-wondering set, expanding differential maps and geodesic flows on manifold with negative curvature. Readers may refer [9,Chapter 21] for more details of specification. For non-uniformly hyperbolic dynamical systems, several versions of specification-like were introduced, including [2,11,13,16]. Pfister and Sullivan [18] also introduced a weak specification property called g-almost product property, which was renamed as the almost specification by Tompson [22]. In [19], Ruelle introduced the notion of weak specification for Z d actions and called the definition in [3] as strong specification. Recently, Chung and Li [6] generalized specification to general countable group actions. We will give the details in next section.
For smooth dynamical systems, the problem of density of periodic measures is well studied. For instance, Sigmund [21] proved that for uniformly hyperbolic diffeomorphisms with specification property, each invariant measure can be approximated by periodic measures. Hirayama [11] proved that each invariant measure supported by the closure of a Pesin set of a topological mixing measure is approximated by periodic measures. Liang, Liu and Sun [13] improved Hirayama's result by weakening the assumption of mixing measure to that of hyperbolic ergodic measure.
Our main results are as follows.
Theorem 1.1. Let G be a discrete countable residually finite amenable group acting on a compact metric space X with specification property. Then M P (X, G) is dense in M(X, G) in the weak * topology. Moreover M e (X, G) is residual in M(X, G).
Theorem 1.2. Let Γ be a countable discrete group and f an element of ZΓ invertible in l 1 (Γ, R). Then the action of Γ on X f which is the Pontryagin dual of ZΓ/ZΓf has specification property.
2.
Preliminary. In this section, we will recall some notions and basic facts about amenable groups and residually finite groups. Also we will give the definition of specification.
where F(G) is the collection of all finite subsets of G. Such sequences are called Følner sequences. The quasi-tiling-theory is a useful tool for actions of amenable groups which is set up by Ornstein and Weiss in [17].
We say that The subsets C 1 , C 2 , · · · , C k are called the tiling centres. The next proposition is [7, Lemma 9.4.14].
For our proof, we also need the Mean Ergodic Theorem for amenable group actions.
Lemma 2.1 (Mean Ergodic Theorem). Let G be an amenable group acting on a probability measure space (X, B, µ) by measure preserving transformation, and let {F n } n∈N be a Følner sequence. For any f ∈ L 2 (µ), set A n (f )(x) = 1 |Fn| g∈Fn f (gx).
Residually finite group.
A group is residually finite if the intersection of all its normal subgroups of finite index is trivial. Examples of groups that are residually finite are finite groups, free groups, finitely generated nilpotent groups, polycyclic-by-finite groups, finitely generated linear groups and fundamental groups of 3-manifolds. For more information about residually finite groups, readers can refer [5,Chapter 2].
Let (G n , n ≥ 1) be a sequence of finite index normal subgroups in G. We say Clearly such sequence exists if G is countable and residually finite.
If G ⊂ G is a subgroup with finite index, we say that Q ⊂ G is a fundamental domain of the right coset space G \G, i.e. a finite subset such that {G s | s ∈ Q} is a partition of G.
The following proposition is [8, Corollary 5.6] and we will use it to control the periodic orbits we get. The original proof is a version of the Ornstein-Weiss quasitiling lemma. Also there is an algebraic proof from [1, Theorem 6].
Proposition 2. Let G be a countable discrete residually finite amenable group and let (G n , n ≥ 1) be a sequence of finite index normal subgroup with lim n→∞ G n = {e G }. Then there exists a Følner sequence (Q n , n ≥ 1) such that Q n is a fundamental domain of G/G n for every n ≥ 1.
Specification.
In this subsection, we will recall specification property of general group actions, which is from [6, Section 6].
Let α be a continuous G-action on a compact metric space X with metric ρ. The action has specification property if there exist, for every ε > 0, a nonempty finite subset F = F (ε) of G with the following property : for any finite collection of finite subsets F 1 , F 2 , · · · , F m of G satisfying and for any subgroup G of G with and for any collection of points x 1 , x 2 , . . . , x m ∈ X, there is a point y ∈ X satisfying and sy = y for all s ∈ G .
In [6], this property is called strong specification. We will call it specification since there is no misunderstanding.
3. Proof of Theorem 1.1. Let (G n , n ≥ 1) be a sequence of finite index normal subgroups with lim n→∞ G n = {e G } and (Q n , n ≥ 1) be a Følner sequence such that Q n is a fundamental domain of G/G n as described in Proposition 2.
Let ν ∈ M(X, G), ε > 0 and W a finite subset of C(X), where C(X) is the set of all the continuous real valued functions on X. Uniformly continuity of the elements of W implies that there is δ ∈ (0, ε) such that |ξ(x) − ξ(y)| < ε 8 for all x, y ∈ X with d(x, y) < δ and all ξ ∈ W. By Mean Ergodic theorem, A n (ξ) converges to ξ * in L 2 for all ξ ∈ W, so we can choose a subsequence {A n k } k∈N such that A n k (ξ) converges to ξ * ν-a.e. for all ξ ∈ W. For convenience, we will write the subsequence (Q n k , k ≥ 1)as (Q n , n ≥ 1). Set We know ν(Q(G)) = 1. Denote by ξ * (x) the limit for each x ∈ Q(G). Next we will construct a finite partition of X as following: Since W is finite, Next we will construct F 1 , F 2 , · · · , F t and G m satisfying (1) and (2) in specification property. The idea is from [25, Theorem 1.3] but with very minor changes. Suppose η = {A 1 , A 2 , · · · , A l }. Let a i = ν(A i ) for i = 1, 2, · · · , l and a = min {a i : i = 1, 2, · · · , l}. By Egorov's Theorem, there exist a Borel subset 4|F | 2 l for all g ∈ F, n ≥ N 2 . By Proposition 1, there exist n k > n k−1 > · · · > n 1 ≥ N 2 and N 3 ∈ N s.t. Q m can be γ 4|F | 2 l -quasi-tiled by Q n1 , Q n2 , · · · , Q n k when m > N 3 . Also N 3 will be large enough such that the family of all the translations By the definition, we know Claim. {S nj (c j )c j |S nj (c j )c j ∈ F} and G m satisfy the conditions in specification property.
Using specification property, there is some y ∈ X such that ρ(gx j (c j ), gy) < γ < δ, for all g ∈ S nj (c j )c j . Denote by µ y the periodic measure supported on orb(y).
Claim. (K, ρ) is a convex compact metric set and denote by ext(K) the set of extreme points of K. Then ext(K) is a G δ subset of K. Let K n = {x ∈ K : there exist y, z ∈ K such that x = 1 2 (y + z) and d(y, z) ≥ 1 n }.
Obviously, K n is closed.
As a result, ext(K) = K \ K 0 is a G δ subset of K. We know that M e (X, G) is the set of extreme points of M(X, G). So by the claim, M e (X, G) is a G δ set. Thus we finish the proof.
4.
Proof of Theorem 1.2. For a countable group Γ and an element f = f s s in the integral group ring ZΓ, where ZΓ is the set of finitely supported Z-valued functions on Γ, consider the quotient ZΓ/ZΓf of ZΓ by the left ideal ZΓf generated by f. It is a discrete abelian group with a left Γ-action by multiplication. The Pontryagin dual of ZΓ/ZΓf which is denoted by X f is a compact metrizable abelian group with a left action of Γ by continuous group antomorphisms and denote by ρ some compliable metric on X f . Denote X = (R/Z) Γ . We denote by ρ 1 the canonical metric on R/Z defined by The left and right actions l and r on X are defined by (l s x) t = x s −1 t and (r s x) t = x ts for every s, t ∈ Γ and x ∈ X. We can extend those actions of Γ to commuting actions l and r of ZΓ on X by setting
It is easy to check
Denote by the restriction to X f of the Γ-action l on X.
Then v = r f (w) is what we need. Let e Γ and e X f be the unit elements of Γ and X f respectively. Set W = {e Γ } ∪ support(f * ) = ({e Γ }∪support(f )) −1 . Let ε > 0. Then we can find a nonempty finite subset W 1 of Γ and ε 1 ∈ (0, f −1 1 ) such that if x, y ∈ X f satisfy max s∈W1 |x s − y s | ≤ 2ε 1 , then ρ(x, y) ≤ ε. Take a finite subset W 2 of Γ containing e Γ satisfying By Lemma 4.1, for any finite collection of finite subsets F 1 , F 2 , · · · , F m of Γ satisfyingF F i ∩ F j = ∅, 1 ≤ i = j ≤ m and any collection of points x 1 , x 2 , . . . , x m , otherwise.
The following lemma is a version of [4, Lemma 1]. The same argument also appeared in the proof of [6, Lemma 6.2].
A point x ∈ X f is said to be homoclinic if sx → e X f as Γ s → ∞. The set of all homoclinic points, denoted by ∆(X f ), is a Γ-invariant normal subgroup of X f . Claim 2. Let d be the expansive constant of (X f , α f ) i.e. if x, y ∈ X f with ρ(sx, sy) ≤ d for all s ∈ Γ then x = y. For any ε ∈ (0, d),F = W 1 W 2 (W 1 W 2 ) −1 , any finite subset F 1 of Γ and x ∈ X f , there exists y ∈ ∆(X f ) s.t. max s∈F1 ρ(sx, sy) ≤ ε and sup s∈Γ\F F1 ρ(se X f , sy) ≤ ε.
To prove the above claim, we may assumeF =F −1 otherwise we can replacẽ F byF ∪F −1 . Let F 1 be a finite subset of Γ and x ∈ X f . For each finite set F 2 ⊂ Γ \F F 1 , from Claim 1, we can find y F2 ∈ X f such that ρ(sx, sy F2 ) ≤ ε for all s ∈ F 1 and ρ(se X f , sy F2 ) ≤ ε for all s ∈ F 2 . Note that the collection of the finite subsets of Γ \F F 1 has a partial order. Take a limit point y ∈ X f of {y F2 } F2 . Then ρ(sx, sy) ≤ ε for all s ∈ F 1 and ρ(se X f , sy) ≤ ε for all Γ \ F F 1 . By Lemma 4.2, we know y ∈ ∆(X f ). | 3,213 | 2015-09-30T00:00:00.000 | [
"Mathematics"
] |
Cable energy function of cortical axons
Accurate estimation of action potential (AP)-related metabolic cost is essential for understanding energetic constraints on brain connections and signaling processes. Most previous energy estimates of the AP were obtained using the Na+-counting method, which seriously limits accurate assessment of metabolic cost of ionic currents that underlie AP conduction along the axon. Here, we first derive a full cable energy function for cortical axons based on classic Hodgkin-Huxley (HH) neuronal equations and then apply the cable energy function to precisely estimate the energy consumption of AP conduction along axons with different geometric shapes. Our analytical approach predicts an inhomogeneous distribution of metabolic cost along an axon with either uniformly or nonuniformly distributed ion channels. The results show that the Na+-counting method severely underestimates energy cost in the cable model by 20–70%. AP propagation along axons that differ in length may require over 15% more energy per unit of axon area than that required by a point model. However, actual energy cost can vary greatly depending on axonal branching complexity, ion channel density distributions, and AP conduction states. We also infer that the metabolic rate (i.e. energy consumption rate) of cortical axonal branches as a function of spatial volume exhibits a 3/4 power law relationship.
this equation describes the AP-related energy consumption rate per unit membrane area (cm −2 s −1 ) at any axonal distance and any time. The individual terms on the right-hand side of the equation represent the contributions of the sodium, potassium, leak, and axial currents, respectively. Calculations based on this function distinguish between the contributions of each item toward total energy consumption.
Figure 1. Effect of Axonal Length on AP-related Energy Consumption and Efficiency. (A) Cable model of a
Hodgkin-Huxley-type cortical axon, where axial current, i a , flows through axial resistance, r a , within a uniform cylinder. The membrane currents consist of i C , i K , i Na , and i L , through c m , g K , g Na and g L , respectively. V K , V Na and V L , Nernst potentials. APs (60 Hz) were initiated by a stimulating current (19.1 μ A/cm 2 , 1 s) at one end of the uniform axon (X = 0 μ m) at 37 °C.(B) Traces of AP (black), -i Na (red, inverted Na + current), i K (blue) and i a (green) in the compartments of a uniform axon as in (A) (1.5 μ m in diameter, 1000 μ m in axonal length, 50 μ m in compartmental length (Δ x), unmyelinated) at X = 0, 200 and 500 μ m. (C) Traces of the total energy consuming power (black, P tot = P Na + P K + P a + P L ) and its main components, i.e., the energy consuming power of the sodium conductance (red, P Na = i Na × (V − V Na )), potassium conductance (blue, P K = i K × (V − V K )), and axial conductance (green, P a = π ∂ ∂ i a a V x 1 2 ), in the compartments at X = 0, 200, and 500 μ m. P L is the leak conductance (negligible). See also Figure S2. (D) Distribution of energy cost per unit membrane area per AP along axons of different lengths (same diameter, APs firing at 60 Hz, at 37 °C). Note that the longer the axon, the higher the energy cost at the same distance. Inset, increase of the energy cost at the AP initiation site with axonal length. (E) Distribution of the excess Na + entry ratio (γ ), i.e., the ratio of the total Na + flux during an AP to the theoretically minimal Na + load needed to generate the upstroke of an AP, along axons of different length. Note that the longer the axon, the higher the γ value at the same distance. Inset, increase of the value of γ at the AP initiation site with axonal length. (Theoretical minimum Na + load: Na + flux integrated from dv/dt = 20 to dv/dt = 0.) (F) Comparison of the energy calculation based on Na + counting (E Na c ) to the one based on the energy function (E). See also Figure S2.
Scientific RepoRts | 6:29686 | DOI: 10.1038/srep29686 Effect of Axonal Length on AP-related Energy Consumption and Efficiency. Next, we applied the above energy function to estimate the distribution of energy cost along a single-cable axon (Fig. 1A). In the HH-type cortical axon model that is based on experimental data 18,30,31 , AP propagation (60 Hz, at 37 °C) was simulated by injecting a steady current (19.1 μ A/cm 2 , 1 s) at one end of the axon ( Fig. 1A; simulations in this paper were run in MATLAB). From recordings of APs, transmembrane ion currents, and axial currents at different axonal distances (Fig. 1B), we calculated the energy consumption associated with each ionic current (P ion = i ion × (V − V ion ), Fig. 1C, Figure S2) as well as the total energy cost ∫ = ∑ (E P dt) ion as a function of axon length (see Fig. 1D). Although the Na + current (i Na ) amplitude during an AP increased with distance, the width of i Na decreased (Fig. 1B, red), resulting in a reduction in total Na + influx and the energy consumed by the sodium conductance (integration of the metabolic consuming power (P Na = i Na × (V − V Na )) of the sodium conductance over the duration of an AP, Fig. 1C, red). In contrast, the K + current (i K ) almost leveled off along the axon (Fig. 1B, blue), making the metabolic consuming power of the potassium conductance (P K = i K × (V − V K )) stable (Fig. 1C, blue). Additionally, the amplitude of the axial current (i a ) rose sharply along the axon, resulting in a considerable increase in the amount of energy consumed by the axial conductance (integration of = π ).
To study the effect of axon length on energy consumption, we carried out simulations of unbranched cable axons ranging from 0 (point neuron model) to 1,500 μ m in length (with the same diameter, AP firing at 60 Hz, at 37 °C). As shown in Fig. 1D, the energy cost along an axon of any length is generally highest at the AP initiation point, decreases sharply in the initial 200 μ m, increases slowly and finally saturates at a value below that of the initial section. More crucially, the longer the axon, the higher the energy cost per unit membrane area per AP at the same distance (Fig. 1D). Even more importantly, in the cable model, the cost at the AP initiation site (E int ) can be up to 15% higher than the cost in the point model (Fig. 1D, inset), indicating that more energy is needed to promote AP conduction across longer axons than across shorter ones. By analyzing the components that contribute to the energy consumption, we found that the higher cost in the longer axons was due to the higher cost of the axial and sodium conductances (see Fig. 1B,C), which is reasonable because AP propagation in longer axons demands a larger Na + influx and a larger axial current.
We also examined how axonal length affects energy efficiency, as measured by the excess Na + entry ratio (γ ). A γ value greater than 1 indicates excess Na + influx, and larger γ values, in turn, reflect a less efficient use of energy. As illustrated in Fig. 1E, the AP initiation location of the axon had the lowest efficiency, with values that ranged from 1.5 to 2; the longer the axon, the lower the efficiency at the same distance. In particular, the decrease in efficiency at the initiation site was up to 33% greater than the value of 1.5 obtained using the point model (note a value of 1.3 is obtained using the Na + counting method), see Fig. 1E inset, indicating that long axons will pay the price of reduced efficiency.
To compare the energy-function approach with the traditional Na + -counting method, we applied both approaches to the cable model and the point model. Figure 1F shows that the cost estimated using Na + counting was significantly lower in both a cable model with a length of 450 μ m (27% lower) and in a point model of the same length (42% lower), suggesting that the Na + -counting method severely underestimates metabolic consumption. The discrepancy of the two results is calculated as− , where E Na c and E are the results of the sodium counting method and our approach, respectively. Our results show that factors including spiking rate, length of cable, temperature and gmax have limited effects on the discrepancy (< 10%), indicating that the discrepancy is quite stable. See Discussion for detailed results.
Effect of AP Conduction State on AP-related Energy Consumption. Neurons exhibit various axonal
branching patterns. Different AP conduction states (i.e., successful, blocked, and reflected conduction) have been observed at branch points in different types of neurons 32 . Factors that affect AP propagation through branch points include the geometry of the branches connected at the branch point [33][34][35] , the AP spike rate 32,36,37 and the temperature 38 . To compare the energy costs of different AP conduction states, we examined a model consisting of a parent axon connected with a pair of identical branches ( Fig. 2A). The parent axon is stimulated at the proximal end so that it generates APs at a controlled spike rate (5 or 60 Hz) at 37 °C. We determined the smallest child diameter at which AP propagation failure occurred for at least one AP in the train. Consider the geometrical ratio (GR) 39,40 , which is defined as where d D and d M are the diameters of the child branches and the parent axon, respectively. Figure 2B illustrates the influence of the GR on APs (top) and energy consuming power (bottom) at the initiation site (a) and at 25 μ m after the branch point (b) for a firing rate of 5 Hz. Successful propagation into child branches was observed when the GR was ≤ 9; however, when the GR reached 10, conduction failed at the branch point, indicating that the critical GR (GR c ), or the largest GR for successful conduction, is between 9 and 10. Similar effects of GR on propagation were observed with an AP spike rate of 60 Hz but with a lower GR of 7 (Fig. 2C). Figure 2D,E show the energy cost along the axon for two spike rates. For AP conduction through the branch point, less energy is used when the GR is small, while a larger GR generally requires more energy during the transition. For GR values greater than the critical value GR c , AP propagation failed in the child branch, and very little energy is consumed by the child branches. Actually, the rapid change in energy cost around the branch point at GR ≠ 1 (blue, red, and green) was mainly caused by AP reflection around the branch point. This phenomenon was not observed at GR = 1 because at that time, the APs were propagating in a branching system equivalent to a homogeneous parent axon in accordance with Rall's 3/2 power law 40 . Our simulations predict that the GR c decreases nonlinearly with increasing AP firing rate (Fig. 2F).
Scientific RepoRts | 6:29686 | DOI: 10.1038/srep29686 Distribution of AP-related Energy Consumption in an Axonal Branching Tree. We next applied the energy-function method to a more complexly branched axon whose structure was that of a binary tree with GR = 1 (Fig. 3A). We found that the energy cost per unit length per AP decreased monotonically with distance from the injection site (Fig. 3A) and BL (Fig. 3B). Nonetheless, the energy cost per unit membrane area decreased sharply in the initial 400 μ m before rising and then leveling off (Fig. 3C), resembling the distribution along a single-cable axon (Fig. 1D) and indicating that this distribution pattern of energy cost per unit membrane area against distance might apply to all types of axonal morphology.
Then, to examine how the energy consumption of an entire axonal branching system varies with geometry (i.e., axonal volume, membrane surface and branching complexity), we changed the geometry by increasing the total BL of the system (BL sys ) from one to four while keeping the diameter of the highest level of child branch fixed at 0.2 μ m (geometry versus BL sys , see Figure S1A-S1C). Interestingly, the log-log plot (Fig. 3D) shows that the dependence of the system's metabolic rate on its volume followed an allometric scaling law with a scaling exponent 0.75 for GR = 1 here (the exponent changes from 0.3 to 2 when GR changes from 0.5 to 2). Assuming that the density of the axon (1.05 g/mL) is uniform, we found that the allometric scaling of the metabolic rate linked to AP versus the mass of the axonal tree ( Figure S1E) is the same as the empirically observed scaling of the metabolic rate versus the mass of the entire organism 23,24 . The excellent match between our modeling and the experimental data validates the theory that uses a fractal structure model to explain the quarter-law scaling that is common in biological systems 22 . Additionally, this allometric scaling arose only when we changed the axonal tree's volume by varying the total BL of the tree and keeping the diameter of the highest level of child branch unchanged (the results generated when the volume was changed in other ways are described in the DISCUSSION), suggesting that there may be a lower limit for the diameter of real axonal branches in any type of neuron, consistent with previous anatomical findings and computational predictions 41 ; a comprehensive survey of anatomical data showed that the lower limit for AP-conducting axons is 0.08-0.2 μ m in diameter, and stochastic simulations demonstrated that due to channel noise, the limiting diameter for mammalian pyramidal cell axons is 0.15 μ m.
Moreover, the scaling exponent for the relationship between energy consumption and membrane surface area is close to 1 rather than 3/4 ( Fig. 3E). This finding is in agreement with results from computational models of single-compartment spiking neurons 7 . This relationship can be attributed to the uniform spread of ion conductances over the membrane surface of the system. Additionally, our results showed that during AP propagation, the metabolic cost of the entire axonal branching system increased exponentially with BL sys ( Figure S1D).
Effects of Ion Channel Density and Distribution on AP-related Energy Consumption. This
section investigates how AP-related energy consumption is influenced by the density and distribution of ion channels. First, to study the impact of ion channel density, we changed the Na + and K + channel density of the and unreliable (right), respectively. In (B), AP spike rate of 5 Hz; GR ≤ 9, reliable state; GR ≥ 10, unreliable state. In (C), AP spike rate of 60 Hz; GR ≤ 6, reliable state; GR ≥ 7, unreliable state; red asterisk indicates propagation failure. (D,E) Variation in energy cost per unit membrane area per AP along the axon (branch point is at X = 0) at four different GR values when the AP spike rate is 5 Hz (D) and 60 Hz (E). GR c represents the critical GR, i.e., the largest GR for successful conduction. Note that in the child branch (X > 0), the energy cost per unit membrane area per AP was much lower in unreliable propagation states. (F) GR c decreases exponentially with AP frequency.
Scientific RepoRts | 6:29686 | DOI: 10.1038/srep29686 single-cable axon (Fig. 1A) by manipulating the maximal Na + and K + channel conductance (g Na max from 50 to 650 mS/cm 2 , g k max from 3 to 100 mS/cm 2 , uniformly distributed over the axon) and then stimulated APs (60 Hz, 37 °C) that propagate along the axon. As illustrated in Fig. 4A-C, at the initiation site of the axon, when g k max was fixed, larger g Na max led to increased Na + entry during the rising phase of the AP and increased K + exit during the falling phase of the AP, increasing energy costs. A similar situation was observed when g Na max was fixed and g k max was changed (Fig. 4A,D,E). Because the combination of smaller g Na max and smaller g k max lowered energy costs and increased energy efficiency simultaneously (Fig. 4F,G), we propose that within the range of ion-channel densities sufficient for AP generation and propagation, there is an optimal combination of ion channel densities that will result in both minimal energy consumption and maximal efficiency. Actually, the effect of ion channel density found in the cable model is consistent with previous results obtained through the Na + -counting method for point neuronal models, including biophysical modeling and dynamic clamping of neocortical fast-spiking interneurons 8 , as well as single-compartment models of the squid giant axon and rat interneurons 7,42 . Next, to investigate the influence of ion channel distribution, we carried out simulations in the unbranched axon with a nonuniform Na + channel expression pattern (Fig. 5A, top) and then compared the results with those of the uniform axon. In the nonuniform axon, the initial 50 μ m, where g Na max is three times larger than the value for the rest of the axon, showed a dramatic decrease in Na + entry (Fig. 5B, red, note the amplitude) and K + exit during an AP (blue). This finding accounts for the sharp decrease in two of the main components of energy expenditure, E Na (62% drop) and E K (33% drop) (see Fig. 5A, bottom; Fig. 5C), and consequently, the significant decrease in total energy expenditure (36% drop, see Fig. 5C,D, black). Noticeably, both the energy cost (Fig. 5D, black) and the excess Na + entry ratio (Fig. 5F, black) reached minimum values right at the end of the high Na + density area. Importantly, comparing the energy costs along the nonuniform axon and the uniform one, based on either the energy-function (Fig. 5D) or Na + -counting (Fig. 5E) method reveals that the nonuniform Na + channel pattern resulted in an energy savings of approximately 20% for the total axon. Similarly, comparing the excess Na + entry ratios (Fig. 5F) shows that the nonuniform Na + channel pattern enhanced energy efficiency by approximately 10%, , (E), g K max from 3 to 100 mS/cm 2 , g Na max kept the same. (F,G) Color map of the energy cost per unit membrane area per AP at the initiation site (F) and the excess Na+ entry ratio at the initiation site (G) as a function of g Na max and g K max . White: low cost or low ratio. Note that the minimum energy cost (F) and the maximum energy efficiency (G) were achieved when both the g Na max and g K max were smallest.
with a saturated Na + entry-ratio value of approximately 1.5 compared with the uniform pattern (in gray line). The low energy usage and high energy efficiency of the nonuniform pattern concluded here by the simulations could be a novel prediction for further experimental investigation. It actually suggests a nontrivial rule that actual neuronal axons may have nonuniform distributed ionic channel distributions in an energy efficient strategy.
Discussion
Based on the energy function used to estimate the metabolic cost of APs in HH point neuron models [19][20][21] , we established the energy-function method for the cable model of axons. Through this approach, we provided the first demonstration that AP propagation requires 15% more energy at the initiation site (Fig. 1D, inset) than the amount predicted by the point model. In addition, the allometric scaling relationship between the total energetic rate of the axonal branching tree (P tot ) versus the axonal volume (V), ∝ . P V tot 0 75 , suggested that the metabolic rate of a single neuron is very likely to scale to the 3/4 power of its mass, just as the metabolic rate of an entire organism scales with its body mass in animals 23 , plants and microbes 24 . Additionally, the allometric scaling relationship suggests an invariant minimum diameter for axonal branches in any type of neuron; this suggestion is supported by anatomical findings and stochastic simulations that take channel noise into account 41 . Moreover, by enabling the energy costs to be precisely distributed over the complicated branching pattern (Fig. 3A), this method can be applied to branched dendrites as well.
Advantages of the Cable Energy Function. The cable energy function derived here provides an accurate method for calculating the energy cost of electric signals conducted in any type of neuron with an axonal or dendritic arbor. This theoretical framework allows us to estimate the energy used by cortical axons with many types of ion channels more accurately than is possible with the Na + -counting method, especially in the presence of some subtypes of Ca 2+ and K + channels. By taking into account all of the energy-consuming components in the cable equation, including the axial and transmembrane leak currents, the energy function provides a precise evaluation at any axonal distance and any time (see METHODS). The distribution of energy usage among all of the energy-consuming components in the cortical neuron shows that potassium and sodium conductances consumed the most energy; together, these conductances accounted for 84.4% of the total energy usage in the cable model ( Figure S2) and 96% in the point model. This finding is comparable to the result of the point model of the squid giant neuron 19 .
In contrast, the Na + -counting method tends to underestimate energy consumption (Fig. 1F). In fact, because it is based on the number of ATP molecules required by Na + /K + -ATPase pumps to restore the Na + and K + concentration gradients after an AP, this method merely calculates the energy cost related to the transmembrane and nonuniform (black) Na + channel distribution along the cortical axon. Bottom, variation of the main components of the total energy cost per unit membrane area per AP, i.e., the energy costs of the K + (blue), Na + (red) and axial (green) conductances, along the nonuniform axon when APs propagated at 60 Hz at 37 °C. (B) Traces of AP (black), -i Na (red, inverted Na + current), i K (blue) and i a (green) at X = 0, 50 and 500 μ m. (C) Traces of the total energy consuming power (black) and its main components, i.e., the energy consuming power of the sodium (red), potassium (blue), and axial (green) conductances at X = 0, 50 and 500 μ m. (D,E) Comparison of the energy cost per unit membrane area per AP along axons with nonuniform (black) and uniform (grey) Na + channel expression patterns. The energy cost was obtained by the energy-function (D) and Na + -counting (E) methods. (F) Comparison of the excess Na + entry ratio along axons with nonuniform (black) and uniform (grey) Na + channel expression patterns. Note that in (D-F), the values of the nonuniform axon were lower than those of the uniform axon, except in the initial 50 μ m.
Scientific RepoRts | 6:29686 | DOI: 10.1038/srep29686 flowing of Na + and K + ions. However, even when the energy usage contributed by sodium and potassium channels only was considered for the energy-function method, the energy usage estimate derived from the Na + -counting method was 20% lower for the cortical neuron cable model and 32% lower for the point model. More importantly, we have also examined how the discrepancy between the two methods is influenced by the parameters, i.e. spiking rate, length of cable, temperature and g max . The discrepancy is calculated as , where E Na c and E are the results of the sodium counting method and our approach, respectively. When one factor was examined, other factors were kept the same. Our results show that these factors have limited effects on the discrepancy (< 10%), indicating that the discrepancy is quite stable. The detailed results are as follows. First, when the spiking rate increased from 5 Hz to 125 Hz, the discrepancy of the point model and the cable model increased 7% (39~46%) and 1% (37~38%), respectively. Second, when the length of the cable was ≤ 3000 um, the discrepancy was within the range of 27~38%. Third, when the temperature was in the range of 6 °C~42 °C, the discrepancy was within the range of 32~37%. Fourth, larger g K max and g Na max leads to larger discrepancy, and within the range of ≤ g 100 mS cm / K max 2 and ≤ g 650 mS cm / Na max 2 , the discrepancy is within the range of 30~40%.
For a deeper understanding of how the two methods calculate energy differently, see Methods. In addition, the energy-function method predicts some interesting phenomena. One example is the nonuniform Na + channel expression pattern, which has been observed experimentally and was demonstrated to be vital for axonal signaling functions in previous modeling studies 9,43 . The natural beauty of the computational modeling is that it could provide novel prediction to instruct new experiment investigation and confirmation. The model conclusion of high energy efficiency with nonuniform ionic channel distribution could be a nontrivial point for new experimental examination. Other predictions, such as the prediction that longer axonal length leads to higher energy consumption (Fig. 1D) and the prediction that there is an optimal combination of ion channel densities that will minimize metabolic cost (Fig. 4F,G), need to be experimentally validated. Furthermore, this analytical approach can be used to make predictions regarding a diverse range of neurons because it can be applied to any HH-type neuron model, such as one where Ca + channels play an important role in AP generation, simply by deriving the energy function for that model.
Allometric Scaling of Energy Versus Volume. Biological systems have evolved branching networks for
information flow and energy transport. Experimental 23,[44][45][46] and theoretical studies 22,47,48 have shown that the metabolic rate of a given biological system generally scales with the 3/4 power of system volume or mass, and the potential mechanism predicted a space-filling, fractal-like branching pattern with a minimized energy distribution 22 . In this paper, we also examined the relationship between the metabolic rate and volume of cortical axons. To do this, the diameter of the final branch level was set to an invariant value of 0.2 μ m, as reported by experiments. This setting ensured a realistic value for the diameters of axonal branches in pyramidal cells 41 ; these values varied from 0.2 to 1.27 μ m (Fig. 3A, inset, note that GR = 1 for the diameter of branches at adjacent BLs). The model analysis predicted that allometric scaling with a scaling exponent of 0.75 arose only when the tree's volume was changed by varying the branch level while keeping the diameter of the final branch level constant. When we kept the diameter of the lowest level of parent branch constant in trees of different branch level ( Figure S1F) or when we changed the volume of the single uniform axon by changing the length of each branch ( Figure S1G), the scaling relation could not be obtained. Interestingly, our setting of the model is consistent with the main assumption of the general theory used to explain the widely used quarter-power scaling, i.e., keeping the final branch of the fractal network a size-invariant unit 22 .
The power law relation between the energy consumption of the whole axonal architecture and the level of branching ( Figure S1D) was also observed in computational models of the bifurcating axonal arbor of dopamine neurons of the substantia nigra pars compacta 49 . The high energy demand of the massive axonal arbors of these neurons, which was confirmed by the computational work, was proposed as a factor in the selective vulnerability of these dopamine neurons to Parkinson's disease 49 . Implications of the Analytical Approach. With extended applications from axons to more complex neuronal systems, this analytical approach could play a significant role in the estimation of energy expenditure, from subcellular to whole-organism levels. For instance, by applying this approach to the morphology-based HH neuron model that was built on experimental data, we can obtain a more realistic subcellular distribution of energy use, compared to the one that was first obtained through calculations of Na + influx 1 . Similarly, the energy function could be used to replace previous estimates of the elevation of AP-related energy, which is highly dependent on the excess Na + entry ratio, within the neural signaling energy budgets that have been constructed for grey matter 2,4 and white matter 3 .
Furthermore, this analytical method of energy calculation can be used to investigate factors that influence energy consumption and reveal trade-offs between energetic constraints and signaling performance. In this study, we examined the impact of axonal geometry, AP propagation state, and ion channel density. Other factors that strongly affect AP conduction speed and spiking frequency [7][8][9][10][11]16 , such as information rate 6,7,50 and ion channel kinetics, are worth re-investigating using the analytical approach to confirm previous conclusions.
Moreover, this analytical approach provides a valuable way of understanding energetic constraints on neuronal morphology. For instance, the repression of axonal branching, which is a crucial regulatory mechanism that can counteract positive cues to evoke branching 51 , might also be an energy-saving strategy because metabolic costs linearly increase with increasing surface area of the axon branching system (Fig. 3E). In addition, how the growth of an axonal process is coordinated with the retraction of another is unknown on a molecular level 51 , and from an energy consumption standpoint, the repression of axonal branching might help to achieve physiological function within the limits of available energy resources. Furthermore, branch points, which are thought to represent a powerful process that filters communication with postsynaptic neurons 32 , might also have an energy-conserving function. Our results, which demonstrate a reduction in metabolic cost when conduction fails compared to when it succeeds (Fig. 2D,E), support the idea first proposed by Hallermann et al. that there is a tradeoff between energy minimization and the reliability of AP propagation 1 . In their study, successful propagation of APs into collaterals of the main axon depended on the inactivation kinetics of Na + channels, while in ours, it depended on the geometrical relationship between the diameters of the branches before and after the branch point. Additionally, a previous study 52 showed that the location of somata relative to the dendritic and axonal trees affects signal attenuation and hence metabolic costs. This result can be reinvestigated through direct energy calculations based on the energy function.
We believe this analytical approach will provide new insight into the metabolic costs of brain signaling, which is considered to be energetically expensive compared with other metabolic processes, although it is remarkably efficient compared with computers. On the one hand, the brain accounts for 20% of the body's total resting oxygen consumption [53][54][55] despite accounting for only 2% of body mass. This limited energy supply imposes serious metabolic constraints on brain function and evolution, including constraints on the density and size of neurons 7,56,57 and on functional connectivity 58 , network activity 59 , and coding strategies [60][61][62][63] . On the other hand, the human brain's power consumption is approximately 20 W 64 , which is merely 0.00001% of the power of today's most advanced supercomputers 65 . Finally, accurate estimation of the metabolic cost will improve the interpretation of functional magnetic resonance imaging data [12][13][14] .
Methods
Cable Model of Hodgkin-Huxley-type Cortical Axon. To describe AP propagation along an axon, the cable equation that describes the flow of ion currents along the axon needs to be derived 66,67 . Figure 1A gives the equivalent circuit of the cable model. During AP propagation, the membrane potential, V(x, t), changes along the x-axis, , R a = 0.15 kΩ · cm is the axial intracellular resistivity, a is the axon radius, and i a (μ A) is the axial current. According to Kirchhoff 's current law, i a varies along the x-axis, where i m is the membrane-crossing current and i stim is the AP-stimulating current. The membrane-crossing current i m consists of the capacitance and the ion currents, where φ regulates the temperature dependence, with Q 10 = 2.3. Combining Equations 1-3, we obtain, a a m stim 2 2 i.e., the cable equation, is the axial conductance and I stim = i stim /2π a (μ A/cm 2 ).
Energy Consumption of AP Propagation in the Cable Model. Inspired from the procedure applied to the classical HH model to compute the electrochemical energy involved in the dynamics of point model of single neuronal circuit 19 we plan to derive energy function of neuronal axon in the case of multi-compartment neuronal model as the method used in previous reference 19 (2 ) The role of the capacitance is to store energy from the battery supplier, not to consume energy, the total energy of the capacitors of the axon cable is The total consumed energy H g all by all dissipated items could be calculated by summation of battery energy, capacitance energy and the stimulus energy based on Eq. 6. To substitute Eqs. 7-9 into Eq. 6, we have ( ( , ) ) ( ( , ) ) ( ( , ) ) ( , ) ( , ) In summary, our approach is based on calculation of energy dissipation through conductances of the cable (including the axial conductance). In methods, we demonstrate the mathematical consistency of this with energy conservation and calculation of energy supplied by the ionic reversal potentials (and stimulus). During AP firing, the summation of external stimulus energy, the ion battery energy and the capacitor storing energy should be equal to the dissipated energy by conductors. Within the circuit, ion batteries supply energy, capacitors store energy while only conductors consume energy for action potential generation and initiation. Actually, computing the ATP consumption of the Na+ /K+ -ATPase pumps is a crude way to estimate the energy discharged from the batteries during AP firing. The explanation is as follows. The amount of energy stored in the ion batteries depends on the ion concentration gradients across the membrane. During AP firing, the membrane-crossing of the ions decreases the concentration gradients, which decreases the energy stored in the ion batteries. Conversely, after AP firing, the Na + -K + -ATPase recharges the ion batteries by restoring the ion concentration gradients to the equilibrium level. The ATP used in this restoring process is supposed to be the energy required to recharge the ion batteries after AP firing, which is numerically equal to the energy discharged from the batteries during AP firing.
Based on the explanation above, there are two main reasons for the inaccuracy of the sodium counting approach. First, it is not an accurate way to estimate the energy dissipated by the conductors. Second, even as a way to compute the energy discharged by the ion batteries, it only gives a coarse estimation by calculating the ATP molecules by the ratio of the number of pumped out sodium ions to needed ATP molecules, i.e. 3:1. According to our simulation, this coarse estimation is significantly lower than the direct calculation of the energy discharged from the batteries. Furthermore, whether the ATP calculating approach should be based on Na + -counting or K + -counting is controversial. Some believed that calculating the ATP molecules by the ratio of the number of pumped in potassium ions to needed ATP molecules, i.e. 2:1, is more accurate 5 . Actually, those two methods give different results.
From the biological perspective, we want to emphasize that, besides the sodium currents which have been widely recognized as making important contribution to the metabolic cost of AP firing, the potassium currents also contribute significantly to the cost. The potassium currents not only cause energy dissipation when flowing out of the membrane to restore the membrane potential, but also may be a part of the axial currents that result in the energy dissipation of propagating AP along the axon. | 8,540 | 2016-07-21T00:00:00.000 | [
"Physics"
] |
Power-Hop: A Pervasive Observation for Real Complex Networks
Complex networks have been shown to exhibit universal properties, with one of the most consistent patterns being the scale-free degree distribution, but are there regularities obeyed by the r-hop neighborhood in real networks? We answer this question by identifying another power-law pattern that describes the relationship between the fractions of node pairs C(r) within r hops and the hop count r. This scale-free distribution is pervasive and describes a large variety of networks, ranging from social and urban to technological and biological networks. In particular, inspired by the definition of the fractal correlation dimension D2 on a point-set, we consider the hop-count r to be the underlying distance metric between two vertices of the network, and we examine the scaling of C(r) with r. We find that this relationship follows a power-law in real networks within the range 2 ≤ r ≤ d, where d is the effective diameter of the network, that is, the 90-th percentile distance. We term this relationship as power-hop and the corresponding power-law exponent as power-hop exponent h. We provide theoretical justification for this pattern under successful existing network models, while we analyze a large set of real and synthetic network datasets and we show the pervasiveness of the power-hop.
Introduction
During the last decade a wealth of studies have identified an impressive set of universal network properties. Scale-free degree distributions [1][2][3][4], the presence of a giant component [5] and small average shortest paths coexisting with high clustering [6][7][8][9] are just some of them. The presence of scale-free distributions have attracted ideas from fractal theory and self-similarity [10] in the analysis of complex networks. An object is said to be self-similar if it appears the same in any length scales observed. For example, with M and ρ being the parameters of a self-similar object (e.g., mass and characteristic length respectively), the latter can be described through a power-law such as: M = ρ δ . Hence, fractal theory can quantify the dimensionality structure of such complex geometric objects beyond pure topological aspects based on similar scaling laws, since the exponent δ is the dimension of the scaling law. Furthermore, in contrast to topological dimensions, the fractal dimensions can take non-integer values allowing us to describe in greater detail the space that the object of interest fills [10]. In the case of complex networks these various dimensions carry information about many interesting underlying properties such as information diffusion and percolation [11][12][13][14]. While for networks embedded in a metric space the definitions can be applied almost unchanged [15,16] this is not the case for the majority of the complex networks we study.
Our work is inspired by the definition of the fractal correlation dimension D 2 on a cloud of points S. In particular, with C(r) being the fraction of pairs of points from S that have distance smaller or equal to r, S behaves like a fractal with intrinsic fractal dimension D 2 in the range of scales r 1 to r 2 iff: An infinitely complicated set S would exhibit the above scaling over all the possible ranges of r. However, real objects are finite and hence, Eq (1) holds only over a specific range of scales. For example, a cloud of points uniformly distributed in the unit square, has intrinsic dimension D 2 = 2, for the range of scales [r min , 1], where r min is the smallest distance among the pairs of S.
In the case of a complex network the set S is simply the set of vertices V. Eq (1) can be applied unchanged for networks embedded in a metric space where any of the distance metrics of the space (e.g., Euclidean distance) can be utilized. Motivated by definition (1) we explore the following power-hop conjecture, which avoids the limitation to only spatially embedded networks. With C(r) being the fraction of pairs of nodes within hop-count r and d the 90-th percentile diameter the power-hop conjecture states: Conjecture 1 (power-hop conjecture) Given a network G ¼ ðV; EÞ, C(r) follows a powerlaw relationship with r, in the range [2, d], i.e., C(r) / r h , 2 r d.
The above conjecture essentially implies that the plot of C(r) versus r in log-log scale will be a straight line with slope equal to the power-hop exponent h. Since, real networks are finite objects we expect the above scaling to hold only over a specific range-similar to the scale-free degree distribution that holds at the tail of the distribution in the majority of the cases [5]. In particular, we conjecture that this range begins at r = 2 and up to r = d. To reiterate, d is the effective diameter, that is, the distance within which 90% of the node pairs are. The contribution of this study is twofold. First, the empirical analysis of a large and diverse collection of network datasets (both real and synthetic) supports the power-hop conjecture. In particular, our results showcase the pervasiveness of this pattern. Of course, different types of networks exhibit different power-hop exponent. Second, we theoretically prove that under the successful Kronecker network model [17] this pattern is justified and preserved. We would like to note here that Faloutsos et al. [4] have shown that, for the Internet topology at both the autonomous system and router level, there is a power-law relationship between the hop distance r and the fraction of nodes within distance r. In our work we provide strong empirical evidence that this is also true in a diverse set of real networks (not only the Internet) within specific scales, supporting further the above conjecture.
Materials and Analysis
Materials: For our analysis we use a large collection of publicly available network datasets. Table 1 presents basic meta-data information for the networks, while Table 2 provides some statistics about these datasets. For each of the networks we calculate the all-pairs shortest paths and compute C(r) as a function of the hop-count r. We then provide a linear fit log C(r) = α + β Á log r, where essentially β = h.
We also build synthetic network datasets using two well-studied network formation models, namely, the Barabási-Albert (BA) preferential attachment [1] and the stochastic Kronecker graph model [17] that has been shown to be able to reconstruct many of the (static and temporal) patterns that real network datasets exhibit. The BA model has two parameters that we can tune: the number of edges m that every new node generates in the network and the power w of the preferential attachment (e.g., for w = 1 we have a linear preferential attachment). In brief, with BA model at every iteration a new node i is generated. This node further generates m new edges whose one end is attached to i while the other end is randomly attached to an already existing nodes j in the network, with a probability proportional to deg w j , where deg j is j's degree. The highest the degree of a node the more chances it has to acquire more edges; "the rich gets richer". Furthermore, a larger value for w pronounces this phenomenon leading to the generation of the "hub" nodes that attract the majority of the edges. In our experiments, we used the following sets of parameters: (m, w) = {(1, 0.5), (1, 0.65), (1, 0.9), (1, 1), (2, 0.5), (2, 0.65), (2, 0.9), (2, 1)}. The seed network for the BA model consists of two connected nodes. For every parameter setting we generated 100 topologies with 10,000 nodes and calculated h, d and the R 2 of the linear fit.
The stochastic Kronecker graph model accepts as input a probability matrix P 1 2 P N 1 ÂN 1 and an integer value k. P 1 describes a matrix that represents a small initiator network G, with Table 1. Network dataset meta-information. Basic information about the relationships that the real network datasets used for our power-hop analysis represent. For every network we define its nodes and edges, as well as its type (i.e., directed vs undirected). For the collaboration networks, HEP-TH captures the collaborations between high energy theoretical physicist, while the GR-QC captures the collaborations between physicist working on general relativity and quantum cosmology. Furthermore, the urban networks capture consecutive visitations by Foursquare users to venues in New York City and San Francisco respectively. the entry p ij describing the probability of the edge (i, j). Using P 1 we compute its k th Kronecker power (see S2 Text) and obtain matrix P k ¼ P k 1 . Then for every pair of nodes (u, v) we include an edge with probability p k u, v and the corresponding network is G k . In our experiments, we use k = 5 and N 1 = 5, while Analysis: Can we prove under realistic assumptions that networks will exhibit the behavior expected from the power-hop conjecture? Fast forwarding, the answer is yes and for our theoretical analysis we rely on the Kronecker model. The reason for using the latter, is that the Kronecker model (being a superset of the successful Recursive Matrix model [24]) has been shown to be able to match a number of static and time-evolving properties of networks [17]. Hence, with this model being a realistic assumption, we are interested in examining on whether it exhibits the power-hop scaling behavior observed in real network datasets. Furthermore, its mathematical tractability allows for analytical derivations. In particular, we have the following lemma, which we prove in the supplementary materials (see S3 Text): Lemma 1 Let M be a binary m × m matrix that describes the seed network G. Then in G K , the number of pairs reachable in r hops is c K r , where c r is the number of pairs reachable in r hops in G. Simply put, the above lemma says that Kronecker multiplications will retain the powerhop scaling. As a consequence of Lemma 1, if the initial network G has power-hop exponent h 1 , then after K Kronecker products/iterations, the resulting network will have a power-hop exponent Kh 1 . While Lemma 1 considers the simple case of binary seed matrix, the following Lemma considers the generic case of stochastic Kronecker model (see S4 Text for the proof) Lemma 2 Let M be a seed matrix, and let the edges of G be generated based on the probabilities in M, and G K generated based on M K . Then in G K , as K ! 1, the expected number of pairs reachable in r hops approaches c K r , where c r is the expected number of pairs reachable in r hops in G K .
Before describing our experimental results in detail we would like to emphasize here on some practical details for estimating the power-hop exponent. In particular, the formal definition of D 2 is the following: Theoretically, this limit provides the slope of the straight-line dependence between logC(r) and logr. Nevertheless, given that any practical/real dataset is finite, the above scaling holds only over a small range for r and the correlation dimension is computed from the slope over this range [10]. This is actually the case in our network datasets as well and we are following the literature on the practical fractal dimensions [10] to compute the power-hop exponent.
Results
Next we present the experimental results obtained from the real and synthetic datasets.
Real network datasets: We begin by analyzing various network datasets that represent different types of systems. In particular, we have analyzed technological (e.g., power grid, the Internet and the web-graph), social (e.g., friendship networks-Facebook, Gowalla-and coauthorship networks), urban (e.g., associations between locations based on human mobility patterns) and biological networks. Our results are presented in Table 2, where we also provide a pointer to the data sources. Note here that, for networks that are not connected, we analyze the largest-connected component.
As we can see the pattern is pervasive and the linear fit is very good for all of the cases (R 2 > 0.96) even for those with networks with large effective diameter. Fig 1 depicts corresponding dependencies between C(r) and r for all the networks in log-log scale. Only the range [2, d] is plotted for clarity.
Finally, we would like to emphasize here that the power-hop exponent of networks of the same type are very similar and are nicely ordered on the real number line (see Fig 2). For example, social networks have a power-hop exponent roughly in the range 3.5-4, while technological networks have a much smaller exponent, that is, in the range 2-2.5. The only biological network in our datasets (a protein-protein interaction network) also exhibits a different exponent (h % 3), while the urban networks examined exhibit exponents in the range 3-3.5. The latter, while capturing aggregated urban mobility patterns, have been created from and affected by the underlying social network layer. Furthermore, interactions within a city have been described through biological metaphors in both classical and recent studies [25][26][27][28][29]. Therefore, it might be logical that the power-hop exponent of our urban network datasets is close to that of both the biological and social networks. We would like here to emphasize on the fact that the groups depicted in Fig 2 are not clusters in the formal notion of unsupervised learning literature. The main point is that the power-hop exponent h of the networks of different types are ordered on the real line. In order to perform a robust cluster analysis one would need access to a much larger number of network datasets. Nevertheless, our results in this work show that h can potentially be used to classify/identify the type of a recorded network. Furthermore, given the consistency of the values for the power-hop exponent, network models that try to explain the link formation for different types of networks could potentially be evaluated on the basis of being able to reproduce this scaling as well.
Synthetic network datasets-BA model: Now we turn our attention to the synthetic datasets created using the BA network formation model. Fig 3 presents our results. In particular, we depict the box plots for the power-hop exponent h computed over all the networks generated with a given set of parameters. As per the power-hop conjecture, h is computed over the range [2, d].
As alluded to above, we examined networks generated with linear and sub-linear preferential attachment probabilities. We did not examine super-linear growth of the preferential attachment probability, since this leads to networks with extremely small diameter that do not allow us to obtain reliable statistical results due to the limited range between 2 and d. For each case we also considered two different scenarios, where each new node in the network generates one or two new edges. The effective diameter for each scenario is provided in the table at the Fig 3. Note here that, the quality of the power-law fit was excellent for all cases (R 2 > 0.98 in all networks). As we can see, if we focus on a specific value for the number of new edges added at each step (i.e., fixed m), linear (w = 1) preferential attachment provides networks with smaller powerhop exponent as compared to sub-linear preferential attachment (w 2 {0.5, 0.65, 0.9}). In fact, we performed statistical tests (t test) for every pair of the model parameters and all rejected the null hypothesis at the significance level α = 0.05 (i.e., that the average power hop exponent h is the same for the two scenarios). Only exception is the comparison between (1, 0.5) and (1, 0.65) where our hypothesis test cannot reject the null at α = 0.05 (p − value = 0.33). Nevertheless, the dynamic range obtained for h when m = 1 is fairly small and so are the differences, especially for values of the preferential attachment w that are close to each other. Furthermore, for a given type of preferential attachment we observe that increasing the number of edges added by every new node, leads to an increase in the power-hop exponent of the complex network. We can draw some intuition behind this behavior, if we recall the connection between the power-hop exponent of our network and the fractal correlation dimension of a point-set. In general, higher dimensionality is associated with higher degree of complexity, and hence, a larger power-hop exponent can be deemed as a sign of a more complex network structure. In the case of the BA model, larger w leads to networks with well defined attractors, which further give raise to a well-defined structure (e.g., a few hubs and every other vertex is connected to these few central entities). A well-defined structure exhibits regularity and less complexity and hence, its dimensionality (which in our case is captured through h) is lower. Conversely, when w is small, there are not clear attractors that emerge, and the structure is less clearly defined, requiring in a sense more details/features to describe the system in detail (i.e., higher dimensionality). Similarly, larger m leads to more complex topologies, since for a given w there are more chances for a node to emerge as an attractor.
Synthetic network datasets-Kronecker model: Finally, we perform experiments using the stochastic Kronecker graph formation model. To reiterate, being a superset of the successful recursive matrix network model [24], the Kronecker model is one of the most successful network formation models that can recover many of the properties of real networks [17]. Hence, we are interested in examining whether Kronecker networks obey the power-hop conjecture. In particular, as alluded to above, we choose uniformly at random the value for γ in the range [0. 3, 0.5]. Different values of γ will provide networks with different diameters (generally speaking larger γ provides networks with smaller diameter). As with the real network datasets and the synthetic datasets obtained from the BA model, the power-law fit Eq (1) is excellent again, with an R 2 > 0.95 (see S5 Text). The left part of Fig 4 depicts a scatter plot of the fractal dimension for each obtained network and the corresponding effective diameter. As we can notice there is a clear decreasing trend, which is also in agreement with our results from the BA network model. In the same figure we plot a linear fit with slope −0.07 (p-value < 0.05) and R 2 = 0.77. Furthermore, it is interesting to note that networks with smaller diameter exhibit a higher variance in the corresponding fractal dimension (somehow evident in the results from the real datasets as well). However, it is not clear whether this is an intrinsic phenomenon or just an artifact of the larger sample of networks with smaller diameter. Fig 4 (right part) presents a scatter plot of the R 2 of the calculated fractal dimension as a function of the effective diameter. The reason for presenting these results stems from the fact that when δ is small, one can argue that it is easier to fit a line through the smaller number of points. Nevertheless, our results show that when the effective diameter is large, the R 2 value is extremely high as well, further strengthening our observations for a universal network pattern. Finally, it is interesting to note that even the worst fit (which appears for the smallest effective diameter observed) is still fairly good (R 2 = 0.9327).
Finally, we experimentally demonstrate Lemma 1. We start with a seed matrix that describes the network depicted in the left part of Fig 5. This is essentially a network that exhibits a linear relationship between logC(r) and logr with a slope of h 1 = 0.6021 and effective diameter d = 5. We then compute the k th Kronecker power, k 2 {2, 3, 4}, and experimentally obtain the number of pairs of nodes within distance r 2 {1, . . ., d} for each network. The top-right part of
Discussion
As aforementioned there is a volume of work in the literature that studies the dimensionality of networks (e.g., [11][12][13][14][15][30][31][32]-with the list of course not being exhaustive). This body of literature studies the theoretical properties (e.g., phase transitions) and asymptotic behavior of the complex networks. However, despite the significant contribution of these studies, real networks are finite. Therefore, in our study we are more interested in the practical extensions of the existing literature, by analyzing and studying real-network datasets. Inspired by the fractal correlation dimension D 2 defined over a point-set, we examine the scaling behavior of the fraction C(r) of node pairs within hop-count r and pose the power-hop conjecture (see Conjecture 1).
To summarize, the contributions of our work are as follows: • Pervasiveness: all the real networks we studied support the power-hop conjecture, that is, they exhibit excellent fit for a power-law C(r) / r h (see Table 2) • Analysis: we theoretically proved that one of the most realistic models, the Kronecker model, automatically leads to power law behavior of the power-hop conjecture, given mild initial conditions (see Lemma 1) Furthermore, our empirical results (Fig 2) show that the value of the power-hop exponent is related to the type of network. While different networks exhibit different power-hop exponent, networks of the same type have similar exponents. | 4,970 | 2016-03-14T00:00:00.000 | [
"Computer Science"
] |
Estimation for Expected Energy Not Served of Power Systems Using the Screening Methodology of Cascading Outages in South Korea
The uncertainty of complex power systems increases the possibility of large blackouts due to the expectations of physical events, such as equipment failures, protection failures, control actions failure, operator error, and cyber-attacks. Cascading outage is a sequence of dependent failures of individual components that successively weaken the power system. A procedure to identify and evaluate the initiating events and perform sequential cascading analysis is needed. In this paper, we propose a new screening methodology based on sequential contingency simulation of cascading outages, including probabilistic analysis and visualization model. Performance of a detail cascading analysis using practical power systems is suggested and discussed. The proposed screening methodology will play a key role in identifying the uncontrolled successive loss of system elements.
Introduction
Future electrical power systems need to secure the stable supply of electric power according to the expansion of distributed power sources based on various technologies, and power systems need to be reliable against natural disasters and unexpected physical or cyber-attacks.In addition, the potential economic and social losses are so enormous that it is imperative to evaluate the risk of system failures in the event of a blackout.In the analysis of recent blackouts [1][2][3], large blackouts are described by cascading outages that cause deterioration of poser systems, as shown in Figure 1.A cascading outage is defined by sequential contingency caused by an initial disturbance.The initial disturbance includes natural disasters, imbalance and violations, and unexpected failures [4][5][6].An initial event causes an imbalance or violation in the system.This can lead to the tripping of facilities such as transmission lines, generators, and transformers, and outages can spread in succession, depending on the operating conditions of the system.An outage can occur sequentially, resulting in a separate system [7].If a violation occurs in the islanding system, it can be extended to a partial or total blackout.
Cascading outage is a sequence of dependent failures of individual components that successively weaken the power system.In case of an event that is related to weather, such as an earthquake or typhoon, it causes extensive power outages and creates a complex disaster that has a huge impact on society and economy.In addition, since the importance of securing social and public safety and increasing social demands of the disaster, it is necessary to consider for the influence of the power system resulted from disasters.
Transmission operations are focused on avoiding cascading that could lead to the loss of generation or load due to contingencies events.Transmission operations have to decide what kinds of the pre-determined mitigation solutions and when to avoid unacceptable system conditions or loss of generation due to cascading either in real time or study mode.Similarly, transmission planning is focused on the expected future impact of cascading on the potential loss of generation or load due to contingency events.Transmission planning involves deciding when expected system performance deficiencies are significant enough to implement system modifications [7][8][9].Power outages that are associated with weather-related cause significant damage to the transmission system [10][11][12].Severe weather changes can be a source of the large-scale disruptions to the system, increasing the loss of the grid and cause severe power outages along with a large amount of electricity demand concentrated in urban areas and aging of the system facilities.In the case of precipitation and strong winds, it may cause equipment damage due to contact with trees.It results in damage to the transmission power transmission which transmits large capacity electric power over a long distance.For such a reason, most of the blackouts occurred due to severe weather conditions.In Korea, there is no damage to snowstorms or hurricanes, which account for a large part of the United States (U.S.) outage rate.However, Korea is directly or indirectly influenced by three or more typhoons every year, and there is a risk of natural disasters due to a rainy season of about 30 days.Consequently, it is necessary to analyze the impacts of the power system caused be the climate change because the annual heat wave becomes severe and the heavy rainfall becomes heavy.In the Korean power system, the scale of the system is larger than that of the country, and it is complicated.In addition, the system is operated strictly in accordance with the reliability standard.Although the system is relatively robust, a partial power outage may occur, but the probability of the system collapse due to an imbalance in supply and demand is low.However, it is essential to examine the possibility of a major outage in the event of a critical situation when large-scale power transmission is concentrated in a metropolitan area and physical or cyber-attacks are carried out as in an armistice country.
In this paper, we propose a new screening methodology that is based on sequential contingency simulation of cascading outages, including probabilistic analysis and visualization model.We estimate the Expected Energy Not Served (EENS) by simulating the potential outages in the power system against serious scenarios.The screening methodology based on fast sequential contingency simulation is used to identify potential cascading events.Cascading outages are consecutively applied until thermal and voltage violations are alleviated or drop below the thresholds respectively [13,14].Loss of load and generation is monitored and reported and probabilities of initiating events and consequences may be represented by visualization model.The rest of this paper is organized as follows.Section 2 introduces a new approach to identify the successive loss of system elements.Section 3 describes the operational algorithms that are applied to the proposed sequential outage checkers.Section 3 reports the simulation results of the proposed screening methodology applied to Korean power systems.Conclusions and discussion are given in Section 4.
A New Approach to Identify the Successive Loss of System Elements
In this section, we introduce the screening methodology for estimating the EENS and cascading probability analysis for identifying transmission system weakness.
Screening Methodology for Estimating the EENS
There is a possibility of a large blackout due to various changes in institutional and environmental systems.As a large blackout spreads rapidly from initial disturbances, it is necessary to develop and introduce a cascading outage model that can simulate the likelihood of sequential events and the detailed process thereof.As a result, the model development and decision criteria are needed to analyze the uncertainty of the future system as expected.In this paper, we propose a method to determine the possibility of cascading outages through the estimation of EENS.
The occurrence of natural disasters, such as typhoons and earthquakes, increase the probability of a cascading outage.It is necessary to monitor the sequence by generating cascading outages in a power system [15,16], and analyzing the load tripping amount due to the disturbance.The outage of a power transmission tower and electric power conduit pipe with many transmission lines is considered as severe scenarios in bulk power systems.Each potential candidate element from an initial disturbance is performed when considering transmission line outages and then a cascading outage is repeated.The simulation is repeated until there are no overloaded lines or transformers in the process of cascading outages.The sequence of the cascading model of the transmission line is as follows [17,18].First, the probability of outage is calculated, according to the following equation.
where L i denotes the load factor of the corresponding line, and L MAX i denotes the overload rate tripping criterion flowing on the line i.As the tripping criterion is set at 150%, the L MAX i value is 150.Equation ( 1) is calculated according to the overloading transmission lines in each cascading outage occurrence and the transmission line with a value of 1 or higher is eliminated.
We propose the screening methodology for estimating the expected energy not served (EENS) of power systems [8,9,19], as shown in Figure 2. Firstly, the scenario setting for initial event selection should be preceded.All of the transmission lines and transformers in the system are simulated as N-1 contingency.As a result, facilities with an overload rate tripping criterion exceeding 1 are selected as initial disturbance events (IDE).According to the selected IDE, we check the convergence through the power flow analysis.As it does not converge after initial disturbance, the event is excluded from IDE and the other event is considered.In the convergence, we proceed to the following steps to continue the simulation.If the initial event has converged after simulating, then we check the violation.If there is a voltage problem, we shed off the load and install the fixed shunt reactors of the nearby area for eliminating the violation.Consequently, it is checked whether the generation range of the reference bus (slack bus) is exceeded, and corrective measures are taken by increasing or decreasing the output power of other generators, so that the power generator can be operated within the range.
If the system converges after all corrective actions are taken, we check whether the overload rate tripping criterion exceeds 1.If there are several facilities that are exceeding the overload rate tripping criterion, the one with the highest overload rate is removed and we repeat the process until no additional equipment exceeds the criterion.If the divergence cannot be resolved even after the corrective action is taken, then the simulation is judged to be divergent and the simulation is terminated.When the power generation amount of the system is smaller than the load amount or when the load shedding amount due to the corrective action exceeds a specific value, it is determined to be divergent and the operation is terminated.In addition, it there is no longer any overload facility and the amount of change as the EENS is calculated as compared to the load of the system that initiated the simulation as the screening simulation of cascading is terminated.
Cascading Probability Analysis for Identifying Transmission System Weaknesses
As a cascading outage can be a result of unexpected interactions or problems that are associated with various factors, it needs to be approached by a probabilistic model [20][21][22].It is necessary to examine not only the possibility of blackout in a deterministic method, but also a method with a probabilistic approach and expressing it visually, which can help with evaluating the result.
In this paper, we apply the branching process [23][24][25][26] technique to simulate cascading outages for analyzing each step from the sequential process.The simulation group is divided into generation at each stage.The first generation is created due to the initial event and the generated event becomes the parent, causing the next child event.
If the number of events caused by the initial disturbance is |Z 0 | and the number in the next step is |Z 1 |, the propagation rate may be expressed as λ 0 = |Z 1 |/|Z 0 | in initial accident.As a result, it can be expressed as Equation (2) [27,28].Where Z m is the number of outages in m generation and Z m+1 means the number in m + 1 generation.The following steps are necessary to stochastically approach to a cascade occurrence.Firstly, it is necessary to generate occurrence sequence data from sequential events through random occurrence failure.To some extent, it is necessary to divide events into generations based on the data.In the case, P i,m is the number of times that the event occurs in the next generation due to the event of facility i in generation m, C i,m denotes the number of effective children in facility i in generation m and D i,m denotes the set of events due to the previous event.
where P i,m is the number of times that the event occurs in the next generation due to the event of facility i in generation m, C i,m denotes the number of effective children in facility i in generation m, and D i,m denotes the set of events due to the previous event.Both Equations ( 3) and ( 4) may be written with the corresponding parameters.
When the value corresponding to each variable is calculated, then the propagation rate is expressed in Equation ( 5).Based on Equation ( 5), the process and probability of each event may be determined.
Case Studies on Korean Power Systems
In this section, we report the results of applying the proposed methodology described in details in Section 2 to practical power systems, Korean power systems.
Estimation of Expected Energy Not Served
In this section, we estimate the EENS by simulating the potential outages in Korean power systems.South Korea's peak demands for electricity in 2015 and 2020 are considered.We repeat all of the steps described in Section 2, while increasing the load level of the system and calculating the EENS for each system condition.When the estimated EENS is displayed according to the load level, there is a point where the EENS value increases dramatically, which becomes the critical load.In order to analyze this, the simulation of the cascading outages was carried out by gradually increasing the load of the system from 100% to phase.In the case where the load level is 120% in peak demand in 2015, the result of transmission line between buses A and B as the initial disturbance is shown in Figure 3.During the initial event, an overload of 182% occurred in the transmission line between buses C and D, and additional events continued by simulation steps.Based on the sequential steps, the resulting divergence and violation are stabilized by various adjustments, such as load shedding and fixed shunt reactors.The EENS amount that was calculated from the sequential steps is 1647.6MW, which is the largest amount at the load level.The name of the buses is not clearly displayed for security reasons.Figure 4 represents the estimated EENS using the proposed screening method.The critical load is indicated for each system.The critical load means the point at which the magnitude of the average power failure and the probability distribution increase sharply [8,29,30].This is a reference point where cascading outage occurs and is the basis for assessing the risk of a blackout.For the Korean power system, the critical load points are indicated by 1.53% and 1.56% for peak demand conditions in 2015 and 2020, respectively.Simulation results can be considered for system planning to avoid the possibility of power outages and the phenomena that are transferred to a large-scale system can be identified.Further, if a margin of large-scale power outage is calculated and provided to system operators, more systematic operation and planning will be suggested.The application of screening method for severe scenarios is performed to examine the possibility of major system outages.We assume 882 failures of practical power systems for 71 power transmission towers and 40 electric power conduit pipes in Korean power grids.Each facility outage would be set as an initial disturbance, and the sequential process should be recorded and analyzed for subsequent events.Cascading outages are consecutively applied until thermal and voltage violations are alleviated or drop below the thresholds, respectively.If no facilities, such as power transmission towers and electric power conduit pipes, can be removed after the initial event, the additional event to the line and transformer connected to the initial event substation is simulated.When a scenario setting is conducted, the estimated failure list is created by dividing the machined line in the steel tower into one line, two lines, and so on.We perform the cascading simulation using 882 possible cases in total, involving 71 transmission towers and 40 electric power conduit pipes for power transmissions.It assumes the collapse of a four-wire transmission tower or a fire situation on multi-wire electric power conduit pipe as the most critical outage scenario of outage that may occur in terms of a major outage.We estimate the stability of the system by assuming additional breakdowns for each event occurrence and calculate the scale of the power failure depending on scenario-specific sequential events by simulating large-scale outage scenarios.
The EENS estimated for each region can be summarized, as shown in Table 1.The name of the region is not clearly displayed for security reasons.In the outage of electric power conduit pipes, the result is blank in the area where the facility is not affected.Based on the results, if a failure occurs in this scenario, a large-scale blackout may occur that has not been experienced before.In case of region A, the EENS that occurs when a power transmission tower is somewhat small, but the EENS calculated from an electric power conduit pipes event is large.Therefore, it is necessary to establish countermeasures against the event in the area.Consequently, it is possible to analyze regional events by simulating cascading outages and estimating the EENS for each scenario.As a countermeasure against the problem, it is considered that the installation of special protection system (SPS) and the expansion of new power facilities should be considered.
We give the review of the blackout cases that are caused by cascading outages.Based on the estimated EENS, it is derived that the threshold of large-scale blackout in the Korean power system occurs at over 150%, which can help to determine the reliability of the power system.In order to analyze the damage that is caused by natural disasters that cause severe events, we simulate some scenarios, such as the collapse of a transmission line tower or fires in electric power conduit pipes, which can actually occur in the case of a large power outage scenario.The scenarios are simulated locally, calculating the scale of power outage for each region.It can be used to reinforce and enhance new facilities installation and systems in vulnerable areas.In addition, there is a plan to utilize real-time phasor measurement unit (PMU) data in order to prevent a cascading outage [31,32].It can be used to prevent and mitigate the cascading outage through accurate and quick information acquisition from PMU data on the power system.
Visualization of Cascading for Probabilistic Analysis
To analyze the occurrence probability of cascading outages, we use the peak demand data in 2015.The underground transmission line track is considered to be one line, and the overhead transmission line is considered to be two lines.In the initial event for selecting a candidate transmission line, the transmission line with an overloading over 150% is selected for the N-1 contingency simulation.Additionally, the metropolitan area is reviewed first.Adjacent matrices are used to visualize the cascading outages, and to show that each event is propagated [33,34].The adjacency matrix may be expressed by the following equation.
Adjacency Matrix = M ij = 1 (outages spread f rom i to j) 0 (outages that does not spread f rom i to j) Based on Equation (6), if an event spread from i to j, it is set to 1; otherwise, it is 0. By analyzing the simulation results, it is possible to monitor the propagation path and the probability allowing for an adjacent matrix to be generated based on the result.We construct a visualization graph with adjacency matrices and to connect each outage by event occurrence, frequency of occurrence, and propagation rate.
Figure 5 shows the simulation results of cascading outages in the metropolitan area at the peak demand condition in 2015.Figure 5a shows the path of the simulated event's occurrence.From the results, the circle refers to the event that occurred, and the arrow direction refers to the process of the cascading outage.The sequential propagation step of cascading outages is expressed by the line thickness.In other words, the lower the cascading outage generation step, the larger the thickness of the line.Figure 5b is the visualization result of sequential cascading outages and shows the calculated probability.We use the cascading outage data and the number of occurrences that are derived from Figure 5a.Based on the number of occurrences, the propagation rate of the cascading outage was calculated and expressed as the size of the circle.Additionally, the transmission line connected to each circle indicates the number of events that occurred from the arrow's beginning to the next event.
In other words, the larger the circle size, the higher the probability of spreading to other lines, and the thicker the line, the more times that the event occurs.Based on these results, it is possible to recognize which route is most likely to spread to another line due to cascading outages occurring in the metropolitan area, and which route it will spread.
In South Korea, power transmission voltages are 154 kV and 345 kV on major grids and 765 kV facilities are operated.The power system on Jeju Island is connected to the mainland with High Voltage Direct Current transmission system (HVDC).During the restoration after a large blackout, the power system in South Korea is divided into seven regions to restore the system and synchronize it to each region.It is possible to apply the proposed screening method to the power system for the pre-defined regions if it is regarded as a regionally interconnected system, as shown in Figure 6.The proposed screening method will be applied to the interconnected system in Future.
Discussion and Conclusions
Cascading outage is a sequence of dependent failures of individual components that successively weaken the power system.In this paper, we propose the screening methodology that is based on sequential contingency simulations is used to identify potential cascading events.Cascading outages are consecutively applied until thermal and voltage violations are alleviated or drop below the thresholds, respectively.Loss of load and generation is monitored and reported, and the probabilities of initiating events and consequences may be represented by the visualization model.
The proposed screening methodology will play a key role in identifying the uncontrolled successive loss of system elements triggered by an incident at any location.Cascading results in widespread electric service interruption that cannot be restrained from sequentially spreading beyond an area predetermined by studies.In addition, the visualization model can identify potential future system weaknesses and limiting facilities.The screening methodology implemented by Korean power systems is presented.
In future, the transient progression including voltage instability, frequency instability, and small signal instability will be added to the screening methodology and the application to practical power systems will be discussed.
Figure 1 .
Figure 1.Sequential process of a large blackout.
Figure 2 .
Figure 2. Sequence of the screening methodology for estimating Expected Energy Not Served (EENS).
Figure 6 .
Figure 6.Power grids including seven regions in South Korea.
Table 1 .
Simulation Result of the Expected Energy Not Served (EENS) for severe scenarios. | 5,125.4 | 2017-12-29T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Your Sentiment Precedes You: Using an author’s historical tweets to predict sarcasm
Sarcasm understanding may require information beyond the text itself, as in the case of ‘I absolutely love this restaurant!’ which may be sarcastic, depending on the contextual situation. We present the first quantitative evidence to show that historical tweets by an author can provide additional context for sarcasm detection. Our sarcasm detection approach uses two components: a contrast-based predictor (that identifies if there is a sentiment contrast within a target tweet), and a historical tweet-based predictor (that identifies if the sentiment expressed towards an entity in the target tweet agrees with sentiment expressed by the author towards that entity in the past).
Introduction
Sarcasm 1 is defined as 'the use of remarks that clearly mean the opposite of what they say, made in order to hurt someone's feelings or to criticize something in a humorous way' 2 . An example of sarcasm is 'Being stranded in traffic is the best way to start my week' (Joshi et al., 2015). There exists a sentiment contrast between the phrases 'being stranded' and 'best way' which enables an automatic sarcasm detection approach to identify the sarcasm in this sentence.
Existing approaches rely on viewing sarcasm as a contrast in sentiment (Riloff et al., 2013;Maynard and Greenwood, 2014). However, consider the sentences 'Nicki Minaj, don't I hate her!' or 'I love spending four hours cooking on a weekend!'. The sarcasm is ambiguous because of a likely hyperbole in the first sentence, and because sentiment associated with 'four hours cooking' depends on how much the author/speaker likes cooking. Such sarcasm is difficult to judge for humans as well as an automatic sarcasm detection approach. Essentially, we need more context related to the author of these sentences to identify sarcasm within them.
The question we aim to answer in this paper is: 'What sentiment did the author express in the past about the entities in the tweet that is to be classified? Can this information help us understand if the author is being sarcastic?' We present the first quantitative evidence to show that historical text generated by an author may be useful to detect sarcasm in text written by the author. In this paper, we exploit the timeline structure of twitter for sarcasm detection of tweets. To gain additional context, we explore beyond the tweet to be classified (called 'target tweet'), and look up the twitter timeline of the author of the target tweet (we refer to these tweets as the 'historical tweets'). Our method directly applies to discussion forums and review websites, where other posts or reviews by this author may be looked at.
The rest of the paper is organized as follows. Section 2 contains the related work. We present a motivating example in Section 3, and describe the architecture of our approach in Section 4. The experimental setup and results are in Sections 5 and 6. We present a discussion of challenges observed with the proposed historical tweet-based approach in Section 7, and conclude the paper in Section 8.
Related work
Sarcasm detection relies mostly on rule-based algorithms. For example, Maynard and Greenwood (2014) predict a tweet as sarcastic if the sentiment embedded in a hashtag is opposite to sentiment in the remaining text. Similarly, Riloff et al. (2013) predict a tweet as sarcastic if there is a sentiment contrast between a verb and a noun phrase. Similarly, supervised approaches implement sarcasm as a classification task that predicts whether a piece of text is sarcastic or not (Gonzalez-Ibanez et al., 2011;Barbieri et al., 2014;Carvalho et al., 2009). The features used include unigrams, emoticons, etc. Recent work in sarcasm detection deals with a more systematic feature design. Joshi et al. (2015) use a linguistic theory called context incongruity as a basis of feature design, and describe two kinds of features: implicit and explicit incongruity features. Wallace et al. (2015) uses as features beyond the target text as features. These include features from the comments and description of forum theme. In this way, sarcasm detection using ML-based classifiers has proceeded in the direction of improving the feature design, while rule-based sarcasm detection uses rules generated from heuristics.
Our paper presents a novel approach to sarcasm detection: 'looking at historical tweets for sarcasm detection of a target tweet'. It is similar to Wallace et al. (2015) in that it considers text apart from the target text. However, while they look at comments within a thread and properties of a discussion thread, we look at the historical tweets by the author.
Motivating example
Existing approaches detect contrast in sentiment to predict sarcasm. Our approach extends the past work by considering sentiment contrasts beyond the target tweet. Specifically, we look at tweets generated by the same author in the past (we refer to this as 'historical tweets'). Consider the example in Figure 1. The author USER1 wrote the tweet 'Nicki Minaj, don't I hate her?!'. The author's historical tweets may tell us that he/she has spoken positively about Nicki Minaj in the past. In this case, we observe an additional tweet where the author describes having a good time at a Nicki Minaj concert. This additional knowledge helps to identify that although the target tweet contains the word 'hate', it is sarcastic. Figure 2 shows the architecture of our sarcasm detection approach. It takes as input the text of a tweet and the author, and predicts the output as either sarcastic or non-sarcastic. This is a rule-based sarcasm detection approach that consists of three modules: (a) Contrast-based Predictor, (b) Historical Tweet-based Predictor, and (c) Integrator. We now describe the three modules in detail.
Contrast-based Predictor
This module uses only the target tweet. The contrast-based predictor identifies a sarcastic tweet using a sentiment contrast as given in Riloff et al. (2013). A contrast is said to occur if: • Explicit contrast: The tweet contains one word of a polarity, and another word of another polarity. This is similar to explicit incongruity given by Joshi et al. (2015).
• Implicit Contrast: The tweet contains one word of a polarity, and a phrase of the other polarity. The implicit sentiment phrases are extracted from a set of sarcastic tweets as described in . This is similar to implicit incongruity given by Joshi et al. (2015).
For example, the sentence 'I love being ignored.' is predicted as sarcastic since it has a positive word 'love' and a negative word 'ignored'. We include rules to discount contrast across conjunctions like 'but' 3 .
Historical Tweet-based Predictor
This module uses the target tweet and the name of the author. The goal of the historical tweet-based predictor is to identify if the sentiment expressed in the tweet does not match the historical tweets posted by the author. The steps followed are: 1. The sentiment of the target tweet is computed using a rule-based sentiment analysis system that we implemented. The system takes as input a sentence, and predicts whether it is positive or negative. It uses simple rules based on lookup in a sentiment word list, and rules based on negation, conjunctions (such as 'but'), etc. On Sentiment140 4 corpus, our sentiment analysis system performs with an accuracy of 58.49%.
2. The target tweet is POS-tagged, and all NNP sequences are extracted as 'target phrases'.
3. 'Target phrases' are likely to be the targets of the sentiment expressed in the tweet. So, we download only the historical tweets which contain the target phrases 5 .
4. The sentiment analysis system also gives the sentiment of the downloaded historical tweets. A majority voting-based sentiment in the historical tweets is considered to be the author's historical sentiment towards the target phrase.
5. This module predicts a tweet as sarcastic if the historical sentiment is different from the sentiment of the target tweet.
A target tweet may contain more than one target phrase. In this case, the predictor considers all target phrases, and predicts the tweet as sarcastic if the above steps hold true for any of the phrases. Possible lacunae in this approach are: 1. If the historical tweets contained sarcasm towards the target phrase, while the target tweet did not, the predictor will incorrectly mark the tweet as sarcastic.
2. If the historical tweets contained sarcasm towards the target phrase, and so did the target tweet, the predictor will incorrectly mark the tweet as non-sarcastic.
3. If an entity mentioned in the target tweet never appeared in the author's historical tweets, then no input from the historical tweet is considered.
Integrator
This module combines the predictions from the historical tweet-based predictor and the contrastbased predictor. There are four versions of the module: 1. Only historical tweet-based: This prediction uses only the output of the historical tweet-based predictor. This also means that if this author had not mentioned the target phrase in any of his/her tweets in the past, the tweet is predicted as non-sarcastic.
2. OR: If either of the two predictors marked a tweeet as sarcastic, then the tweet is predicted as sarcastic. If not, then it is predicted to be non-sarcastic.
3. AND: If both the predictors marked a tweet as sarcastic, then the tweet is predicted as sarcastic. If not, then it is predicted to be nonsarcastic.
4. Relaxed-AND: If both the predictors marked a tweet as sarcastic, then predict the tweet as sarcastic. If the historical tweet-based predictor did not have any tweets to look up (i.e., the author had not expressed any sentiment towards the target in the past), then consider only the output of the contrast-based predictor.
Experimental Setup
For the contrast-based predictor, we obtain the implicit sentiment phrases as follows: (1) We download a set of 8000 tweets marked with #sarcasm, and assume that they are sarcastic tweets. These are not the same as the test tweets, (2) We extract 3-grams to 10-grams (1-gram represents a word) in these tweets, (3) We select phrases that occur at least thrice. This results in a set of 445 phrases. These phrases are used as implicit sentiment phrases for the contrast-based predictor. For the historical tweet-based predictor, we first POS tag the sentence using Malecha and Smith (2010). We then select NNP sequences 6 in the target tweet as the target phrase. Then, we download the complete timeline of the author using Twitter API 7 , and select tweets containing the target phrase. The historical tweet-based predictor then gives its prediction as described in the previous section.
Both the predictors rely on sentiment lexicons: The contrast-based predictor needs sentimentbearing words and phrases to detect contrast, while the historical tweet-based predictor needs sentiment-bearing words to identify sentiment of a tweet. We experiment with two lexicons: 1. Lexicon 1 (L1): In this case, we use the list of positive and negative words from Pang and Lee (2004).
Lexicon 2 (L2):
In this case, we use the list of positive and negative words from Mohammad and Turney (2013).
Based on the two lexicons, we run two sets of experiments: 1. Sarcasm detection with L1 (SD1): In this set, we use L1 as the lexicon for the two predictors. We show results for all four integrator versions (Only historical tweet-based, AND, OR, Relaxed-AND).
Sarcasm detection with L2 (SD2):
In this set, we use L2 as the lexicon for the two predictors. We show results for all four integrator versions (Only historical tweet-based, AND, OR, Relaxed-AND).
For all experiments, we use the test corpus given by Riloff et al. (2013). This is a manually annotated corpus consisting of 2278 tweets 8 , out of which 506 are sarcastic.
Results
Tables 1 and 2 show Precision (P), Recall (R) and F-score (F) for SD1 and SD2 respectively. We compare our values with the best reported values in Riloff et al. (2013). This comparison is required because the test corpus that we used was obtained from them. 6 We also experimented with NN and JJ NN sequences. However, the output turned out to be generic. 7 https://dev.twitter.com/overview/api 8 Some tweets in their original corpus could not be downloaded due to privacy settings or deletion. Table 2: Averaged Precision, Recall and F-score of the SD2 approach for four configurations of the integrator Table 1 shows that using only the historical tweet-based predictor, we are able to achieve a comparable performance (F-score of approximately 0.49 in case of SD1 and SD2 both) with the benchmark values (F-score of 0.51 in case of Riloff et al. (2013)). The performance values for 'Only historical tweet-based' are not the same in SD1 and SD2 because the lexicon used in predictors of the two approaches are different. This is obviously low because only using historical contrast is not sufficient.
The AND integrator is restrictive because it requires both the predictors to predict a tweet as sarcastic. In that case as well, we obtain F-scores of 0.617 and 0.627 for SD1 and SD2 respectively. Relaxed-AND performs the best in both the cases with F-scores of 0.826 and 0.882 for SD1 and SD2 respectively.
We experiment with two configurations SD1 and SD2, in order to show that the benefit of our approach is not dependent on the choice of lexicon. To understand how well the two captured the positive (i.e., sarcastic tweets) class, we compare their precision and recall values in Table 3. We observe that the positive precision is high in case of OR, AND, Relaxed-AND. The low precisionrecall values in case of 'Only historical tweetbased' indicates that relying purely on historical tweets may not be a good idea. The positive precision in case of Relaxed-And is 0.777 for SD1 and 0.811 for SD2. The contrast within a tweet (captured by our contrast-based predictor) and the contrast with the history (captured by our historical tweet-based predictor) both need to be applied together.
Discussion
Our target phrases are only NNP sequences. However, by the virtue of the POS tagger 9 used, our approach predicts sarcasm correctly in following situations: 1. Proper Nouns: The tweet 'because Fox is well-balanced and objective?' was correctly predicted as sarcastic because our predictor located a past tweet 'Fox's World Cup streaming options are terrible'.
User Mentions: User mentions in a tweet
were POS-tagged as NNPs, and hence, became target phrases. For example, a target tweet was '@USERNAME ooooh that helped alot', where the target phrase was extracted as @USERNAME. Our approach looked at historical tweets by the author containing '@USERNAME'. Thus, the predictor took into consideration how 'cordial' the two users are, based on the sentiment in historical tweets between them.
3. Informal Expressions: Informal expressions like 'Yuss' were tagged as NNPs. Hence, we were able to discover the common sentiment that were used in, by the author. The target tweet containing 'Yuss' was correctly marked as sarcastic.
However, some limitations of our approach are: 1. The non-sarcastic assumption: We assume is that the author has not been sarcastic about a target phrase in the past (because we assume that the historical tweets contain an author's 'true' sentiment towards the target phrase). Past work in sarcasm detection focuses on target tweet only. We present a approach that predicts sarcasm in a target tweet using the tweet author's historical tweets. Our historical tweet-based predictor checks if the sentiment towards a given target phrase in the target tweet agrees to the sentiment expressed in the historical tweets by the same author. We implement four kinds of integrators to combine the contrast-based predictor (which works on the target tweet alone) and the historical tweet-based predictor (which uses target tweet and historical tweets). We obtain the best F-score value of 0.882, in case of SD2, where the contrast predictor uses a set of polar words from a word-emotion lexicon and phrases with implicit sentiment.
Our work opens a new direction to sarcasm detection: considering text written by an author in the past to identify sarcasm in a piece of text. With availability of such data in discussion forums or social media, sarcasm detection approaches would benefit from making use of text other than just the target text. Integration of historical text-based features into a supervised sarcasm detection framework is a promising future work. | 3,750.6 | 2015-09-01T00:00:00.000 | [
"Computer Science"
] |
A Simplified Two-Dimentional Unit Cell Model for Analyzing the Transverse Shear Stiffness of Three-Dimentional Truss-Like Core Sandwich Beam
Problem statement: In structural analysis using finite element softwa re, a complex threedimensional model affects the speed of calculation time. A simplified two-dimensional model should be used for analyzing structural responses t o reduce the calculation time. Approach: This study presents a study on the analysis of the trans verse shear stiffness of three-dimensional trusslike core sandwich beam using a simplified two-dime nsional unit cell model. Three kinds of core topologies: A truss core, an X-truss core and a bidirectional corrugated-strip core are chosen to be analyzed in this study. The presented simplified tw o-dimensional unit cell model is compared in transverse shear stiffness with the three-dimension al fi ite element unit cell model. Results: In this study, the results show that the simplified two-dim ensional unit cell model can be used for analyzing the transverse shear stiffness of three-d imensional truss-like core sandwich beam with a good correlative with the three-dimensional finite el ment unit cell model. Conclusion: From the finding, the transverse shear stiffness of three-di mensional truss-like core sandwich beam can be obtained from the simplified two-dimensional unit c ell model. This simplified two-dimensional model can be used to substitute the complex three-d im nsional finite element model; consequently, the speed of calculation time is increased.
INTRODUCTION
In recent structural engineering analysis, a finite element method is a tool for analyzing transverse shear stiffness of beam structures. It is, however, an expensive time-consuming method if applying the finite element method to three-dimensional models.
The force and distortion relationship of unit cell approach is an analytical method for analyzing the transverse shear stiffness of sandwich beams. Libove et al. (1951) have used this method in analytical study of simple corrugated core sandwich beams. Lok and Cheng (2000) have used it for analyzing the transverse shear stiffness of simple truss core sandwich beams. Leekitwattana et al. (2011) have also used the unit cell approach for analyzing the transverse shear stiffness of complex truss-like core sandwich beams. It was found that the force and distortion relationship of unit cell approach is an accurate method if applied for the truss-like core sandwich beam (Libove et al., 1951;Lok and Cheng, 2000;Leekitwattana et al., 2011).
This study aims to present an application of the force and distortion relationship of simplified twodimensional unit cell approach in analysis the transverse shear stiffness of three-dimensional trusslike core sandwich beams, as shown in Fig. 1. Fig.1: A three-dimensional truss-like core sandwich beam
Fundamental of unit cell approach:
The transverse shear stiffness of beams can be deduced from the relationship between the applied transverse shear force, Q y and the corresponded deflections δ y and δ z of a unit cell which is a repetitive unit of a sandwich beam , as demonstrated in Fig. 2. The relationship between the applied transverse shear force, Q y and the corresponded deflections δ y and δ z can be expressed as Eq. 1. This expression provides the direct calculation of the transverse shear stiffness, D Qy . (1)
Three-dimensional finite element unit cell model:
Three-dimensional finite element model of unit cell, as shown in Fig. 3, is analyzed. The model has a fixed support at line 1-1' and a roller support at line 5-5'. Additional constraint boundary conditions are set up along the lines 4-4' and 8-8' to maintain the displacement equality of both lines in the z-direction. The unit cell consists of the top and bottom steel faceplates and a truss-like core. These parts are modeled using the SOLID45 element type-an eightnode element having three degrees of freedom in nodal translations at each node-from the ANSYS element library Swanson Analysis Systems, 2007. In this study, the typical 2 mm finite element mesh size is used. The connections between the faceplates and core elements are defined as fully rigid.
The commercial finite element software ANSYS Release 11 is used in this study. The ANSYS is run under the operating software MS Windows XP Professional Version 2002. The hardware condition is a desktop computer with Intel® CoreTM 2 CPU 6600 @ 2.40 GHz and 1.98 GB of RAM.
Simplified two-dimensional unit cell model: Instead of using the three-dimensional finite element model, the unit cell can be presented in a simplified twodimensional model . Leekitwattana et al. (2011) have presented the model and consequent solution matrices for obtaining the corresponded deflections δ y 4 , δ y 8 and δ z 4 of the unit cell presented in Fig. 2.
In this study, the solution matrices are encoded in and solved by the commercial mathematical software MATLAB Version 6.1. The MATLAB is also run under the same operating software and hardware conditions as those used with the ANSYS.
Truss core X-truss core Studied core topologies: In this study, threedimensional truss-like cores: A truss core, an X-truss core and a bi-directional corrugated-strip core, as shown in Fig. 4, are studied using the three-dimensional finite element unit cell approach. The geometrical dimensions of these core are presented in Table 1. These core topologies are also studied using the simplified twodimensional unit cell approach.
Material properties of steel:
In this study, the steel with perfectly elastic-plastic property is used. The tension and compression behaviors of steel are assumed the same. The physical properties of steel are defined in Table 2. In the ANSYS, this material property of steel is defined using the bi-linear model.
RESULTS
Based on the transverse shear stiffness formulation techniques presented in the materials and methods section, the transverse shear stiffness, D Qy , of the sandwich beam with three core topologies, i.e., the truss core, the X-truss core and the bi-directional corrugatedstrip core, are obtained and presented in Fig. 5.
In Fig. 5, the transverse shear stiffness, D Qy , is first factorized by E s t where E s is the modulus of elasticity of steel and t is the thickness of sandwich faceplate. Then, it is plotted against s y /d in the range of 0.25≤s y /d≤2.0 where s y is the horizontal projection of the extended local neutral axis of the inclined part of the core and d is the effective depth of the sandwich beam, i.e., d = t+h c . Here, s y /d is used to define the angle of the inclined part of the core. It is equal to (s c -2f c )/(h c -t c ). Thus, the horizontal length of the unit cell, sc, can be obtained from this expression (Chomphan and Leekitwattana, 2011). (a) truss core, (b) X-truss core and (c) bidirectional corrugated-strip core sandwich beams obtained at any sy/ d ratio from the threedimensional unit cell approach and from the simplified two-dimensional unit cell approach
DISCUSSION
From the comparison of the factorized transverse shear stiffness, D Qy /E s t, of the truss core sandwich beams obtained from the three-dimensional unit cell approach using finite element method and the simplified two-dimensional unit cell approach as presented in Fig. 5a, it can be seen that both approaches agree very well with each other.
From the comparison of the factorized transverse shear stiffness, D Qy /E s t, of the X-truss core sandwich beams obtained from both approaches as presented in Fig. 5b, it can also be seen that the simplified twodimensional unit cell approach agrees well with the three-dimensional unit cell approach with a few percentage differences. The percentage difference of 6.4%, for example, occurs at s y /d = 0.75.
It can be seen from Fig. 5c that the simplified twodimensional unit cell approach applied for the bidirectional corrugated-strip core sandwich beams also agrees well with the three-dimensional unit cell approach. The percentage differences between these approaches are less than 15%. The maximum percentage difference of 14.4% occurs at sy/d = 0.50. The percentage difference of 11.6% occurs at sy/d = 0.75 and the percentage difference of 2.5% occurs at sy/d = 2.0.
According to these comparisons, it can be seen that the transverse shear stiffness, D Qy , of the threedimensional truss-like core sandwich beams obtained from the simplified two-dimensional unit cell approach is well consistent with the three-dimensional unit cell approached applied with the finite element method. Therefore, the simplified two-dimensional unit cell approach can be used for analyzing the transverse shear stiffness, D Qy , of not only the simple truss core sandwich beams but also the complex truss-like core sandwich beams, i.e., the X-truss core and the bidirectional corrugated-strip core sandwich beams presented in this study.
The simplified two-dimensional unit cell approach is considerably more advantageous than the threedimensional unit cell approach applied with the finite element method. This is because the simplified twodimensional unit cell approach can be modeled in simple line-art model. There is no requirement to do three-dimensional solid model in this approach.
CONCLUSION
This study presents the application of the simplified two-dimensional unit cell approach to obtain the transverse shear stiffness, D Qy , of the threedimensional truss-like core sandwich beams. Three core topologies of the truss core, the X-truss core and the bidirectional corrugated-strip core are presented as examples of three-dimensional truss-like core topology. The responses of the transverse shear stiffness, D Qy , obtained from the simplified two-dimensional unit cell approach applied with the MATLAB are presented and compared with those obtained from the threedimensional unit cell approach applied with the ANSYS. It is found that the simplified two-dimensional unit cell approach agrees very well with the threedimensional unit cell approach. The simplified twodimensional unit cell approach can be applied to the three-dimensional truss-like core sandwich beams as an alternative to the finite element method; consequently, it can be used to increase the speed of calculation time. | 2,263.8 | 2012-02-01T00:00:00.000 | [
"Engineering"
] |
Dynamic Bandwidth Scheduling of Software Defined Networked Collaborative Control System
From the perspective of collaborative design of control and scheduling, a software-defined networked collaborative control system (SD-NCCS) is established. On this basis, the system performance of SD-NCCS is analyzed and the average sensitivities of reference are used in utility function to evaluate the effects of network-induced delays. Moreover, the network pricing mechanism and game theory are introduced and a dynamic bandwidth resource allocation model of the overall control system for optimal performance is obtained. Thus, the problem of the network resource allocation of the SD-NCCS has been converted to be the problem of solving the Nash equilibrium point under the non-cooperative game model. Furthermore, the Nash equilibrium solution under this frame is obtained using the estimation distributed algorithm (EDA). Finally, a simple example is included to illustrate the performance of our scheme.
I. INTRODUCTION
The networked control system (NCS) is a distributed control system where the control loops are closed via a communication network. Compared with the traditional point-topoint control system, the most significant feature of NCS is that the sensors, controllers, and actuators within are interconnected through some communication network instead of direct point-to-point connections [1]. As a result, using NCS may increase system flexibility, reduce system wiring, break the space limitation, which can realize remote monitoring and control in complex environments. Software-Defined Network (SDN) is recognized as the next-generation network paradigm [2], which decouples the network control plane from the data forwarding plane to a logically centralized controller, enabling rapid network technology innovation for multiple users. Under the SDN architecture, it is feasible to provide flexible and efficient service slicing solutions for different QoS requirements of nodes at the application layer by utilizing the characteristics of OpenFlow (OF) switches and Network Virtualization (NV).
The associate editor coordinating the review of this manuscript and approving it for publication was Qiong Wu.
Due to the limited communication bandwidth of the network itself, many problems such as delay, packet loss and jitter of data transmission in the control system will affect the control performance of NCS [3]- [5]. Without additional network resources, how to allocate and utilize bandwidth reasonably in limited network resources, so that the entire control system has an optimal control performance is a problem worth studying.
Notably, most of the aforementioned research is focused on the control strategy to reduce the impact of delay, packet loss, and packet error on control performance. However, studying NCS from the perspective of the quality of control (QoC) ignores the complex dynamic behavior of the network and is difficult to apply to complex real systems. The integrated design strategy of both the QoC and the quality of service (QoS) has attracted a growing number of researchers' attention. The collaborative design of control and scheduling can be divided into two categories. One is a real-time calculation method using real-time feedback theory, see [6]- [7]. In [8] a collaborative design method combining adaptive controller and feedback scheduling strategy is proposed, which overcomes some limitations of application and execution platform. The other is to maximize the performance of the control system in collaborative design and see [9]- [11] for a partial list of references. For example, in [12] joint optimization of information transmission and state estimation developed to reduce the influence of network inducing factors on the convergence and accuracy of estimation. However, a key research issue in the NCS over SDN is the lack of dynamic scheduling mechanisms for network resources [13].
Motivated by the above analyses and under the SDN framework, from the perspective of collaborative design of control and scheduling, a software-defined networked collaborative control system is proposed. Furthermore, under the framework of SD-NCCS, the non-cooperative game model and network pricing mechanism are introduced, and a dynamic bandwidth scheduling strategy suitable for SD-NCCS is proposed. The goal is to find a flexible bandwidth allocation strategy based on Nash equilibrium and network pricing. Thus, the dynamic performance requirements of individuals and the whole are simultaneously satisfied. The network pricing mechanism is implemented in a centralized control mode in the network controller, and the overall performance of the control system is evaluated by the cumulative amount of the output error of each control loop. Also, considering the complexity of the calculation of the exact solution of the network bandwidth scheduling problem in practical operation, this paper gives an EDA-based Nash equilibrium point solution method.
II. PROBLEM FORMULATION A. SYSTEM STRUCTURE
Envisioning the advantages of a software-defined network, a general architecture of SD-NCCS is proposed as shown in Figure 1. First, the data plane and the control plane are decoupled. The data plane is mainly composed of a set of switches that support the OpenFlow protocol (referred to as the OF Switch). In addition to the packet matching and forwarding tasks according to the flow table, the OF switch provides a secure interface to the control plane according to the OpenFlow protocol for convenient control. The control plane queries and manages the flow table in the switch. Second, the control plane consists of a controller that supporting the OpenFlow protocol (referred to as the OF Controller) or a network operating system and a set of applications on top of it. Through the OpenFlow channel, the control plane can acquire the state of the data plane and program the data plane, and thereby achieving optimal management of network resources from a global perspective. The part responsible for bandwidth scheduling and allocation above the OF Controller in the control plane is called the policy center (one kind of control APPs with a bandwidth allocation algorithm). Also, the network control subsystem is no longer a closed-loop control of the straight-through structure, but an open-loop control using a hierarchical structure. Each control loop issues a control command by the main controller, and the execution agent receives the command and controls the output.
It is assumed that the SD-NCSS has M hierarchical network control loop subsystems, and each control loop subsystem includes a main controller and an execution agent. The execution agent includes a local controller, a sensor, and an actuator. The main controller sends a control command to the execution agent through the network, and the local controller in the execution agent converts the control device into a control signal according to the received command, and the sensor feeds the output action state information to the local controller. Both the main controller and the execution agent are time-driven and have the same sampling period h. The main controller sends bandwidth requirements to the policy center through the OF switch and the OF controller according to the task information, and the policy center integrates the bandwidth requirements of each subsystem to perform bandwidth allocation. A complete bandwidth scheduling operation cycle is called T O , as shown in Figure 2. First, during the bandwidth polling period T P , each control loop controller submits a bandwidth requirement to the center. Then, in the allocation period T A , the center calculates and allocates the bandwidth according to the collected information, that is, updates the Flow Table in the corresponding OF Switch. Finally, in the remaining data transmission period T T , each control loop alternately transmits data according to the allocated time slice until the beginning of a new cycle. Through the state information feedback and bandwidth resource reallocation of each new cycle, it can adapt to the changes in data service requirements of each control ring. Therefore, in this layered software-defined network collaborative control system, flexible scheduling strategies can be conveniently implemented.
B. PERFORMANCE ANALYSIS
Under the premise of ensuring the stability of each control loop, finding an optimal bandwidth allocation scheme is the problem to be solved in this paper. Therefore, it is necessary to further analyze the impact of time delays on control performance under the assumption that there is no packet loss and disorder.
First, assume that during a data transmission period T T , the main controller of the control loop i will periodically send a reference control signal r (t) to the corresponding execution VOLUME 8, 2020 agent with h as the sampling period, as shown in Figure 3. The total number of reference transmissions during T T is , the control reference signal is being sent to the corresponding execution agent on the main controller side, and the latest reference control signal at this time is recorded as r (n) t j k . However, at the same time, the reference control signal actually received by the actuator agent side is a previous reference control signal, denoted as Based on the above analysis, the average sensitivity of the control output to the reference control signal due to network delay is defined as follows [14]: In practical engineering applications, the average sensitivity of the reference control signal can be calculated in advance or determined by simulation experiments according to the control requirements and then obtained by table lookup.
Finally, the designed utility function must be an increasing function of the bandwidth x, which is also affected by the average sensitivity of the reference control signal. In addition, to satisfy the existence of the Nash equilibrium, it must also be quasi-concave and continuously differentiable. Based on the above conditions, the utility function of the control loop i designed in this paper during a data transmission period t ∈ t 0 k , t 0 k+1 is as follows: is the average sensitivity of the reference control signal change of the control loop i.
The optimization goal is to maximize the total utility of all control loops in the SD-NCCS as follows: where ω R ∈ (0, 1) is a retention scale factor, which was introduced to retain part of the bandwidth for unexpected overload conditions. However, simply using J as the optimization objective does not consider that each control loop should pay a corresponding price for bandwidth resources. Therefore, such bandwidth resource allocation without price mechanism constraints cannot prevent individual control loops from maximizing their private utility due to private interests. All bandwidth resources may be only assigned to one control loop and cause the overall system to crash.
III. MAIN RESULT A. GAME MODELING BASED ON NETWORK PRICING
The game model based on network pricing mechanism is discussed in this section. Applying economic model and game theory to the field of computer networks becomes of high research. Utilizing a game-theoretical framework with the pricing mechanism is an effective way of resource allocation of arbitration. Especially in the field of wireless access networks. For example, in [15] a utility-based power control via convex pricing scheme is proposed to obtain an efficient power allocation in the uplink of CDMA wireless networks.
In [16] a cooperative distributed interference pricing algorithm is proposed for power control in heterogeneous wireless networks. In [17] a Stackelberg game is formulated to study the price-based bandwidth allocation in multi-user Bluetooth low energy (BLE)-backscatter communication. In [18] a hierarchical auction-based mechanism is designed to reduce both global communication and unnecessary repeated computation in ad hoc networks. The pricing strategy mainly includes two modes: Non-usage-Based Pricing (NBP) and Usage-Based Pricing (UBP) [19]. Although NBP is simple in pricing and convenient to operate, it cannot use price leverage to adjust user resource allocation, which affects resource utilization and easily leads to network congestion. However, the unit bandwidth price of UBP can be tracked and changed according to the consumption of network resources. This dynamic pricing mechanism aims at maximizing the overall satisfaction of users and can achieve high utilization and fairness of network resources. Every control loop wants to maximize its final revenue under the framework of the SD-NCCS. In general, the revenue of a control loop depends on the pricing strategy of other control loops. Given the pricing strategy of other control loops, if no control loop has the motivation to choose other pricing strategies (the utility of this strategy is already the max), then no control loop will break this equilibrium. This equilibrium is called the Nash equilibrium (NE) [20]. On the one hand, it needs to provide services for each control loop to improve its QoC. On the other hand, it needs to balance the bandwidth requirements of each control loop so that the QoC of each control loop can be guaranteed. Therefore, for the SD-NCCS with limited bandwidth resources, the optimal resource scheduling strategy that takes into account the best performance of individuals and the whole is to allocate bandwidth at the NE.
The OF controller plays the role of price maker and each control loop mentioned above is referred to as a user. Each user does not know the existence of other users and their current status. Each one submits its bidding strategy according to its QoC need, and then pays the corresponding amount for the actual bandwidth allocated. The entire bidding process is as follows: Step 1: Each user has the same amount of funds g in the initial state of each round of bidding.
Step 2: Each user submits a bidding y i (y i ∈ 0, g ).
Step 3: The price maker will develop a bandwidth allocation scheme based on the bidding and allocates bandwidth resources to each user.
Step 4: Each user pays the amount of bandwidth allocated. The amount y i that user i needs to pay can be calculated by the following formula.
where λ is the unit bandwidth price, and the utility function of each user is as follows: It can be seen that the benefit of a user is determined by the corresponding QoC obtained and the payment under the given QoS. Each user independently makes its bidding strategy based on its earnings. Each user will not offer the high price arbitrarily, because too much high bidding will only lower its final income. It should be noted that the amount, the user ultimately pays is based on the bandwidth they have obtained, which may not match the initial offer.
The Nash equilibrium based on and non-cooperative game model can be described as follows: A non-cooperative game model with M users based on the network pricing can be described as i is the optimal strategy for user i when other users choose is the policy space of user i, and S i (·) (shown in equation (6)) is the utility function set of the user i. Then, the bidding strategy x * = x * 1 , . . . , x * i , . . . , x * M is a Nash equilibrium based on the network pricing and non-cooperative game model. However, each time a user submits status information, it is not practically a submission of a bidding, but the average sensitivityP i n,o (t k ) for a new task. The price maker runs the bandwidth allocation program according to theP i n,o (t k ) combined with the game model to calculate the Nash equilibrium point, and performs actual scheduling on each control loop.
The following theorem will be used to prove the existence and uniqueness of Nash equilibrium for MG.
Theorem 1: A Nash equilibrium exists in game NG = , P j , u j (·) , if for all j ∈ : (1) The policy space P j is a non-empty compact convex subset of the Euclidean space R n ; (2) Utility function u j (p) is continuous on p and concave on p j ; Proof: Please refer to [21] for details of the certificate. Theorem 2: A Nash equilibrium exists in SD-NCCS, Proof: This theorem can be proved by verifying that MG satisfies the two conditions of Theorem 1. Assuming that x min i ≤ x max i is always true, it is clear that the first condition is met. It remains to prove that the utility function S i (x) is quasi-concave on x i for all i.
The first-order differential of utility function S i (x) is: The second-order differential of utility function S i (x) is: Obviously, the utility function S i (x) is continuous and differentiable on x and concave on x i . A concave function S i (x) is also quasi concave, so the second condition is also satisfied, and the proof is completed.
Theorem 3: The SD-NCCS has a unique Nash equilibrium. Proof: The proof of the uniqueness of Nash equilibrium is similar to Theorem 2. The specific proof process can be found in the reference [22]. To ensure the simplicity of the text, it will not be repeated here. So far, we have converted the SD-NCCS network bandwidth allocation problem into a Nash equilibrium calculation problem based on the network pricing non-cooperative game model.
B. EDA ALGORITHM
Envisioning the Nash equilibrium point under the network pricing non-cooperative game model is a nonlinear optimization problem with multiple constraints. The traditional numerical calculation method is difficult to solve the Nash equilibrium, so this paper adopts the EDA-based optimization algorithm for calculation. Furthermore, the implementation steps are as follows [23]: Step 1: Initialization: randomly generate a valid initial population B 0 meeting the bandwidth constraint.
Step 2: Select: select the first half of the best gain candidates from the parents.
Step 3: Update: reconstruct a probability model based on Gaussian distribution from the obtained dominant group.
Step 4: Sampling: a new generation of populations is sampled by a new probability model. Go to Step 2 until the termination criterion is met.
IV. EXPERIMENT AND RESULT ANALYSIS A. SIMULATION SETUP
In this section, a simple example is given to illustrate the effectiveness of the proposed method. The topology of the SD-NCCS test platform is shown in Figure 4, including a controller as a policy center, a switch supporting the Open-Flow protocol, and three control loops. Each control loop consists of the main controller and a simulation execution agent. The simulation test platform includes three networked dc-motor subsystems with the same control parameters. The state-space description of the subsystem can be expressed by (10) [24], and the corresponding motor parameters are shown in Table 1. and The reference control signal inputs of the three control loops during three transmission periods are as shown in Figure 5, and the corresponding sensitivity of the control signals is as shown in Table 2. Substituting each group of data into equation (3) gives the utility function of the corresponding control loop at each stage. The remaining parameters are set as follows: ω R = 0.1, T O = 0.5 s, T P = 0.03 s, and the remaining time is T T . Among them, the bandwidth allocation period T A is very small relative to the polling period T P and can be ignored. The evaluation of SD-NCCS system performance can be jointly described by all control loop subsystems. The performance of each control loop subsystem can be measured by the integrated absolute error (IAE) [25]. The IAE for the control loop i can be expressed as It can also be directly expressed by controlling the output instantaneous error, and (12) is corrected to: where t d = d t, t d ∈ t j , t j+1 is the sampling time of the output signal measurement, d is the index of the sampling point, and t is the sampling interval. D represents the total number of sampling points and can be set to D = 100. y nom,j (t d ) is the output of the control loop i measured at t d without time delay. y act,j (t d ) is the output of the control loop i measured at the t d after the scheduling policy is adopted in the case of time delay.
B. RESULT ANALYSIS
The initial population of the Network Pricing-based Bandwidth Allocation (NPBA) method is 100. After 10 generations of evolution, the overall fitness is quickly converged and stabilized, and reaches the Nash equilibrium. Figure 6. The detailed numerical results are shown in Table 3 and IV.
As shown in Figure 6, generally speaking for independent subsystems, the less bandwidth allocation, the worse the control performance will be. Since the EBA uses the average bandwidth method, each control loop maintains the same bandwidth allocation for different transmission periods, so there is a case of excess bandwidth in a certain period. Loop 2 during period 2 for example, the average sensitivity of the loop 2 is 1.00, and only a small amount of bandwidth is required to maintain the stable output of this subsystem. However, the result of a large amount of bandwidth wasted VOLUME 8, 2020 by this control loop is the lack of bandwidth of other control loops, which ultimately leads to a decline in overall control performance. In contrast, although the individual subsystems using NPBA are inferior to the corresponding subsystems using EBA in a certain transmission period, such as the first transmission period, the IAE of the control loop 1 and 2 using NPBA are higher than EBA. However, the overall IAE of EBA is higher than that of NPBA, indicating that the SD-NCCS using the proposed strategy provides satisfactory performance.
V. CONCLUSION
This paper presents a software-defined networked collaborative control system from the perspective of collaborative design of control and scheduling. Under the framework of the SD-NCCS, a dynamic bandwidth scheduling strategy based on the game theory and network pricing mechanism is proposed, which transforms the network resource allocation problem into the Nash equilibrium solving problem. The biggest difference from the existing research results is that the proposed method forces the control subsystems to work at the Nash equilibrium dynamically, and the overall performance of the SD-NCCS with limited resources is improved. The simulation results demonstrate the effectiveness and availability of the proposed method. In addition, as the control plane and data plane are separated in SD-NCCS, a centralized algorithm for reaching the NE can be handed over to the SD-NCCS controller, which can be realized by an application on it. Therefore, in terms of complexity, comparing to the decentralized methods aforementioned, it is simpler and more operational. As it comes to the cost, more spending may be saved supposedly. At last, further study and implementation of this research work in a realistic software-defined networking environment is practical importance and is part of our current and future work. He is currently a Chief Technology Officer with Zhejiang Yi Duan Machinery Company. His research interests include intelligent CNC metal forming machine, metal forming, and wheel equipment. VOLUME 8, 2020 | 5,220.6 | 2020-01-01T00:00:00.000 | [
"Computer Science"
] |
Exploring the Impact of Digital Inclusive Finance on Agricultural Carbon Emission Performance in China
This paper attempts to reveal the impact and mechanisms of digital inclusive finance (DIF) on agricultural carbon emission performance (ACEP). Specifically, based on the provincial panel data in China from 2011 to 2020, a super slacks-based measure (Super SBM) model is applied to measure ACEP. The panel regression model and spatial regression model are used to empirically analyze the impact of DIF on ACEP and its mechanism. The results show that: (1) during the study period, China’s ACEP exhibited a continuous growth trend, and began to accelerate after 2017. The high-value agglomeration areas of ACEP shifted from the Huang-Huai-Hai plain and the Pearl River Delta to the coastal regions and the Yellow River basin, the provincial differences displayed an increasing trend from 2011 to 2020. (2) DIF was found to have a significant positive impact on ACEP. The main manifestation is that the development of the coverage breadth and depth of use of DIF helps to improve the ACEP. (3) The positive impact of DIF on ACEP had a significant spatial spillover effect, that is, it had a positive effect on the improvement of ACEP in the surrounding provinces. These empirical results can help policymakers better understand the contribution of DIF to low-carbon agriculture, and provide them with valuable information for the formulation of supportive policies.
Introduction
As the Gloabl-warming effect intensifies, accelerating low-carbon development has become an important development strategy for various countries around the world [1]. In 2020, Chinese government formally set out strategic goals for peaking carbon emissions by 2030 and achieving carbon neutrality by 2060. This ambitious goal, which can significantly slow global warming, drives China's development toward a low-carbon economy [2]. In China, agriculture is a major source of carbon emissions, accounting for 17% of the country's total carbon emissions, significantly higher than the global average of 11% [3]. Promoting carbon emission reduction in agriculture is not only an important part of China's "dual carbon" goal, but also an indispensable content of accelerating the construction of agricultural ecological civilization. Therefore, To reduce carbon emissions, one of China's main approach is to improve the agricultural carbon emission performance (ACEP), which can reflect the agricultural factor productivity resourced from multiple factors such as agricultural material input, human consumption and economic develop [4,5]. Finance plays a key role in the resource allocation of agricultural production, and thus has a significant impact on the productivity of the agricultural sector [6,7]. Due to the high China's monumental carbon reduction goals, it is important to explore the impact of DIF on ACEP and propose some corresponding suggestions. Existing studies have pointed out that DIF has disrupted traditional finance in technological investment, market transactions, and environmental performance [11,28]. However, few studies have been conducted to explore the relationship between DIF and ACEP. Furthermore, due to the spatial agglomeration and regional differences of agricultural resources and production conditions, agricultural production and its carbon emissions may be affected not only by internal elements but also by surrounding factors [29]. Therefore, spatial autocorrelation should be considered into the impact of DIF on ACEP [34]. Traditional analysis methods, such as liner regression model, logistic regression model, and correlation analysis, ignore the spatial spillover effect of spatial elements [20,29]. This limitation may result in biased outcomes of global regression models [35]. Spatial regression models, including spatial lag model (SLM), spatial error model (SEM), and spatial Durbin model (SDM), can be used to accurately explore the spatial spillover effect of independent variables on dependent variables by considering the spatial autocorrelation [36]. Therefore, this method can provide more accurate and useful information regarding the impact of DIF on ACEP, which can help to better understand the relationship between financial development and agricultural carbon reduction in China.
Given the shortcomings of the existing literature, this paper attempts to examine whether DIF has a positive impact on ACEP and whether there is a spatial spillover effect in this impact. Specifically, based on 2011-2020 panel data from China, this paper applies a super slacks-based measure (Super SBM) model to measure ACEP from the perspective of the total factor productivity. Following this, a panel data model and a spatial regression model are employed to empirically analyze the impact of DIF on ACEP and its mechanism in China. The two key contributions that this paper offers in comparison to the previous literature can be summarized as follows. First, this paper fills a gap in the literature. Few studies have examined the impact of DIF on ACEP, which can increase awareness of the importance of DIF and its application in agriculture. Second, it takes spatial spillover effect into account and examines the spatial spillover effect of DIF on ACEP in the surrounding provinces. With consideration of the reality of China's agricultural development, this paper selects suitable input-output indicators and applies the Super SBM model to accurately measure and analyze the evolution trend of ACEP in China. It is therefore of significance for the scientific formulation of agricultural carbon emission reduction policies in China. Following the introduction, Section 2 will introduce the methods and data. Section 3 reports the empirical results and discussion, and Section 4 summarizes the conclusion and implications of this paper.
Theoretical Framework
As a new financial model, DIF can influence ACEP as follows. First, the development of DIF widens access to finance and reduces financing costs. DIF makes use of digital technologies to help farmers understand the role of financial products, stimulating farmers' willingness to use financial services to affect the efficiency of agricultural green production [37]. Meanwhile, the expanding funding resources, collected from scattered financial resources at a lower cost, can provide more financing support for the development of eco-agriculture, circular agriculture, and smart agriculture [31]. Second, DIF can apply big data and cloud computing technology to produce a powerful function of matching capital supply and demand [38]. It can affect ACEP by guiding financial resources to green agricultural activities in key areas such as agricultural equipment, pollution prevention, and the cultivation of new subjects [31]. Third, by dispersing risk in a wide range, DIF can support agricultural technology innovation to affect ACEP. Technological innovation usually faces the risk that it cannot be implemented in the short term [34]. Owning to the spread of information technology to residents in remote areas, many scattered investors are attracted to the capital market. The considerable risks of agricultural technology innovation are effectively dispersed to a wide range, and ACEP may be improved through technology innovation ( Figure 1). agricultural enterprises, and changing resident' consumption patterns, which will attract more capital, enterprises and talents from neighboring areas to the local area [39]. On the other hand, advanced digital technology and financial services would also spillover to adjacent areas through knowledge and talent flow, the rural economic development in adjacent areas would be significantly affected, thereby influencing agricultural carbon emissions [40]. based on the above analysis, this study puts forward two hypothesis: Hypothesis 1. The development of DIF will improve local ACEP. Hypothesis 2. The development of local DIF has significant spatial spillover effects.
Calculation Method of ACEP
In the agricultural production process, inputs such as land, labor, capital, and technology, produce not only the agricultural products necessary for humans but also the carbon emission, which is called the undesired output. The SBM model, first proposed by Tone (2001), was more realistic as it takes into account the undesired output of the production process [18]. This model thus has been widely used to measure carbon performance, eco-efficiency, and energy efficiency [19,41]. Compared to traditional DEA models, the SBM model considers the slack variables and the efficiency in the presence of undesired outputs. Based on this, Tone (2002) then created a Super SBM model, which allowed the maximum value of the result to be greater than 1, thus making it possible to facilitate comparative rankings [42]. The ACEP can be measured by the following formula, which is as follows: With the increase in the scale of cross-regional flow of resources and the intensification of interregional competition for technological innovation resources, the development of local DIF would influence the surrounding ACEP. DIF can promote rural economic development by creating more employment opportunities, reducing financing costs for agricultural enterprises, and changing resident' consumption patterns, which will attract more capital, enterprises and talents from neighboring areas to the local area [39]. On the other hand, advanced digital technology and financial services would also spillover to adjacent areas through knowledge and talent flow, the rural economic development in adjacent areas would be significantly affected, thereby influencing agricultural carbon emissions [40]. based on the above analysis, this study puts forward two hypothesis: Hypothesis 1. The development of DIF will improve local ACEP.
Hypothesis 2.
The development of local DIF has significant spatial spillover effects.
Calculation Method of ACEP
In the agricultural production process, inputs such as land, labor, capital, and technology, produce not only the agricultural products necessary for humans but also the carbon emission, which is called the undesired output. The SBM model, first proposed by Tone (2001), was more realistic as it takes into account the undesired output of the production process [18]. This model thus has been widely used to measure carbon performance, ecoefficiency, and energy efficiency [19,41]. Compared to traditional DEA models, the SBM model considers the slack variables and the efficiency in the presence of undesired outputs. Based on this, Tone (2002) then created a Super SBM model, which allowed the maximum value of the result to be greater than 1, thus making it possible to facilitate comparative rankings [42]. The ACEP can be measured by the following formula, which is as follows: where p is the super efficiency value of the valued agricultural carbon emission, whose value can be larger than 1; m, s 1 , and s 2 represent the number of input indicators, the number of desired output indicators, and the number of undesired output indicators, respectively; x i , y g , and y j are the value of inputs, desired output, and undesired output, respectively; x i , y g r and y b j represent the mean value of input, desired output, an undesired output, respectively; λ is the weight vectors; S − , S g and S b are slack vectors corresponding to the abundance of inputs, scarcity of desirable output, and excess of undesirable output, respectively.
The estimated decision units are completely efficient when p > 1; otherwise, it is inefficient. In this paper, the expected output of agriculture was expressed by the agricultural output value (billion dollars) and the non-desired output was represented by the agricultural carbon emissions (million tons). Carbon emission is the major global climate change contributor and can reflect various pollutants in agricultural production [43]. According to the literature [3,41], agricultural inputs in this paper include land area, land force, agricultural machinery power, chemical fertilizer, pesticide, agricultural film, and irrigation (Table 1).
Agricultural Carbon Emission Measurement
Agricultural carbon emissions are derived from inputs and outputs in the agricultural production process. According to IPCC (2007) and Jiang et al. (2019), four main sources of these emissions are as follows [43,44]: The first is the use of agricultural inputs such as pesticides, chemical fertilizers, and agricultural film. The second is tillage, which results in a large amount of organic carbon entering the air. The third is the energy consumed in the process of agricultural irrigation. The fourth is the feeding of main livestock. The measurement formula of agricultural carbon emissions is as follows: where ACE is the total agricultural carbon emissions, AC i is the emissions for the i-th agricultural carbon source, S i is the amount of the i-th agricultural carbon source, and is the emission coefficient of the i-th agricultural carbon source. The emission coefficients of various agricultural carbon sources are described in Table 2. According to the primary hypotheses put forward in this study, a panel regression model is used to quantify the impact of DIF on ACEP. Panel regression data refers to pool data collected from both cross-sections and time. Supported by large samples, panel data provides a greater level of insight into the dynamic behavior of individuals. Subsequently, it overcomes the problem of absent variables, thus improving the accuracy of results [46]. The specific model is as follows: where i is the provincial administrative unit, t represents time, β is the estimated parameter, λ represents individual effects, η represents time effects, ε represents the random disturbance term of normal distribution. lnacep i,t is the logarithm of ACEP; lndif i,t is the logarithm of digital inclusive finance indicators; and lnctl i,t is the logarithm of control variables.
Spatial Autocorrelation Analysis
This study attempts to employ the Moran's I index to test whether the ACEP among the provinces has spatial effects. Moran's I index includes the Global Moran's I iindex and Local Indicator of Spatial Association (LISA) [47]. The specific formula for Global Moran's I index is expressed as follows: where I is the Global Moran's index, n is the number of spatial units, w ij is the geographic distance weight matrix, m i and m j are the values of ACEP in unit i and j, respectively, m is the average value of ACEP for the entire region. The range of Moran's I is from −1 to 1. The larger the value of Moran's I, the higher the spatial correlation between units. When the value of Moran's I index is close to −1, indicating that there is a significant negative spatial correlation between units. If the value of Moran's I is close to zero, indicating that the value of ACEP is random distributed. Additionally, to identify the local spatial agglomeration of ACEP, the LISA statistic is expressed as follows: where I i is the local Moran's I index, reflecting the spatial correlation degree between unit i and adjacent unit j. The definitions of the other variables are the same as those of Formula (4). When the value of I i is greater than 0, indicating that the value of ACEP in unit i is similar to adjacent unit j. When the value of I i is less than 0, indicating that the value of ACEP in unit i is different from adjacent unit j.
Spatial Regression Model
Traditional panel regression model ignores the possible spatial dependence of variables. To exploring the possible spatial spillover effect of DIF on ACEP, three widely used patial regression models, including spatial lag model (SLM) (Fornula (6)), spatial error model (SEM) (Fornula (7)), and spatial Durbin model (SDM) (Fornula (8)) are constructed in this study [35]. These models are based on different types of spatial effects. The specific formulas are as follows: ln ln where w ij is the geographic distance weight matrix, ρ represents the spatial autoregression coefficient, which can measure the spatial spillover effect of the explained variable in surrounding units on this unit, θ and w represent the coefficients of the spatial lag variables respectively, reflecting the spatial spillover effect of the explanatory variables in surrounding units on the explained variable, respectively. δ is the coeffcient of spatial component errors. The definitions of the other variables are the same as those of Formula (3). According to the litature [48], several test including the Lagrange multiplier (LM) and robust LM tests are conducted to choose the suitable algorithm for spatial regression. According to Lesage and Pace (2009), the spatial regression model partial differential method can be used to decompose the effect of the coefficients [49]. Thus, the direct, indirect, and total effect estimates are employed to interpret the model. In this study, direct effect refers to the impact of the development of DIF on local ACEP, including the spatial feedback effect. The indirect effect refers to the impact of the development of DIF on ACEP in the adjacent areas, which is considered as the spatial spillover effect of DIF on ACEP. The specific process formula can refer to Zhong et al. (2022) [34].
Explained Variable and Core Explanatory Variable
This paper uses the measurement result of ACEP as the explained variable. Meanwhile, DIF is chosen as the core explaining variable. In this paper, the DIF index, obtained from "The Peking University Digital Financial Inclusion Index of China", is used as the proxy variable. The index is synthesized based on financial service data provided by Ant Financial at the provincial, city, and county levels in China [50]. The digital inclusive financial index includes 3 sub-dimensions: the coverage breadth, the depth of use, and the digitalization degree. The coverage breadth mainly includes three indicators: the number of Alipay accounts, the proportion of Alipay account-bound users, and the number of Alipay accountbound cards, which reflect numbers Financial coverage; the depth of use covers payment, credit, insurance, investment, and monetary funds and other services, reflecting the increase in the variety and availability of digital financial instruments; the digitalization degree includes mobile, affordability and facilitation, which can reflect the degree of integration and inclusiveness of digital finance and digital technology. The index has been widely applied to the field of China's DIF [9,26], and has considerable representativeness and reliability. This paper uses these three sub-indicators as explanatory variables to further explore the impact of different dimensions of digital finance on ACEP.
Control Variables
Referring to the previous research, some other influencing factors were selected as control variables. According to Wu et al. (2020) and Gao et al. (2021) [51,52], the value of GDP per capita and and the urbanization rate of the resident population are used to reflect the regional socio-economic development level. The improvement of the industrialization level can provide material conditions and product markets for agriculture, as well as technical support, promoting agricultural economic development. Thus the ratio of output value of secondary industry to GDP is selected to measure the level of regional industrial development. The higher the opening-up level, the easier it is to learn from advanced foreign technology and management experience, which can promote the green development of agriculture [34]. Therefore, the proportion of foreign trade volume of agricultural products in total agricultural output value is employed to indicate the level of opening up. Considering that agricultural disasters lead to higher losses of agricultural inputs, the proportion of crop disaster areas in crop sown area is included in the control variables. Additionally, rural economic development is closely related to agricultural production activities. The increase in farmers' income results in changes in the structure and model of agricultural production. Thus the per capita disposable income of rural households is used to measure.
Data Sources and Descriptive Statistics
The research sample in this study covers 31 provinces and cities in China. The availability of data in the balanced provincial panel data from 2011 to 2020 is broad and given the validity of the data, it is employed for empirical research. In the sample, Eastern Table 3 shows the descriptive results of the variables calculated by SPSS. 22.0. Figure 2 shows the average level of ACEP in the country and three regions (east, central and west) between 2011 and 2020. China's ACEP exhibited a fluctuating growth trend from 0.32 in 2011 to 0.71 in 2020. In particular, the growth rate of ACEP further accelerated after 2017. The reason for this phenomenon is that, since 2015, the Chinese government has proposed an agricultural green development strategy, and attempted to achieve zero growth in chemical fertilizers and pesticides [53]. Meanwhile, local governments have formulated corresponding measures to protect the agricultural environment [54]. In different regions, the average ACEP in eastern, central, and western regions generally showed a trend of increasing, from 0.39, 0.31, and 0.30 in 2011 to 0.86, 0.50, and 0.79 in 2020, respectively. The ACEP in Eastern China was the highest, followed by Western China, and the lowest ACEP occurred in Central China. The box plot illustrates the discrete distributions of ACEP values in China's provinces from 2011 to 2020 (Figure 3). It can be seen that the average value of ACEP developed from a concentration of low values to a spread towards the two ends and a concentration of medium values, indicating that the regional differences in ACEP between provinces widened over the study period.
Spatial-Temporal Changes in ACEP
The spatial distribution of ACEP was generally high in the eastern and southern regions and low in the northern and western regions of China ( Figure 4). In 2011, 67% of provinces had ACEP values below 0.30, mainly located in western, northern, and southeastern China. The ACEP values of the other provinces were in the range of 0.30-0.60, and high-value areas were distributed in the Huang-Huai-Hai plain and the Pearl River Delta, indicating that there was a large potential for technological advancement and low carbon emission reduction in China's agricultural development process. Between 2011 and 2020, the vast majority of provinces showed a significant improvement in ACEP over the study period, especially between 2017 and 2020. In 2020, fourteen provinces in China had maintained efficient values (value > 1) for ACEP, with Hainan having the highest value (1.113). These provinces are mainly located in the coastal regions and the Yellow River basin. These are relatively developed regions or important major agricultural production areas, where local governments have access to land, labor, capital, and technology to promote agricultural production efficiency [52]. However, the central and northern provinces were found to have lower values of ACEP, with Jilin having the lowest value (0.23). Some of these provinces are large agricultural provinces, which indicates that these provinces still have input and large carbon emissions. The spatial distribution of ACEP was generally high in the eastern and southern regions and low in the northern and western regions of China (Figure 4). In 2011, 67% of provinces had ACEP values below 0.30, mainly located in western, northern, and southeastern China. The ACEP values of the other provinces were in the range of 0.30-0.60, and high-value areas were distributed in the Huang-Huai-Hai plain and the Pearl River Delta, indicating that there was a large potential for technological advancement and low carbon emission reduction in China's agricultural development process. Between 2011 and 2020, the vast majority of provinces showed a significant improvement in ACEP over the study period, especially between 2017 and 2020. In 2020, fourteen provinces in China had maintained efficient values (value > 1) for ACEP, with Hainan having the highest value (1.113). These provinces are mainly located in the coastal regions and the Yellow River basin. These are relatively developed regions or important major agricultural production areas, where local governments have access to land, labor, capital, and technology to promote agricultural production efficiency [52]. However, the central and northern provinces were
The Correlation between DIF and ACEP
As shown in Figure 5, the digital inclusive financial level in China showed rapid growth between 2011 and 2020, with its growth rate accelerating after 2013. Specifically, the digital inclusive financial index increased from 30.19 in 2011 to 339.01 in 2020, with an average annual growth rate of 98.02%. At the same time, ACEP also displayed a significant upward trend. Relying on the Internet, big data, cloud computing, and other advanced information technologies, DIF can broaden the coverage of traditional finance and effectively promote the financial accessibility of farmers and disadvantaged groups in remote areas [55]. This advantage helps to alleviate the financing difficulties for farmers and increase access to finance for green agricultural projects [7]. As a result, ACEP had an obvious growth trend consistent with the digital inclusive financial level.
The Correlation between DIF and ACEP
As shown in Figure 5, the digital inclusive financial level in China showed rapid growth between 2011 and 2020, with its growth rate accelerating after 2013. Specifically, the digital inclusive financial index increased from 30.19 in 2011 to 339.01 in 2020, with an average annual growth rate of 98.02%. At the same time, ACEP also displayed a significant upward trend. Relying on the Internet, big data, cloud computing, and other advanced information technologies, DIF can broaden the coverage of traditional finance and effectively promote the financial accessibility of farmers and disadvantaged groups in remote areas [55]. This advantage helps to alleviate the financing difficulties for farmers and increase access to finance for green agricultural projects [7]. As a result, ACEP had an obvious growth trend consistent with the digital inclusive financial level.
Basic Regression Analysis
Four different specifications of panel data regressions are used to measure DIF's impact on ACEP (Table 4). These include the ordinary least square (OLS), the pooled ordinary least squares (POOL), the fixed effect (FE), and the random effect (RE). The Hausman test indicates that the FE-effects model is more suitable for econometric analysis. According to Table 4, the coefficient of DIFI was positive and significant at the 5% level. The data shows that for every percentage point increase in the level of DIF, there was a corresponding 0.207 percentage point increase in the efficiency of agricultural carbon emissions. This indicates that, within China, DIF has a positive impact on ACEP. This result confirms the finding of He et al. (2019) [56], who indicated that digital inclusive financial development had a positive effect on agricltural green total factor productivity. The spread of DIF alleviates the financial constraints faced in rural areas. It can simplify the complex and bureaucratic business processes of traditional finance and promote innovation in sectors that have often lacked it. Over the past 10 years, the Chinese government has issued a series of documents that strongly advocate the spread of digital finance in agriculture and within rural areas. The documents emphasize the role green finance plays in promoting solutions to the emissions problem within agriculture [53]. Meanwhile, digital currency, third-party payment, and online loans are gaining popularity in rural China [57]. These policies and measures have enhanced the impact of DIF on agricultural carbon reduction.
Moving to consider the influences of control variables, the coefficient of PDC was -0.384, this was at the 5% level and therefore significant. This indicated that a 1% increase in the PDC would reduce ACEP by 0.384%. This regression result is consistent with the findings of Fang et al. (2021) [7]. The main reason for this finding is that losses due to natural disasters reduce yields and reduce the efficiency of agricultural carbon emissions. The coefficient of PIR was 0.621 at the 5% significance level, implying that PIR played a significant positive role in promoting ACEP. A likely cause is that as per capita income rises, farmers tend to upgrade their agricultural input, purchasing higher quality seeds. They also invest in improving agricultural infrastructures, such as irrigation and advanced machinery. These are all factors conducive to improving the efficiency of agricultural carbon emissions [58]. The coefficients of other variables, including PGDP, URP and RSI, were not statistically significant, indicating that these factors had no significant impact on ACEP.
Basic Regression Analysis
Four different specifications of panel data regressions are used to measure DIF's impact on ACEP (Table 4). These include the ordinary least square (OLS), the pooled ordinary least squares (POOL), the fixed effect (FE), and the random effect (RE). The Hausman test indicates that the FE-effects model is more suitable for econometric analysis. According to Table 4, the coefficient of DIFI was positive and significant at the 5% level. The data shows that for every percentage point increase in the level of DIF, there was a corresponding 0.207 percentage point increase in the efficiency of agricultural carbon emissions. This indicates that, within China, DIF has a positive impact on ACEP. This result confirms the finding of He et al. (2019) [56], who indicated that digital inclusive financial development had a positive effect on agricltural green total factor productivity. The spread of DIF alleviates the financial constraints faced in rural areas. It can simplify the complex and bureaucratic business processes of traditional finance and promote innovation in sectors that have often lacked it. Over the past 10 years, the Chinese government has issued a series of documents that strongly advocate the spread of digital finance in agriculture and within rural areas. The documents emphasize the role green finance plays in promoting solutions to the emissions problem within agriculture [53]. Meanwhile, digital currency, third-party payment, and online loans are gaining popularity in rural China [57]. These policies and measures have enhanced the impact of DIF on agricultural carbon reduction.
Moving to consider the influences of control variables, the coefficient of PDC was −0.384, this was at the 5% level and therefore significant. This indicated that a 1% increase in the PDC would reduce ACEP by 0.384%. This regression result is consistent with the findings of Fang et al. (2021) [7]. The main reason for this finding is that losses due to natural disasters reduce yields and reduce the efficiency of agricultural carbon emissions. The coefficient of PIR was 0.621 at the 5% significance level, implying that PIR played a significant positive role in promoting ACEP. A likely cause is that as per capita income rises, farmers tend to upgrade their agricultural input, purchasing higher quality seeds. They also invest in improving agricultural infrastructures, such as irrigation and advanced machinery. These are all factors conducive to improving the efficiency of agricultural carbon emissions [58]. The coefficients of other variables, including PGDP, URP and RSI, were not statistically significant, indicating that these factors had no significant impact on ACEP.
The Influences of Different Dimensions of DIF on ACEP
According to the optimal model obtained by the above analysis, this study uses the FE_model to further reveal the influence mechanism of DIF on ACEP. Table 5 presents the estimated results for the three digital inclusive financial indices. The results show that the coefficient of CB was positive at the 1% significance level, indicating that there was a positive relationship between financial coverage breadth and ACEP, and this relationship was significant. The findings show that for every one percentage point increase in the CB, ACEP would increase by 0.459%. The expansion of digital financial coverage breadth allows digital access to financial information whilst transactions can take place via the online platform. The spatial-temporal constraints of traditional financial institutions are thereby overcome. This is especially pertinent and extending the coverage of users to rural areas where access to traditional institutions is limited. On the other hand, the services of DIF target micro, small and rural enterprises, whose needs and particular circumstances are not serviced by traditional financial services. The solutions provided by DIF extend the coverage of users for financial services and alleviate financial exclusion in rural areas [59]. The coefficient of DU was 0.24 at a 5% significant level, indicating that the increased depth of use of DIF in China contributes to a more efficient agriculture and thereby reduces carbon emissions. DU reflects the variety and availability of digital financial instruments. Its improvement can increase the use frequency of financial products, this offers greater flexibility to those working in the industry and allows them to meet the challenges of reducing emissions. However, the results revealed that DD had no significant correlation with ACEP. This is likely due to the lack of modern digital integration within China's traditional financial institutions, resulting in high financial services costs and high thresholds. These factors prove to be a hindrance to the improvement of ACEP. (7) = 244.427 *** χ 2 (7) = 131.237 *** χ 2 (7) = 114.260 *** Notes: t statistics are in parentheses. * p < 0.1, ** p < 0.05, *** p < 0.01.
Spatial Autocorelation Analysis of ACEP
As shown in Table 6, the Global Moran's index for ACEP from 2011 to 2020 were all greater than 0, at p < 0.05, indicating that ACEP in China exhibited a significant positive spatial autocorrelation. This phenomenon may occur because agricultural production is determined by natural endowment. Adjacent regions have similar agricultural resources and production patterns, resulting in the spatial aglomeration of ACEP [22]. During 2011 and 2020, the Global Moran's I index for ACEP showed a decreasing trend, indicating that the spatial agglomeration of ACEP in China continued to weaken. This is mainly because that agricultural production has been deeply influenced by urbanization and industrialization during the study period. Numerous agricultural labor population and land resources flowed into urban areas, resulting in significant changes in regional agricultural production mode. Meanwile, owing to the improvement of agricultural technology, agricultural total factor efficiency in different regions has been increased, leading to the weakening of spatial aglomeration of ACEP [11]. Figure 6 exhibits the spatial agglomeration results of ACEP in 2011 and 2020. The high spatial agglomeration areas of ACEP in 2011 were mainly distributed in Huang-Huai-Hai Plain, including Hebei, Shandong, and Henan Province. These areas are the main agricultural producing areas in China. The low spatial agglomeration areas of ACEP in 2011 were concentrated in northern China, including Neimenggu, Gansu and Ningxia, which are the main pastoral regions in China. During 2011 and 2020, the number of provinces with high spatial agglomeration of ACEP decreased to 2, and transfered to China's eastern coastal areas, including Zhejiang and Fujian. This can be explained that these areas could lead the development of ACEP in surrounding areas through advanced agricultural technology cooperation and exchange [26]. Meanwhile, the low spatial agglomeration areas transferred to the notheastern China, including Jilin and Liaoning Province. These findings further confirmed the previous conclusion that China's ACEP displayed a significant positive spatial agglomeration. technology cooperation and exchange [26]. Meanwhile, the low spatial agglomeration areas transferred to the notheastern China, including Jilin and Liaoning Province. These findings further confirmed the previous conclusion that China's ACEP displayed a significant positive spatial agglomeration.
Selection of Spatial Model
To determine which spatial regression model is more suitable for the estimation. The tests of LM, Wald, LR, Hausman, and fixed effects are conducted. As shown in Table 7, the values of LM-LAG, Robust LM-LAG, LM-ERR, and Robust LM-ERR all passed the significant test, at p < 0.01, indicating that spatial regression models were more suitable for analyzing the impact of DIF on ACEP. Furthermore, the test results of Wald-SAR, Wald-SEM, LR-SAR, and LR-SEM also passed the significant test, indicating that SDM should be used to quantify the spatial spillover effect of DIF on ACEP [34]. In addition, the Hausman result displayed that fixed effect model was more suitable than random effect model. Therefore, the fixed effect SDM model was used to analyze the impact of DIF on ACEP by considering the spatial spillover effect in this study.
Analysis of SDM Results
The SDM results of the spatial effect of DIF on ACEP in China are shown in Table 8. The value of spatial autocorrelation coefficient ρ was significantly positive, indicating that ACEP was significantly affected by surrounding areas. It can be seen that the coefficient of DIFI had positive impacts on local ACEP, further verifying the role of the development of DIF in improving ACEP. Meanwhile, it shows that W*DIFI was significantly negative, indicating that the development of DIF could also improve ACEP in the surrounding provinces. This result is consistent with , who showed that the digital technology could reduce carbon emissions in the surrounding regions [22]. This is mainly because that the development of local DIF can promote the inter-provincial flow of agricul-
Selection of Spatial Model
To determine which spatial regression model is more suitable for the estimation. The tests of LM, Wald, LR, Hausman, and fixed effects are conducted. As shown in Table 7, the values of LM-LAG, Robust LM-LAG, LM-ERR, and Robust LM-ERR all passed the significant test, at p < 0.01, indicating that spatial regression models were more suitable for analyzing the impact of DIF on ACEP. Furthermore, the test results of Wald-SAR, Wald-SEM, LR-SAR, and LR-SEM also passed the significant test, indicating that SDM should be used to quantify the spatial spillover effect of DIF on ACEP [34]. In addition, the Hausman result displayed that fixed effect model was more suitable than random effect model. Therefore, the fixed effect SDM model was used to analyze the impact of DIF on ACEP by considering the spatial spillover effect in this study.
Analysis of SDM Results
The SDM results of the spatial effect of DIF on ACEP in China are shown in Table 8. The value of spatial autocorrelation coefficient ρ was significantly positive, indicating that ACEP was significantly affected by surrounding areas. It can be seen that the coefficient of DIFI had positive impacts on local ACEP, further verifying the role of the development of DIF in improving ACEP. Meanwhile, it shows that W*DIFI was significantly negative, indicating that the development of DIF could also improve ACEP in the surrounding provinces. This result is consistent with , who showed that the digital technology could reduce carbon emissions in the surrounding regions [22]. This is mainly because that the development of local DIF can promote the inter-provincial flow of agricultural green production technologies and green funds, thereby promoting the improvement of ACEP in surrounding areas [11]. Furthermore, the coefficients of W*RSI and W*PDC were negative, indicating that the secondary industry to GDP and the proportion of crop disaster areas in crop sown area in local areas had negative impacts on ACEP in the surrounding regions. Geographically adjacent areas are prone to the same natural disasters, so when the natural disaster area increases, agricultural production in adjacent areas will also be affected [34]. It also shows that the per capita disposable income of rural households had a positive impact on ACEP in the surrounding regions. Note: ** p < 0.05, *** p < 0.01. Table 9 displays the results of direct effect and indirect effect of DIF. It can been seen that the coefficients of DIFI were positive for both direct effect and indirect effect, indicating that the development of DIF had positive impacts on both the local and surrounding ACEP. The coefficient of DIF with direct effect was greater than that with indirect effect, indicating that the impact of DIF on local ACEP was greater than that on surrounding ACEP. In terms of the control variables, the coefficient of PDC for indirect effect was −0.182, which was larger than that for direct effect, indicating that the negative impact of PDC on local ACEP was greater than that on surrounding ACEP. The coefficient of PIR for direct effect was 0.329, which was larger than that for indirect effect, indicating that the positive impact of PIR on local ACEP was greater than that on surrounding ACEP.
Conclusions
Digital inclusive financial development has broken through the service boundaries of conventional finance and effectively matched the capital supply and demand [37]. It has become an important driver of low-carbon agricultural development. This paper calculates the agricultural carbon emissions and employs the Super SBM model to measure ACEP. A panel regression model and a spatial Durbin model are used to systematically examine the impact of DIF on ACEP. The main findings are as follows. (1) China's average ACEP increased from 0.32 in 2009 to 0.71 in 2020, displaying a fluctuating growth trend. The main concentration areas with high-value ACEP transformed from the Huang-Huai-Hai plain and Pearl River Delta to the coastal regions and the Yellow River basin, and the differences between provinces gradually increased. (2) The regression results indicate that DIF had a significant positive impact on ACEP. The coverage breadth and depth of use of DIF could significantly improve ACEP, but the digital degree was found to have no significant effect.
(3) The development of DIF can not only improve local ACEP, but also improve ACEP in the surrounding provinces through the spatial spillover effect.
Several of these important policy implications become apparent given these conclusions. First, the Chinese government should continue to support digital inclusive financial development in rural areas. Specifically, the construction of rural digital infrastructure, such as network base stations and home broadband, should be a push for greater access to smartphones, tablets, and computers. Both coverage and hardware need to be accessible and affordable. In conjunction with this, financial literacy in rural areas must be increased. By increasing knowledge of available financial services and moderately reducing the broadband cost for those most closely connected to the agriculture industry, the depth of use of DIF can be improved. Additionally, accelerating the deep integration of traditional finance and digital technology is also conducive to improving the digitization degree of DIF. Second, green development is the core of China's ecological civilization construction. To promote green development, it oughts to not only guide the inflow of various resource into the green devlopment field, but also reflect the ecological concept in the mode of production organization. This requires the government to give play to the role of financial policies and financial instruments in guiding and structural adjustment. Specifically, the government should redirect funds into agricultural technology innovation activities. The goal should be to promote technological progress and diffusion rather than scale expansion. More importantly, the existence of spatial spillover effect of DIF on ACEP indicates that the digital finance development policies formulated by local government will have significant impacts on the surrounding areas. Therefore, when formulating policies related to agricultural carbon emission reduction and digital financial development, local managers ought to start from the macro-control scale, then break administrative boundary barriers and strengthen regional cooperation. For example, the development of digital finance can formulate relevant planning policies by taking urban agglomerations as a whole, which can promote the agglomeration effect of digital finance industry and strengthen the collaborative governance of agricultural carbon emission reduction.
Although some progress has been made, this study has a few limitations. First, the data sample of this paper only includes China, a developing country. The development of DIF and its application in agricultural production activities in developing countries may be different from those in developed countries. Future research needs to add samples obtained from other regions, including Eastern Asia, Europe, and North America. Second, this paper conducts empirical analysis from a macro perspective but fails to do so from a micro perspective. Future research could obtain first-hand data on DIF at the rural householdscale through field surveys, and analyze the impact of DIF on low-carbon agriculture from the perspective of peasant households.
Data Availability Statement:
The data presented in this paper are available on request from the corresponding author. | 9,843.4 | 2022-09-01T00:00:00.000 | [
"Environmental Science",
"Economics",
"Agricultural and Food Sciences"
] |
Superradiance and black hole bomb in five-dimensional minimal ungauged supergravity
We examine the black hole bomb model which consists of a rotating black hole of five-dimenensional minimal ungauged supergravity and a reflecting mirror around it. For low-frequency scalar perturbations, we find solutions to the Klein-Gordon equation in the near-horizon and far regions of the black hole spacetime. To avoid solutions with logarithmic terms, we assume that the orbital quantum number $ l $ takes on nearly, but not exactly, integer values and perform the matching of these solutions in an intermediate region. This allows us to calculate analytically the frequency spectrum of quasinormal modes, taking the limits as $ l $ approaches even or odd integers separately. We find that all $ l $ modes of scalar perturbations undergo negative damping in the regime of superradiance, resulting in exponential growth of their amplitudes. Thus, the model under consideration would exhibit the superradiant instability, eventually behaving as a black hole bomb in five dimensions.
I. INTRODUCTION
The phenomenon of superradiance through which waves of certain frequencies are amplified when interacting with a medium has long been known in both classical and quantum non-gravitational systems. The quantum aspect of this phenomenon traces back to the so-called Klein paradox [1,2] whose subsequent resolution revealed the existence of superradiant boson (not fermion) states in the background of strong electromagnetic fields (see e.g. [3] and references therein). The superradiant effect also arises in many classical systems moving through a medium with the linear velocity that exceeds the phase velocity of waves under consideration. As early as 1934 it was known that the reflection of sound waves from the boundary of a medium, which moves with supersonic velocity, occurs with amplification [4]. Subsequently, examples of such an amplification were found in a number of cases; for instance, in the motion of carriers in an elastic piezoelectric substance [5] as well as in the motion of a conducting liquid in a resonator [6].
Zel'dovich first realized that the superradiant condition can be fulfilled in a rotational case as well [7]. Suggesting that for a wave of frequency ω and angular momentum m, the angular velocity Ω of a body can exceed the angular phase velocity ω/m of the wave, Ω > ω/m, he demonstrated the amplification of waves reflected from a rotating and conducting cylinder.
In addition, Zel'dovich put forward the idea that a semitransparent mirror surrounding the cylinder could provide exponential amplification of waves. He also anticipated that the phenomenon of superradiance and the process of exponential amplification of waves would occur in the field of a Kerr black hole. The black hole superradiance was independently predicted by Misner [8], who pointed out that certain modes of scalar waves scattered off the Kerr black hole undergo amplification. Possible applications of the superradiant mechanism were explored by Press and Teukolsky [9]. In particular, by locating a spherical mirror around a rotating black hole they pointed out that such a system would eventually develop a strong instability against exponentially growing modes in the superradiant regime, thus creating a black hole bomb.
The quantitative theory of superradiance for scalar, electromagnetic and gravitational waves in the Kerr metric was developed in classic papers by Starobinsky [10] and Starobinsky and Churilov [11] (see also [12,13]). The existence of superradiance is intimately related to the salient feature of the Kerr metric; the timelike Killing vector that defines the energy with respect to asymptotic observers becomes spacelike in the region located outside the horizon, called the ergoregion. This in turn entails the possibility of negative energy states within the ergoregion, underpinning the physical interpretation of the superradiant effect.
Scattering a wave off a rotating black hole may cause fluctuations of the negative energy states, resulting in the negative energy flux into the black hole [14]. As a consequence, the scattered wave becomes amplified, by conservation of energy. It should be noted that there is no superradiance for fermion modes in the Kerr metric, as shown by detailed calculations in [15,16].
The black hole superradiance on its own has only a conceptual significance, showing the possibility of the extraction of rotational energy from the black hole due to the wave mechanism. However, it has played a profound role in addressing the stability issues of rotating black holes in general relativity, against small external perturbations. Developments in this direction have revealed that rotating black holes are stable to massless scalar, electromagnetic and gravitational perturbations [12,13]. On the contrary, it appeared that small perturbations of a massive scalar field grow exponentially in the superradiant regime, creating the instability of the system, the black hole bomb effect [17][18][19]. The physical reason underlying this effect is that the motion of a massive particle around a rotating black hole may occur in stable circular orbits [20] (see also a recent paper [21]). Thus, to view the instability one can imagine a wave-packet of the massive scalar field moving in these orbits and forming "bound states" in the well of the effective potential of the motion. Though the potential barrier keeps the wave-packet bound states in the well from escaping to infinity, but from quantum-mechanical point of view they would tunnel through the barrier into the horizon. As a consequence, the bound states in the well become quasinormal with complex frequencies whose imaginary parts in the superradiant regime determine the growth rate of the wave-packet modes. It is clear that the runaway behavior of such modes between the potential well and the horizon would result in their continuous reamplification, thereby causing the instability.
Another realization of the black hole bomb effect occurs in anti-de Sitter (AdS) spacetimes. This is due to the fact that in the regime of superradiance, the timelike boundary of the AdS spacetime plays the role of a resonant cavity between the black hole and spatial infinity. In [22], it was argued that small rotating AdS black holes in five dimensions may exhibit the superradiant instability against external perturbations. The authors of works [23,24] were the first to develop these arguments further by using both analytical and numerical approaches. Elaborating on the black hole bomb effect of Press and Teukolsky in four dimensions, they pointed out that its realization crucially depends on the distance at which the mirror must be located. Thus, for the superradiant modes to be excited there exists a critical radius and below this radius the system is stable. These results allow one to clarify the instability of small Kerr-AdS black holes, as discussed in [24]. Continuing this line of investigation in five dimensions, the superradiant instability of small rotating charged AdS black holes was considered in [25]. Meanwhile, the case of arbitrarily higher dimensions for small Reissner-Nordström-AdS black holes has recently been studied in [26].
In particular, it was noted that for some values of the orbital quantum number, which can occur in odd spacetime dimensions, the analytical approach of [25] fails being responsible for the seeming absence of the superradiant instability for certain modes. A detailed numerical calculations have shown that the superradiant instability exists in all higher dimensions and with respect to all modes of scalar perturbations [26].
In this paper, we wish to embark on a further exploration of the superradiant instability for rotating black holes in five dimensions. We consider the black hole bomb model for scalar perturbations, which consists of a rotating black hole of five-dimenensional minimal ungauged supergravity and a reflecting mirror around it. In Sec. II we begin by discussing the defining properties of the spacetime metric for the black hole under consideration. Here we present remarkably simple formulas for the coordinate angular velocities of locally nonrotating observers. These formulas reveal the "bi-dragging" property of the black hole at large distances and reduce to its angular velocities as one approaches the horizon. Next, we introduce a corotating Killing vector field which is tangent to the null geodesics of the horizon and calculate the surface gravity and the electrostatic potential of the horizon. In Sec. III we discuss the separated radial and angular parts of the Klein-Gordon equation for a charged massless scalar field and derive the threshold inequality for superradiance. Focusing on low-frequency perturbations, in Sec. IV we find solutions to the radial wave equation by dividing the spacetime into the near-horizon and far regions. To avoid solutions with logarithmic terms, we then assume that the orbital quantum number l is an approximate integer and perform the matching of these solutions in an intermediate region. In Sec. V we calculate the frequency spectrum of quasinormal modes in the black hole-mirror system, taking the limits as l approaches even or odd integers separately. Here we show that in the regime of superradiance, the black hole-mirror system exhibits instability to all l modes of scalar perturbations. In Sec. VI we end up with a discussion of our results.
II. THE METRIC
The general solution to five-dimensional minimal gauged supergravity that describes charged and rotating black holes with two independent rotational symmetries was found by Chong, Cvetic, Lü and Pope (CCLP) [27]. In the case of ungauged supergravity (the vanishing cosmological constant) it is given by the metric where the metric functions are given by the parameters M and Q are related to the physical mass and electric charge of the black hole, whereas a and b are two independent rotation parameters. The metric determinant is given by It is straightforward to check that this metric and the two-form field F = dA, where is the potential one-form of the electromagnetic field supporting the metric, satisfy the equation of motions derived from the action of five-dimensional minimal ungauged supergravity The locations of the outer and inner horizons of the black hole are determined by the real roots of the equation ∆ = 0. Thus, we find that where r 2 + corresponds to the radius of the outer (the event) horizon, while r 2 − gives the radius of the inner Cauchy horizon. It follows that for the extremal horizon, r 2 + = r 2 − , there exist two simple relations between the parameters of the black hole, which are given by In the following we will also need the inverse components of metric (1), which are given by It is easy to see that the stationary and bi-azimuthal isometries of this metric are described by three commuting Killing vectors which can be used to define a family of locally nonrotating observers. The 5-velocity unit vector of these observers is given by where α is determined by the condition u 2 = −1. The defining relations u · ξ (φ) = 0 and u·ξ (ψ) = 0 allow us to determine the coordinate angular velocities Ω a and Ω b of the observers (see e.g. [28] for some details). Performing straightforward calculations, we find that .
At large distances, as follows from these expressions, the bi-dragging property of the metric is governed by the remarkably simple formulas We note that for vanishing rotation parameter a = 0 (or b = 0), the bi-dragging still occurs due to the electric charge of the black hole. The effect disappears at spatial infinity, while it increases towards the horizon and for ∆ = 0, expressions (11) and (12) reduce to the angular velocities of the horizon [27]. We have where the horizon area A is given by With these quantities in mind, we can now introduce a co-rotating Killing vector defined as It is straightforward to show that the norm of this vector vanishes on the horizon, showing that it coincides with the null geodesic generators of the horizon. Using this vector one can calculate the surface gravity κ of the horizon and hence its Hawking temperature T H . We find that where we have used expressions (6) and (14). The co-rotating Killing vector can also be used to calculate the electrostatic potential of the horizon. Indeed, by means of potential one-form (4) and expressions (14), we find that the electrostatic potential of the horizon, relative to an infinitely distant observer, is given by Remarkably, the CCLP solution (1) admits hidden symmetries which are generated by a second-rank Killing tensor [25,29], in addition to its global symmetries given by Killing vectors (9). As a consequence, the geodesic and scalar field equations separate in this metric, ensuring their complete integrability. Below, we proceed with the separation of variables in the Klein-Gordon equation for a massless scalar field.
III. KLEIN-GORDON EQUATION
Let us consider a charged massless scalar field which obeys the Klein-Gordon equation where m φ and m ψ are "magnetic" quantum numbers associated with rotation in the φ and ψ directions. The angular function S(θ) obeys the equation where we have used the freedom of shifting the separation constant, λ → λ + const. As is known [30], this equation when accompanied by regular boundary conditions at singular points θ = 0 and θ = π/2 defines a Sturm-Liouville problem. The associated eigenvalues are λ = λ l (ω), where l is an integer which can be thought of as an "orbital" quantum number.
The solution is given by the five-dimensional spheroidal functions S(θ) = S ℓ m φ m ψ (θ|aω , bω), which form a complete set over the integer l. For nonvanishing rotation parameters, but for a 2 ω 2 ≪ 1 and b 2 ω 2 ≪ 1, one can show that where l must obey the condition l ≥ m φ + m ψ [30].
The radial equation for R(r), by performing a few algebraic manipulations, can be cast where 8 For vanishing electric charge, Q = 0, these expressions go over into those obtained in [30].
They also agree with the vanishing cosmological constant limit of the expressions given in [25]. Next, it proves useful to transform the radial equation into a Schrödinger form. For this purpose, we introduce a new radial function R and a new radial coordinate r * , which are defined by the relations Using these definitions, we rewrite the radial equation (22) in the Schrödinger form where the "effective" potential is given by and we have also used the notation We are now interested in the behavior of the radial equation in the asymptotic regions, at spatial infinity r * → ∞ and at the horizon r * → −∞ (r → r + ), where the effective potential (26) takes the form where T A and R A are the transmission and reflection amplitudes respectively. The complexconjugate of these asymptotic forms corresponds to the associated complex-conjugate solution of equation (25) as the effective potential V (r) is a real quantity. Clearly, these two solutions are linearly independent and using the constancy of their Wronskian, we find that the transmission and reflection amplitudes obey the relation where have introduced the threshold frequency It follows that for the frequency range given by the inequality the reflected wave has greater amplitude than the incident one, |R A | 2 > 1, i.e. the superradiant effect appears. We note that the presence of the electric charge changes the threshold frequency of superradiance. This occurs not only due to the nonvanishing electrostatic potential of the horizon but also because of the gravimagnetic bi-dragging contribution to its angular velocities.
IV. SOLUTIONS
The analysis of singularity structure of the radial equation (22) reveals that solutions to this equation possess an essential singularity. This means that one can not use the familiar techniques, employed in the theory of ordinary linear differential equations, to find the general solutions to this equation (see e.g. [31]). On the other hand, one can certainly find such solutions to some approximated versions of this equation, which are applicable in various regions of the spacetime. In what follows, we are interested in solutions at low frequencies i.e., when the Compton wavelength of the scalar particle is much larger than the horizon radius of the black hole. Following the work of Starobinsky [10], we divide the spacetime into the near-horizon and far regions and approximate equation (22) for each of these regions. Solving then the resulting equations with appropriate boundary conditions and performing an appropriate matching of the solutions in the overlap between the near and far regions, we obtain the complete solution at low frequencies. Below, we discuss these equations and solutions to them in each region under consideration as well as the matching procedure in the overlap between these regions.
A. Near-Region
In the region near the horizon, r − r + ≪ 1/ω, and for low-frequency perturbations where we have used relations (15) and (21), assuming slow rotation as well. Next, using a new dimensionless variable one can show that equation (33) reduces to the hypergeometric type equation where This equation can be solved in a standard way by the ansatz where F (z) = F (α , β , γ, z) is the hypergeometric function, obeying the equation and the parameters are given by Thus, the general solution to equation (35) can be written in terms of two linearly independent solutions of equation (38). We need the physical solution that reduces to the ingoing wave at the horizon, z → 0. It is given by where A in (+) is a constant. Furthermore, in an overlapping region the large r behavior of this solution should be compared with the small r behavior of the far-region solution. Therefore, we also need the large r (z → 1) limit of solution (40) which can be easily found by using the pertinent modular properties of the hypergeometric functions [32]. We find that the large r behavior of the near-horizon region solution is given by It is important to note that in this expansion the first term inside the square bracket requires a special care as the quotient of two gamma functions Γ(−l − 1)/Γ(−l/2) becomes divergent for some values of l. We will return to this issue in more detail below.
B. Far-Region
In the far-region, r − r + ≫ r + , equation (22) can be approximated by Using the ansatz R = u/r and rescaling radial variable as x = ωr, one can show that equation (42) reduces to the standard Bessel equation given by As is known [32], the general solution of this equation is a linear combination of the Bessel and Neumann functions. We have where A ∞ and B ∞ are constants. Though this solution refers only to large r region, but for small x (ωr ≪ 1) it might also have a limiting behavior, which indicates on an overlapping regime of validity with the large r form of the near-horizon solution (41). For small ωr, using the asymptotic forms of the Bessel and Neumann functions, we find that For some further purposes, it may also be useful to know the large ωr behavior of solution (44), which is given by where, as expected, the first term refers to an ingoing wave and the second term corresponds to an outgoing wave. solutions with logarithmic terms will inevitably appear. This makes the matching procedure impossible for odd l, as noted in [26]. However, assuming that l is not exactly, but nearly integral one can avoid the appearance of solutions with logarithmic terms and proceed with the matching procedure. This assumption is well motivated when one remembers that indeed equation (21) gives an approximate definition of l, with omitted terms involving a 2 ω 2 and b 2 ω 2 . Clearly, such a definition in general requires a more precise formula for λ, determined by equation (20) (see e.g. [33]).
Thus, we will henceforth assume that l is an approximate integral. With this in mind, we compare equations (41) and (45) and see that there exists an overlapping regime of validity (r + ≪ r −r + ≪ 1/ω) for the near-horizon and far region solutions. Performing the matching in this regime, we find that the defining amplitude ratios are given by We are now in position to proceed with the superradiant instability of the rotating black hole by placing a reflecting mirror around it.
V. REFLECTING MIRROR AND NEGATIVE DAMPING
As we have described in the introduction, one of the most striking application of the superradiant effect in four dimensions amounts to exploring the black hole-mirror system, which under certain condition acts as a black hole bomb [9]. In this section, we wish to explore this phenomenon in five dimensions, using the model which consists of a rotating black hole of minimal ungauged supergravity [27], and a reflecting mirror located at a large distance L from the black hole (L ≫ r + ) . We assume that the mirror perfectly reflects lowfrequency scalar waves, so that on the surface of the mirror one must impose the vanishing field condition. This, by equation (44), yields This condition, when combined with that requiring a purely ingoing wave at the horizon, defines a characteristic-value problem for the confined spectrum of the low-frequency solution, discussed above. Such a spectrum would be quasinormal with complex frequencies whose imaginary part describes the damping of modes, as can be seen from equation (19).
When the imaginary part is positive, a characteristic mode undergoes exponential growth (the negative damping). In this case, the system will develop instability, creating a black hole bomb.
Comparing now equations (48) and (49), we obtain the defining transcendental equation for the frequency spectrum which can be solved by iteration in the low-frequency approximation. Let us assume that the solution to this equation can be written in the form where n is a non-negative integer, ω n describes the discrete frequency spectrum of free modes and δ is supposed to be a small damping parameter, representing a "response" to the ingoing wave condition at the horizon. Using this in equation (50) it is easy to see that, in lowest approximation, ω n is simply given by the real roots of the Bessel function. Thus, we have where the quantity j l+1 , n represents the n-th root (greater than zero) of the equation J l+1 (ω n L) = 0. A detailed list of these roots can be found in [32]. They can also be easily tabulated using Mathematica. On the other hand, for large overtones of the fundamental frequency (n ≫ 1) one can appeal to the asymptotic form of the Bessel function, which gives the simple formula j l+1 , n ≃ π (n + l/2) .
It should be noted that formula (52) generalizes to five dimensions the familiar flat spacetime result for the frequency spectrum in an infinitely deep spherical potential well [34].
Next, substituting equations (51) and (52) in equation (50) and performing a few algebraic manipulations, to first order in δ, we find that the damping parameter is given by Here the prime denotes the derivative of the Bessel function with respect to its argument and the quantity Ω, as follows from equation (36), is given by Comparing this expression with that given in (52), we see that the superradiant effect crucially depends on the distance L at which the mirror is placed just as in four dimensions [23]. That is, for a critical distance governing the fundamental frequency, the effect ceases to exist. To proceed further, it is useful to simplify separately the product of the quotients of gamma functions in the second line of equation (54). Using the well known relation Γ(z)Γ(1 − z) = π/ sin πz, it is straightforward to show that Substituting now these relations in equation (54), we have where we have changed the overall sign, taking the absolute value of the quotient , since it is always negative in the physically acceptable frequency range. Recalling that here l is nearly integral, we can further simplify this equation by specifying l. Let us now assume that l approaches either even or odd integers. That is, we consider the following cases; (i) l/2 = p+ǫ, where p is a non-negative integer and ǫ → 0. Substituting this in expression (58), we find that its imaginary part vanishes in the limit ǫ → 0, whereas the real part is given by In obtaining this expression we have used the identity which can be easily obtained from the pertinent properties of gamma functions [32]. It should be noted that indeed in the case under consideration, there are no divergencies in expression (58) when ǫ → 0, so that throughout the calculations one can simply set ǫ equal to zero. Turning back to equation (59), we see that its sign is entirely determined by the sign of the quantity Ω, becoming positive in the superradiant regime, Ω < 0. Thus, for all modes of even l we have the negative damping effect, resulting in exponential growth of their amplitudes.
(ii) l/2 = (p + 1/2) + ǫ, again p is a non-negative integer and ǫ → 0. Inserting this in expression (58), we need to consider the limit as ǫ → 0. After performing a few straightforward calculations, we obtain that Using the properties of gamma functions [32], resulting in the relations one can further simplify the combination of gamma functions appearing in equation (61).
Finally, we have It is easy see that this expression possesses two important features: first, its real part that describes the damping of the modes changes the sign in the superradiant regime, Ω < 0.
This means that all modes of odd l may become supperradiant as well, resulting in the instability of the system. Meanwhile, the sign changing does not occur for the imaginary part, which is not sensitive to superradiance at all. Second, the imaginary part involves 1/ǫ type divergence as ǫ → 0. However, this divergence can somewhat be smoothed out by using the fact that the quantity r + is indeed small, in accordance with the regime of validity of the low-frequency solution constructed above. Thus, for a given radius L of the mirror and for the lowest mode (p = 0), the ratio (r 2 + − r 2 − ) 2 /ǫ appearing in the imaginary part can be fixed as finite, to high accuracy. The accuracy considerably increases for higher modes, as can be seen from (64). This would result in a small frequency-shift in the spectrum. These arguments is further supported by a numerical analysis of expression (64).
In Table I we present the numerical results for a charged nonrotating black hole. For the extreme charge of the black hole, we have Q e = r 2 + , as follows from expressions (6) and (7), and we take L = 1, for certainty. The calculations are performed for the parameters r + = 0.01, e = 10 and for the lowest modes as l approaches even or odd integers. We see Table II gives a summary of the numerical analysis of the damping parameter for a singly rotating black hole with zero electric charge, Q = 0. It follows that the superradiant instability occurs to all l modes of scalar perturbations under consideration. Again, we a have a small frequency-shift for the l = 1 mode, by choosing ǫ → 10 −7 .
Thus, we conclude that in the black hole-mirror model under consideration, all l modes of scalar perturbations become unstable in the regime of superradiance, exponentially growing their amplitudes with characteristic time scale τ = 1/σ. In addition, the modes of odd l undergo small frequency-shifts in the spectrum.
VI. CONCLUSION
The superradiant instabilities of black hole-mirror systems as well as small AdS black holes in four-dimensional spacetimes have been extensively studied in [23,24] by employing both analytical and numerical approaches. The analytical approach is based on a matching procedure, first introduced by Starobinsky [10], that allows one to find the complete lowfrequency solution to the Klein-Gordon equation by matching the near-horizon and far regions solutions in their overlap region. In our earlier work [25], using a similar analytical approach we gave the quantitative description of the superradiant instability of small rotating charged AdS black holes in five dimensions. In a recent development [26], this investigation was continued for small Reissner-Nordström-AdS black holes in all spacetime dimensions.
Here it was also pointed out that in odd spacetime dimensions, the matching procedure In the black hole-mirror system, we have defined a characteristic-value problem for the confined (quasinormal) spectrum of the low-frequency solution and calculated the complex frequencies of the spectrum. Using an iterative approach, we have found the general expression for the imaginary part (for the small damping parameter) of the quasinormal spectrum, which appeared to be a complex quantity. Next, taking the limit as l approaches an even integer, we have shown that the imaginary part of the damping parameter vanishes identically, whereas its real part becomes positive in the superradiant regime. Thus, all modes of even l undergo negative damping, resulting in exponential growth of their amplitudes.
Meanwhile, in the limit as l approaches an odd integer, the damping parameter remains complex whose real part is positive in the superradiant regime, thereby showing that all modes of odd l become unstable as well. As for the imaginary part, its sign appears to be not sensitive to superradiance at all. We have argued that to high accuracy, the imaginary part of the damping parameter can be considered as representing a small frequency-shift in the spectrum, as discussed at the end of Sec. V.
Finally, we have concluded that that in the five-dimensional black hole-mirror system, all l modes of scalar perturbations undergo negative damping in the regime of superradiance, exponentially growing their amplitudes and thus creating the black hole bomb effect in five dimensions.
VII. ACKNOWLEDGMENTS
The author thank Ekrem Ç alkılıç and H. Hüsnü Gündüz for their stimulating encourage- | 6,924.4 | 2014-08-19T00:00:00.000 | [
"Physics"
] |
Analysis of the structure of Ξ(1690)\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\Xi (1690)$$\end{document} through its decays
The mass and pole residue of the first orbitally and radially excited Ξ\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ \Xi $$\end{document} state as well as the ground state residue are calculated by means of the two-point QCD sum rules. Using the obtained results for the spectroscopic parameters, the strong coupling constants relevant to the decays Ξ(1690)→ΣK\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\Xi (1690)\rightarrow \Sigma K$$\end{document} and Ξ(1690)→ΛK\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\Xi (1690) \rightarrow \Lambda K$$\end{document} are calculated within the light-cone QCD sum rules and width of these decay channels are estimated. The obtained results for the mass of Ξ~\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ {\widetilde{\Xi }}$$\end{document} and ratio of the Br(Ξ~→ΣK)/Br(Ξ~→ΛK)\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$Br(\widetilde{\Xi }\rightarrow \Sigma K)/Br(\widetilde{\Xi }\rightarrow \Lambda K)$$\end{document}, with Ξ~\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ \widetilde{\Xi } $$\end{document} representing the orbitally excited state in Ξ\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ \Xi $$\end{document} channel, are in nice agreement with the experimental data of the Belle Collaboration. This allows us to conclude that the Ξ(1690)\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\Xi (1690)$$\end{document} state, most probably, has negative parity.
Introduction
Understanding the spectrum of baryons and looking for new baryonic states constitute one of the main research directions in hadron physics. Impressive developments of experimental techniques allow discovery of many new hadrons. Despite these developments, the spectrum of baryon is still not well established. This is due to the absence of high intensity anti-kaon beams and small production rate of the resonances. At present time only the ground state octet and decuplet baryons as well as (1320) and (1530) baryons are well established. Up to present time the quantum numbers of (1690), (1820) and (1950) have not been determined. Theoretically, the spectrum of baryon, within different approaches, have been studied intensively (see [1][2][3][4][5][6][7][8][9] and references therein).
The main results of these studies are that different phenomenological models explain successfully the nature of a e-mail<EMAIL_ADDRESS>(1320) and (1530) states. However, these approaches predict controversially results for other excitations of baryons. In [8] using the nonrelativistic quark model the mass of (1690) is calculated and it is obtained that it might be radial excitation of with J P = 1 2 + . This result was then supported by the quark model calculations in [5]. However within the relativistic quark model in [9] it was established that the first radial excitation should have mass around 1840 MeV. In [4] the authors suggested that the (1690) state might be orbital excitation of with J P = 1 2 − . This point of view was supported by calculations performed within Skyrme model [2] and chiral quark model [7]. The controversy results suggests independent analysis for establishing the nature of (1690) state.
In the present study, within the light cone QCD sum rules, we estimate the widths of the → K and → K transitions. We suggest that (1690) state may be radial ( ) or orbital ( ) excitation of baryon. For establishing these decays we need to know the residue of (1690) as well as the strong coupling constants for these decays. For calculation of the mass and residue of the states as the main inputs of the calculations we employ the two point QCD sum rule method.
The paper is arranged as follows. In Sect. 2 the mass and residue of (1690) baryon within both scenarios, namely considering (1690) as the orbital and radial excitations of baryon, are calculated. In Sect. 3 we present the calculations of the strong coupling constants defining the (1690) → ( )K transitions within both scenarios. By using the obtained results for the coupling constants we estimate the relevant decay widths and compare our predictions on decay widths with the existing experimental data in this section, as well. We reserve Sect. 1 for the concluding remarks and some lengthy expressions are moved to the Appendix.
Mass and pole residue of the first orbitally and radially excited state
For calculation of the widths of → K and → K decays we need to know the residues of , and baryons. In present work we consider two possible scenarios about nature of the (1690): (a) it is represented as radial excitation of the ground state . In other words it carries the same quantum numbers as the ground state , i.e. J P = 1 2 The (1690) state is considered as first orbital excitation of the ground state , i.e. it is negative parity baryon with J P = 1 2 − . In the following we will try to answer the question that which scenario is realized in nature? To answer this question we will calculate the mass of (1690) state and decay width of the → K and → K transitions and then compare the ratio of these decays as well as the prediction on the mass with existing experimental data. Note that the BABAR Collaboration has measured the mass (m = 1684.7±1.3 +2.2 −1.6 ) MeV and width ( = 8.1 +3.9+1.0 −3.5−0.9 ) MeV of (1690) [10,11] and Belle Collaboration has measured the mass (m = 1688± 2) MeV and width ( = 11 ± 4) MeV of this state as well as the ratio B( (1690) 0 →K − + ) B( (1690) 0 →K 0 0 ) . The experimental value for this ratio measured by Belle is 0.50 ± 0.26 [12].
For determination of the mass and residue of baryon, we start with the following two point correlation function: where η (x) is the interpolating current for state with spin J P = 1 2 + and T indicates the time ordering operator. The general form of the interpolating current for the spin-1 2 baryon can be written as [13,14]: where a, b, c are the color indices and β is an arbitrary parameter with β = −1 corresponding to the Ioffe current. C is the charge conjugation operator. According to the general philosophy of QCD sum rules method, for calculation of the mass and residue of baryons the correlation function needs to be calculated in two different ways: (a) in terms of hadronic degrees of freedom and (b) in terms of perturbative and vacuum-condensates contributions expressed as functions of QCD degrees of freedom in deep-Euclidian domain q 2 0. After equating these two representations, the desired QCD sum rules for the physical quantities of the baryons under consideration are obtained. As already noted, the quantum numbers J P of (1690) state have not been determined via experiments yet. Therefore, firstly we consider the case when (1690) represents a negative parity baryon. The hadronic side of the correlation function is obtained by inserting complete sets of relevant intermediate states. For calculation of the hadronic side of the correlation function, we would like to note that the above interpolating current has nonzero matrix element with baryons of both parities. Taking into account this fact and saturating the correlation function by complete sets of intermediate states with both parities we obtain: where m, m and s, s are the masses and spins of the ground and first orbitally excited baryons, respectively. Here dots represent the contributions of higher states and continuum. The matrix elements in Eq. (3) are determined as Here λ and λ are the residues of the ground and first orbitally excited baryons, respectively. Using Eqs. (3) and (4) and performing summation over the spins of corresponding baryons, we obtain We perform Borel transformation in order to suppress the contribution of higher state and continuum, where M 2 is the square of Borel mass parameter. The correlation function from QCD side can be calculated by inserting Eq. (2) to Eq. (1) and usage of Wick's theorem to contract the quark fields. As a result we have an expression in terms of the involved quark propagators having perturbative and non-perturbative contributions. For calculation of these contributions we need explicit expressions of the light quark propagators. By using the light quark propagators in the coordinate space and performing the Fourier and Borel transformations, as well as performing the continuum subtraction by using the hadron-quark duality ansatz, after lengthy calculations, for the correlation function we obtain where, the expressions for B QCD 1 and B 2 QCD are presented in Appendix.
Having calculated both the hadronic and QCD sides of the correlation function, we match the coefficients of the corresponding structures / q and I from these representations to find the following sum rules: From these equations one can easily find: where (2) . The sum rules for mass and residue of the radially excited state are obtained from Eq. (8) by replacements m → −m and λ → λ . Note that, there are other approaches to separate the contributions of the positive and negative parity baryons (for instance see [15][16][17][18]).
The sum rules for the mass and residue of the orbitally and radially excited state of the baryon as well as the residue of the ground state contain many input parameters. Their values are presented in Table 1. For performing analysis of widths of the → K and → K decays in next section, we also need the residues of the and baryons. We use the values of these residues calculated via QCD sum rules [20]. The mass of the ground state is taken as input parameter, as well. Besides these input parameters, QCD sum rules contains three auxiliary parameters, namely the value of continuum threshold s 0 , Borel mass square M 2 and β arbi- trary parameter. Obviously any measurable physical quantity must be independent of these parameters. Hence we need to find the working regions of these parameters, where physical quantities demonstrate good stability agains the variations of these parameters. The window for M 2 is obtained by requiring that the series of operator product expansion (OPE) in QCD side is convergent and the contribution of higher states and continuum is sufficiently suppressed. Numerical analyses lead to the conclusion that both conditions are satisfied in the domain The considerations of the pole dominance and OPE convergence lead to the following working window for the continuum threshold: In Figs In order to find the working region of β, as an example in Fig. 5 we present the dependence of the ground-state baryon's residue on the cos θ . From this figure we see that the residue is practically insensitive to the variations of cos θ in the domains We depict the numerical results of the masses and residues of the first orbitally and radially excited baryon as well as the ground state residue in Table 2. The errors in the presented results are due to the uncertainties in determinations of the working regions of the auxiliary parameters as well as the errors of other input parameters. From this table we see that although consistent with the experimental data [10][11][12], the radial and orbital excitation of receive the same mass, which prevent us to assign any quantum numbers to (1690) only via mass calculations. The residue of these two states are obtained to be differ from each other by a factor of roughly three.
and transitions To K and K
In present section we calculate the strong couplings g K , g K , g K and g K defining the → K , → K , → K and → K transitions. For this aim we introduce the correlation function (12) where η (x) and η (x) are the interpolating currents for the and baryons, respectively. The general forms of these currents are taken as [13,14] where a, b, c are color indices, C is the charge conjugation operator and A 1 According to the method used, we again calculate the aforesaid correlation function in two representations: hadronic and QCD. Matching these two sides through a dispersion relation leads to the sum rules for the coupling constants under consideration.
Firstly let us consider the → K transition. As we already noted, the interpolating currents for baryons can interact with both the positive and negative parity baryons. In what follows, we denote the ground state positive (negative) parity baryons with ( ) and ( ). Taking into account this fact, inserting complete sets of hadrons with the same quantum numbers as the interpolating currents and isolating the ground states, we obtain where p = p + q, p and q are the momenta of the , baryons and K meson, respectively. In this expression m is the mass of the baryon. The dots in Eq. (14) stand for contributions of the higher resonances and continuum states. The matrix elements in Eq. (14) are determined as 0|η | ( p, s) = λ γ 5 u( p, s), where g i are the strong coupling constants for the corresponding transitions.
Using the matrix elements given in Eq. (15) and performing summation over spins of and baryons and applying the double Borel transformations with respect p 2 and p 2 for physical side of the correlation function we get where M 2 1 and M 2 2 are the Borel parameters. From Eq. (16) it follows that we have different structures which can be used to obtain sum rules for the strong coupling constant of → K channel. We have four couplings (see Eq. 16), and in order to determine the coupling g K we need four equations. Therefore we select the structures / q / pγ 5 , / pγ 5 , / qγ 5 and γ 5 . Solving four algebraic equations for g K , finally we get are the invariant amplitudes corresponding to the structures / q / pγ 5 , / pγ 5 , / qγ 5 and γ 5 for → K decay, respectively.
If we carry out the same procedures for → K decay, for the coupling constant g K we obtain: are the invariant amplitudes corresponding to the structures / q / pγ 5 , / pγ 5 , / qγ 5 and γ 5 for → K decay, respectively.
The general expressions obtained above contain two Borel parameters M 2 1 and M 2 1 . In our analysis we choose since the masses of the involved and ( ) are close to each other. The sum rules for the coupling constants for → K and → K transitions can be easily obtained from Eqs. (17) and (18), by replacing m → −m and λ → λ .
The OPE side of the correlation function OPE ( p, q) can be obtained by inserting the corresponding interpolating currents to the correlation function, using Wick's theorem to contract the quark fields, and inserting into the obtained expression the relevant quark propagators. The nonperturbative contributions in light cone QCD sum rules, which are described in terms of the K -meson distribution amplitudes (DAs), can be obtained by using Fierz rearrangement formula where i = 1, γ 5 , γ μ , iγ 5 γ μ , σ μν / √ 2 is the full set of Dirac matrices. The matrix elements of these terms between the K -meson and vacuum states, as well as ones generated by insertion of the gluon field strength tensor G λρ (uv) from quark propagators, are determined in terms of the K -meson DAs with definite twists. The DAs are main nonperturbative inputs of light cone QCD sum rules. The K -meson distribution amplitudes are derived in [21][22][23] which will be used in our numerical analysis. All of these steps summarized above result in lengthy expression for the OPE side of correlation function. In order not to overwhelm the study with overlong mathematical expressions we prefer not to present them here. Apart from parameters in the distribution amplitudes, the sum rules for the couplings depend also on numerical values of the and baryon's mass and pole residue, which are given in Table 1. Note that the working region of the Borel mass M 2 , threshold s 0 and β parameters for calculations of the relevant couplings are chosen the same as in the residue and mass computations. Performing numerical analysis for the relevant coupling constants we get values presented in Table 3. Using the couplings g K , g K g K and g K we can easily calculate the width of → K , → K , → K and → K decays. After some computations we obtain: and In expressions above the function λ(x 2 , y 2 , z 2 ) is given as: λ(x 2 , y 2 , z 2 ) = x 4 + y 4 + z 4 − 2x 2 y 2 − 2x 2 z 2 − 2y 2 z 2 .
The expressions for the widths of the → K and → K can be easily obtained from Eqs. (20) and (21), by the replacement m → m .
Using the values of coupling constants and formulas for the decay widths we obtain the values of the partial width at different decay channels presented in Table 3.
Using the values of the partial decay widths from Table 3, we finally obtain the ratio of the branching fractions in channel as
Br
→ K − Br → K 0 = 0.50 ± 0.14, and for channel we get As is seen, the obtained value for the ratio of the branching fractions in channel is in nice consistency with the experimental data of Belle Collaboration [12]: Note that in [24], within the coupled channel approach, a very similar results has been found. The authors have concluded that the (1690) has spin-1/2, but its parity has not been established. Our prediction for the corresponding ratio in channel is considerably small compared to the experimental data. From these results and those for the values of the corresponding masses we conclude that the (1690) state, most probably, has quantum numbers 1 2 − , i.e. it represents a negative parity spin-1/2 baryon. | 4,237.8 | 2018-05-01T00:00:00.000 | [
"Physics",
"Mathematics"
] |
Effect of sperm concentration on boar spermatozoa mitochondrial membrane potential and motility in semen stored at 17 °C
The aim of the study was to assess the effect of sperm concentration in the ejaculate on the mitochondrial membrane potential and motility of Landrace boar spermatozoa during storage of diluted semen at 17 °C. The study was conducted on ejaculates collected from 10 boars aged 1.5–2 years. Based on sperm concentration measurements, two groups of boars were identified: Group 1 – boars providing ejaculates with a sperm concentration of at least 500 × 103/mm3 and Group 2 – boars providing ejaculates with a sperm concentration of less than 500 × 103/mm3. Four ejaculates were collected manually from each boar. Each ejaculate was diluted with Biosolvens Plus diluent, and insemination doses were prepared and stored at 17 °C. Mitochondrial membrane potential and motility of spermatozoa were evaluated at each insemination dose. The tests were carried out after 1, 24, 48, 96 and 168 h of storage. Based on the results, it was found that ejaculates with a sperm concentration ≥ 500 × 103/mm3 have a lower share of spermatozoa with high mitochondrial membrane potential than ejaculates with a sperm concentration below 500 × 103/mm3. A high correlation between the share of spermatozoa with a high mitochondrial membrane potential and motility of spermatozoa was demonstrated in the first 24 h and after 96 h of semen storage, which was confirmed by the calculated phenotypic correlation coefficients. Sperm cells in ejaculates with a higher sperm concentration are more sensitive to storage time than spermatozoa in ejaculates with a lower concentration. Ejaculate, pigs, semen quality Insemination of pigs plays an important role in the reproduction of this animal species. It is the most common method of breeding pigs in the world that allows increasing the genetic progress of the swine population. Boars used in insemination are usually characterized by good production-related predisposition, as well as high breeding value. High variability observed in the physical characteristics of ejaculates, which affects the economics of using a given individual is a fairly serious problem faced by sow insemination stations. The source of this variation can be the boar breed (Schulze et al. 2014; Yeste 2016; Wysokińska and Kondracki 2019), age (Banaszewska et al. 2015), and environmental factors (Zasiadczyk et al. 2015; Kowalewski et al. 2016). Individual variation is also important in reproduction. This variability affects the male’s predisposition for insemination use as males provide semen with different physical characteristics of the ejaculate. The ejaculate volume, sperm concentration and sperm motility are determined immediately after collection. Based on these indices, the number of insemination doses that can be obtained from the ejaculate is determined, and this determines the economic efficiency of using the individual in insemination. Some studies have shown the effect of sperm concentration on head dimensions and sperm morphology (Kondracki et al. 2011; Kondracki et al. 2013). It is possible that sperm cell concentration may also affect the potential of the spermatozoon mitochondrial membrane. Maintenance of normal mitochondrial membrane potential is necessary for mitochondria to produce adenosine triphosphate (ATP) (Luo et al. 2013). Mitochondria are thought to be important organelles that can be used to assess semen quality. Due to the fact that mitochondria contain their own DNA and membrane potential, they are easy to study (Amaral and Ramalho-Santos 2010). Disorders present ACTA VET. BRNO 2020, 89: 333–340; https://doi.org/10.2754/avb202089040333 Address for correspondence: Siedlce University of Natural Sciences and Humanities Prusa 14 Str, 08-110 Siedlce, Poland Phone: +48 25 6431388 E-mail<EMAIL_ADDRESS>http://actavet.vfu.cz/
Insemination of pigs plays an important role in the reproduction of this animal species. It is the most common method of breeding pigs in the world that allows increasing the genetic progress of the swine population. Boars used in insemination are usually characterized by good production-related predisposition, as well as high breeding value. High variability observed in the physical characteristics of ejaculates, which affects the economics of using a given individual is a fairly serious problem faced by sow insemination stations. The source of this variation can be the boar breed (Schulze et al. 2014;Yeste 2016;Wysokińska and Kondracki 2019), age (Banaszewska et al. 2015), and environmental factors (Zasiadczyk et al. 2015;Kowalewski et al. 2016). Individual variation is also important in reproduction. This variability affects the male's predisposition for insemination use as males provide semen with different physical characteristics of the ejaculate. The ejaculate volume, sperm concentration and sperm motility are determined immediately after collection. Based on these indices, the number of insemination doses that can be obtained from the ejaculate is determined, and this determines the economic efficiency of using the individual in insemination. Some studies have shown the effect of sperm concentration on head dimensions and sperm morphology (Kondracki et al. 2011;Kondracki et al. 2013). It is possible that sperm cell concentration may also affect the potential of the spermatozoon mitochondrial membrane. Maintenance of normal mitochondrial membrane potential is necessary for mitochondria to produce adenosine triphosphate (ATP) (Luo et al. 2013). Mitochondria are thought to be important organelles that can be used to assess semen quality. Due to the fact that mitochondria contain their own DNA and membrane potential, they are easy to study (Amaral and Ramalho-Santos 2010). Disorders present in the mitochondria of the spermatozoon insert may be the cause of reduced sperm motility and, consequently, lower egg cell fertilization efficiency.
Mainly semen diluted and stored in a liquid state at 17 °C is used in insemination of pigs. Because of cell membrane structure, boar spermatozoa are particularly sensitive to activities performed during laboratory treatment and storage conditions (Vyt et al. 2007;Lopez Rodriguez et al. 2012;Schulze et al. 2013) and sperm transport (Schulze et al. 2018). It was found that during storage of diluted boar semen, there are changes in the integrity of sperm cell membranes (Gączarzewicz et al. 2010;Wysokińska and Kondracki 2014;Wysokińska et al. 2015). Some studies have shown that mitochondria are organelles that are the most damaged during sperm preservation (Ball 2008;Gonzalez-Fernandez et al. 2012).
The aim of this study was to assess the effect of sperm concentration in an ejaculate on the mitochondrial membrane potential and motility of Landrace boar spermatozoa during the storage of diluted semen at 17 °C.
Animals and semen collection
The study involved 10 insemination boars of the Polish Landrace breed. The studied boars were 1.5-2 years old. All boars were healthy, kept in individual pens of an area of 6.3 m 2 on a concrete floor with thermal and humidity insulation. Boars were fed individually with a granulated complete mixture, normalized according to boar feeding standards. Boars were guaranteed constant access to drinking water supplied via nipple drinkers. The subjects were selected for the study based on the assessment of sperm concentration measurements in all ejaculates collected every 4-5 days by manual method. In addition to the concentration of spermatozoa in each ejaculate, the volume of the ejaculate, the percentage of spermatozoa showing progressive movement and the total number of spermatozoa were determined ( Table 1). The concentration of spermatozoa was determined by the colorimetric method using an AccuRead photometer (IMV Technologies, L'Aigle, France). Based on sperm concentration measurements, two groups of boars were identified: Group 1 -boars providing ejaculates with a sperm concentration of at least 500 × 10 3 /mm 3 and Group 2 -boars providing ejaculates with a sperm concentration of less than 500 × 10 3 /mm 3 (Table 1). Four ejaculates were manually collected from each boar for testing. The ejaculate sperm were diluted in Biosolvens Plus (Biochefa, Sosnowiec, Poland) commercial extender so that there were 2.7 × 10 9 sperm in one insemination dose (plastic bags, 90 ml). After dilution, insemination doses were stored at room temperature for one hour. After this time, the first semen evaluation was carried out as described below. Subsequent assessments were carried out after 24, 48, 96 and 168 h, using other insemination doses that were opened immediately prior to testing. The diluted semen were stored at 17 °C.
Semen evaluation Assessment of mitochondrial activity
Assessment of sperm cell mitochondrial activity was performed using fluorochrome JC-1 (Molecular Probes, USA). JC-1 is a fluorochrome accumulating in mitochondria. Depending on the size of the mitochondrial membrane potential (ΔΨm), JC-1 aggregates are formed (when Δψm > 80-100 mV) or JC-1 monomers (when ΔΨm < 80-100 mV). One ml of solution (5 ml of JC-1 dissolved in dimethyl sulphoxide with 800 ml of distilled water added) was added to 1 ml of diluted semen (1.2 × 10 6 spermatozoa). The entirety was mixed and incubated for Table 1. Physical characteristics of boar ejaculates depending on the sperm concentration (mean ± standard error of the mean).
Item
Group of boars 1 (≥ 500 × 10 3 /mm 3 ) 2 (< 500 × 10 3 /mm 3 ) Number of boars 5 5 Semen volume (ml) 229.32 a ± 3.44 407.44 b ± 6.86 Sperm concentration (× 10 3 /mm 3 ) 634.34 a ± 6.80 335.60 b ± 4.28 Sperm motility (%) 74.83 a ± 0.30 74.60 a ± 0.26 Number of spermatozoa in the ejaculate (× 10 9 ) 106.41 a ± 1.57 97.07 b ± 1.58 a,b -values in rows marked with different superscripts differ significantly at P < 0.05 2 min at room temperature. Then 200 μl of JC-1 Staining Buffer 5 × was added -mixed by inversion and incubated for 20 min at 37 °C and 5% CO 2 humidity. After incubation, the semen was centrifuged at 600 × g for 3 min at 2-8 °C, the supernatant was removed and the spermatozoa-containing pellet was placed on ice. The spermatozoa were rinsed with 1 ml cold solution (400 μl JC-1 Staining Buffer 5 × diluted with 1600 μl distilled water). The prepared samples were stored for a maximum of 30 min on ice. A drop of semen was collected from each sample and placed on a microscope slide. In each preparation, 200 spermatozoa were evaluated, specifying spermatozoa with a high mitochondrial membrane potential (with high ΔΨm JC-1 aggregates) (sperm cells emitting orange fluorescence in the mid-piece region), with a medium mitochondrial membrane potential (with medium ΔΨm aggregates JC-1) (orange-green fluorescent spermatozoa in the mid-piece region), and with a low mitochondrial membrane potential (with low ΔΨm JC-1 aggregates) (green fluorescent sperm in the mid-piece region) (Plate II, Fig. 1). The evaluation of the sperm mitochondrial membrane potential was performed using a fluorescence microscope (Nikon Eclipse 50i, Tokyo, Japan).
Assessment of sperm motility Motility of spermatozoa was determined using the microscopic method by placing a drop of semen (5 μl) on a microscope slide heated to 37 °C, covering it with a 22 × 22 mm cover slide. The percentage of progressive spermatozoa was determined by microscopic examination using a Nikon Eclipse 50i light microscope (Tokyo, Japan) and a heating table (37 °C). At a × 400 magnification, the percentage of spermatozoa showing normal movement in the total number of spermatozoa visible in the microscope's field of view was determined.
Experimental data were analyzed using the program STATISTICA 13.1 PL (StatSoft, Tulsa, USA). Data were analyzed by ANOVA. All results were expressed as mean ± standard error of the mean (SEM).The significance of the differences between the groups was assessed using Tukey test at P < 0.05. Phenotypic correlation indices between sperm motility and sperm mitochondrial membrane potential were established based on Sperman's rank correlation coefficients.
Results
The percentage of spermatozoa with a high mitochondrial membrane potential depending on sperm concentration and semen storage time is shown in Fig. 2. Based on this data, it was found that there are differences in the number of spermatozoa with active mitochondria depending on sperm concentration and semen storage time. Boar semen from Group 1 (with a sperm concentration ≥ 500 × 10 3 /mm 3 ) had a lower number of sperm cells with a high mitochondrial membrane potential at 1, 96 and 168 h storage than Group 2 boar semen (< 500 × 10 3 /mm 3 sperm concentration). After the first hour of diluted semen storage, the difference between the groups was small and amounted to 2%. The largest difference in the number of spermatozoa with a high mitochondrial membrane potential between the examined groups was found at 96 and 168 h of semen storage (P < 0.05). A decrease in the number of spermatozoa with a high mitochondrial membrane potential was found with the semen storage time. The largest decrease in the number of spermatozoa with a high mitochondrial membrane potential was found in storage for 48 h in ejaculates with a sperm concentration ≥ 500 × 10 3 /mm 3 . The largest decrease in the share of spermatozoa with a high mitochondrial membrane potential can be seen in Group 1, where the difference between 1 and 168 h of semen storage was over 26%, while in Group 2 this difference was over 19%.
The percentage of spermatozoa with a medium mitochondrial membrane potential depending on sperm concentration and semen storage time is shown in Fig. 3. In the first 48 h of semen storage, there was a lower proportion of spermatozoa with an average mitochondrial membrane potential found in the ejaculates of boars from Group 1 compared to the ejaculates of boars from Group 2. After 96 h of semen storage, the share of spermatozoa with a medium mitochondrial membrane potential was greater in the semen of Group 1 boars than in the semen of Group 2 boars. Significant differences were found only after 168 h of semen storage. There was a noticeable increase in the share of spermatozoa with a medium mitochondrial membrane potential in both groups, depending on the semen storage time. In boar ejaculates of Group 1, changes in the share of spermatozoa with a medium mitochondrial membrane potential were more dynamic and an increase of 19.66% was observed between 1 and 168 h of storage, with an increase of 13.77% in Group 2. Fig. 2. Spermatozoa with a high mitochondrial membrane potential depending on the sperm concentration (Group 1 -sperm concentration ≥ 500 × 10 3 /mm 3 ; Group 2 -sperm concentration < 500 × 10 3 /mm 3 ) during the storage of diluted semen at 17 °C.
a, b -P < 0.05. Bars represent means ± standard error of the mean. Fig. 3. Spermatozoa with a medium mitochondrial membrane potential depending on the sperm concentration (Group 1 -sperm concentration ≥ 500 × 10 3 /mm 3 ; Group 2 -sperm concentration < 500 × 10 3 /mm 3 ) during the storage of diluted semen at 17 °C. a, b -P < 0.05. Bars represent means ± standard error of the mean. The results characterizing the percentage of spermatozoa with a low mitochondrial membrane potential depending on sperm concentration and semen storage time are shown in Fig. 4. In the group of boar ejaculates with a sperm concentration exceeding 500 × 10 3 /mm 3 , a greater proportion of spermatozoa with a low mitochondrial membrane potential was observed compared to Group 2. The largest differences between the groups were found at 1, 96 and 168 h of semen storage (P<0.05). The data presented in Fig. 4 also show changes in the number of spermatozoa with a low mitochondrial membrane potential, depending on the duration of semen storage. Group 1 showed an approximately 7% increase in the number of spermatozoa with a low mitochondrial membrane potential from 24 to 168 h of semen storage. Slightly different tendencies of changes were observed in Group 2. From 48 to 168 h of semen storage, the share of spermatozoa with a low mitochondrial membrane potential remained at a similar level. Fig. 4. Spermatozoa with a low mitochondrial membrane potential depending on the sperm concentration (Group 1 -sperm concentration ≥ 500 × 10 3 /mm 3 ; Group 2 -sperm concentration < 500 × 10 3 /mm 3 ) during the storage of diluted semen at 17 °C.
a, b -P < 0.05. Bars represent means ± standard error of the mean. Table 2 presents phenotypic correlation coefficients between the share of spermatozoa with a high mitochondrial membrane potential and the share of spermatozoa showing progressive movement at different times of semen storage. The presented data indicate that the phenotypic correlation coefficients between sperm cell motility and the share of spermatozoa with a high mitochondrial membrane potential are positive for most properties, high and demonstrate values between 0.38-0.64 (P < 0.05).
Discussion
The results of the present study indicate that the concentration of spermatozoa in the ejaculate affects the potential of the sperm cell mitochondrial membrane. Ejaculates with a sperm concentration ≥ 500 × 10 3 /mm 3 have a lower share of spermatozoa with a high mitochondrial membrane potential than ejaculates with a sperm concentration below 500 × 10 3 /mm 3 . This is an important observation for the practice of insemination-related boar use. The concentration of spermatozoa determined in each collected ejaculate is, apart from the volume and motility of the spermatozoa, an indicator of the number of insemination doses that can be obtained from the ejaculate. The number of insemination doses determines the economic efficiency of boar use in insemination. Our study shows that ejaculates with a high sperm concentration are inferior in terms of mitochondrial activity in spermatozoa. Having considered the foregoing, it can be assumed that sperm cells from ejaculates with a high concentration of spermatozoa will be less effective in the process of egg cell fertilization. Some studies have shown a correlation between the morphometric dimensions of spermatozoa and their concentration in the ejaculate (Górski et al. 2018). Finding differences in the head dimensions of spermatozoa may be helpful in recognizing fertile subjects and those with reduced fertility (Gravance et al. 1996;Czubaszek et al. 2019).
Mainly semen stored at 15-17 °C is used in pig insemination. No detailed tests are carried out to assess the cellular structure of spermatozoa during semen storage, especially the activity of mitochondria located along the sperm insert. Sperm cell mitochondria are very active, they are subject to numerous metabolic processes present within them. These organelles are also involved in cell differentiation, reactive oxygen species (ROS) generation and apoptosis (Piomboni et al. 2012). There are many publications describing various changes occurring in mitochondria. However, the basic meaning of this sperm cell structure is based on providing ATP energy for sperm cell movement (Srivastava and Pande 2016). Mitochondrial membrane potential is the parameter that best reflects the mitochondrial function, and it is an indicator of the mitochondrial energy state. The potential of mitochondrial membrane is most often determined using fluorochromes. JC-1 is considered the most accurate indicator of mitochondrial membrane potential measurement, and it is able to detect minimal changes occurring in sperm cell mitochondria (Marchetti et al. 2004). Studies conducted mainly on human semen showed a relationship between limited mitochondrial function, demonstrated by reduced mitochondrial membrane potential, and reduced sperm motility and, consequently, reduced fertilization capacity (Paoli et al. 2011). In our study, a high correlation between the share of spermatozoa with a high mitochondrial membrane potential and motility of spermatozoa was observed in the first 24 h and after 96 h of semen storage, which is confirmed by the calculated phenotypic correlation coefficients (Table 2). Figueroa et al. (2013) showed a relationship between the mitochondrial membrane potential and the motility of conserved sperm cells. It is possible that there may be changes in the mitochondria of the sperm insert during storage of boar semen. The data presented in this paper clearly show that there are greater changes in the potential of the sperm cell mitochondrial membrane in semen with a higher sperm concentration compared to semen with a lower sperm concentration.
In this study, it was found that spermatozoa from ejaculates with a lower sperm concentration (below 500 × 10 3 /mm 3 ) are less sensitive to semen storage conditions than spermatozoa from ejaculates with a sperm concentration ≥ 500 × 10 3 /mm 3 . Along with the semen storage time, there is a more intense decrease in the number of spermatozoa with a high mitochondrial membrane potential in ejaculates with a sperm concentration ≥ 500 × 10 3 /mm 3 than in ejaculates with a sperm concentration below 500 × 10 3 /mm 3 . The reduction in the percentage of spermatozoa with a high mitochondrial membrane potential observed in the present study, along with the extension of the semen storage time, may be associated with a change in plasma membrane permeability, which in turn may affect the mitochondrial membrane potential (Kumeresan et al. 2009). Studies on the assessment of sperm cell membrane integrity showed a decrease in the share of spermatozoa that retain normal structure of the cell membrane during semen storage (Wysokińska and Kondracki 2014). Other authors have also shown a deterioration in semen quality along with the extension of the storage time (Martín-Hidalgo et al. 2013;Wysokińska et al. 2015;Iljenkaite et al. 2020). In a study conducted by De Ambrogi et al. (2006), it was shown that sperm cell motility is significantly reduced after 72 h of storage.
A decrease in the percentage of spermatozoa showing progressive motion along with the time of semen storage was shown in this study. This decrease was more pronounced in ejaculates with a sperm cell concentration ≥ 500 × 10 3 /mm 3 than in ejaculates with a sperm cell concentration < 500 × 10 3 /mm 3 .
In conclusion, it should be noted that the concentration of spermatozoa in the ejaculates of Landrace boars affects the potential of the mitochondrial membrane and sperm cell motility. Significantly fewer spermatozoa with a high mitochondrial membrane potential are observed in ejaculates with a higher sperm concentration than in ejaculates with a lower sperm concentration. Sperm from ejaculates with a higher concentration are more sensitive to the storage time, which is reflected in the intense decrease in the number of spermatozoa with a high mitochondrial membrane potential occurring with the increasing semen storage time. Sperm cells in ejaculates with a higher sperm concentration are more sensitive to the storage time than spermatozoa in ejaculates with a lower sperm concentration. Special supervision for the potential of the mitochondrial membrane should include ejaculates with a high sperm concentration. | 5,173 | 2020-01-01T00:00:00.000 | [
"Agricultural And Food Sciences",
"Biology"
] |
Mechanical Properties of Ternary Composite from Waste Leather Fibers and Waste Polyamide Fibers with Acrylonitrile-Butadiene Rubber
This study aimed to improve the mechanical properties of a composite material consisting of waste leather fibers (LF) and nitrile rubber (NBR) by partially replacing LF with waste polyamide fibers (PA). A ternary recycled composite NBR/LF/PA was produced by a simple mixing method and vulcanized by compression molding. The mechanical properties and dynamic mechanical properties of the composite were investigated in detail. The results showed that the mechanical properties of NBR/LF/PA increased with an increase in the PA ratio. The highest tensile strength value of NBR/LF/PA was found to have increased about 1.26 times, that is from 12.9 MPa of LF50 to 16.3 MPa of LF25PA25. Additionally, the ternary composite demonstrated high hysteresis loss, which was confirmed by dynamic mechanical analysis (DMA). The presence of PA formed a non-woven network that significantly enhanced the abrasion resistance of the composite compared to NBR/LF. The failure mechanism was also analyzed through the observation of the failure surface using scanning electron microscopy (SEM). These findings suggest that the utilization of both waste fiber products together is a sustainable approach to reducing fibrous waste while improving the qualities of recycled rubber composites.
Introduction
The textile industry is one of the largest and most important industries worldwide, producing a vast array of products ranging from clothing and household textiles to industrial materials [1]. However, this industry also generates a significant amount of fibrous waste that is non-decomposable and perpetually contaminates the soil and groundwater system [2,3]. One such material is polyamide (PA) fibers, which are widely used in textile products due to their durability, strength, and versatility. Moreover, the substantial cumulating PA waste products makes them a serious environmental concern [4]. The estimated annual waste volume of PA in the world is approximately 200,000 tons [5]. Due to its substantial rise in demand, exceeding that of other plastic materials in recent years, it is important to incorporate this product category into sustainable development strategies. As a result, researchers developed various technologies to recycle this material, such as chemical recycling and mechanical recycling (remelting).
Chemical recycling aims to recover and reuse monomers as its primary objective. In this method, the reaction of PA with decomposing agent was determined by the existence of polar amide groups in the main polyamide chain. The depolymerization reaction could be ammonolysis [6], hydrolysis [7], and glycolysis [8]. The objective of PA depolymerization is to produce hexamethylenediamine and caprolactam, both of which can be effectively employed in the synthesis of new polyamides or other polymers. Mechanical recycling is considered the most straightforward method for recycling PA. Among the techniques with potential application, melt extrusion stands out as a viable approach. However, since polyamides are thermoplastic polymers, they cannot be further used after a limited number of times, as their properties are not maintained. Therefore, having well-defined and repetitious properties for recycled products, even after undergoing multiple processes, is highly significant. Lozano-González et al. [9] investigated the effect of recycling cycles on the physical and mechanical properties of nylon 6 using injection molding. The study demonstrated that PA6 can undergo up to seven cycles of injection molding without experiencing any notable deterioration in its physical and mechanical properties. Another sustainable approach to recycling is the utilization of waste fibers for the fabrication of composite materials, which proves beneficial from both an environmental and material development perspective [10][11][12][13].
Fundamentally, the incorporation of fiber reinforcement and polymer matrix results in composite materials that exhibit remarkable stiffness, strength, and toughness. The reinforcing fibers are typically categorized into three groups: "natural fibers" (i.e., bamboo, kenaf, sisal, jute) [14,15], "synthetic fibers" (i.e., carbon fibers, glass fibers, aramid fibers) [16] and "animal-based fibers" (i.e., wool, silk, leather fiber) [17]. The polymer matrix plays a crucial role in connecting the fibrous reinforcement from the surroundings and facilitating the appropriate alignment of the fibers. G.Jayalatha et al. [18] fabricated composite polystyrene(PS)/natural rubber(NR) with nylon-6 fibers by melt mixing method. The fibers were also treated by resorcinol formaldehyde latex (RFL) to enhance adhesion with matrix. The results showed that improving interphase adhesion between the fibers and PS/NR ensured the transfer of stress from the comparatively less sturdy matrix to the fiber and, thus, enhanced the mechanical properties. Composite of natural rubber (NR) and short nylon fiber were also investigated by S. Kutty et al. [19,20] and Senapati et al. [21]. These studies indicated that the incorporation of short nylon fibers into the composite significantly enhances its mechanical properties, specifically in terms of tensile strength and tear strength. However, a disadvantage is that the non-polar nature of NR results in weak interaction with the fibers, as reported in previous research [22]. To overcome this limitation, other polar rubbers, such as styrene-butadiene rubber (SBR) [23] and acrylonitrile-butadiene rubber (NBR) [24], were used as base polymers in fiber-reinforced composites due to their favorable adhesion with nylon fibers. On the other hand, C. Rajesh's group also investigated thermal and dielectric properties of nylon-6-reinforced NBR [25,26]. These studies showed that the improvements were not limited to mechanical properties, but also extended to thermal and dielectric properties of the composite.
In our previous work [27], a significant amount of waste leather fiber (LF) was introduced into NBR matrix to prepare recycled composite. The findings from that study identified the optimal LF/NBR ratio for reinforcement as 50/50. Building upon these results, the present study maintains this ratio and partially replaces it with waste PA fibers. As well as the recycling purpose, the objective of incorporating waste PA short fibers into LF/NBR compound is to enhance mechanical properties of LF/NBR composite. The curing behavior, tensile strength, and abrasion resistance, as well as the dynamic mechanical properties, were also thoroughly investigated and compared to those of the LF/NBR composite and pure NBR.
Material
Acrylonitrile-butadiene rubber (NBR), with acrylonitrile content of about 33%, was purchased from Kumho (Seoul, Korea). The waste leather was collected from Vietnamese Leather factories (Hung Yen industrial area). The waste polyamide fibers (PA) were col-lected from textile factories in Vietnam. Zinc oxide (ZnO) and stearic acid are commercial chemicals that were supplied from Henan Kingway Chemical Co., Ltd. (Zheng Zhou, China). Sulfur (S) and accelerator (TBBS) were bought from Lanxess (Köln, Germany).
Preparation of Ternary Composite NBR/LF/PA
The leather was first reduced in size following the procedure in our previous work [27] to obtain waste leather fiber (LF). The grinding process was conducted by a hammer mill operating at a rotor speed of 2000 rpm. The mill consisted of six rows of hammers and using a 40-mesh screen. The obtained waste PA fibers were then cut to an average of 40 mm in length. Both of waste fibers were dried by oven at 80 • C for two hours before the mixing process. The structure of LF and PA were characterized by FT-IR method and the results are shown Figures S1 and S2 (Supplementary Materials). The ternary composite was prepared by mixing NBR/LF/PA with curatives as shown in Table 1. The mixing was conducted using an internal mixer (Toyoseiky Labo Plastomil; Tokyo, Japan). Initially, NBR was masticated at 125 • C for 2 min to reduce its viscosity. Then, waste LF and PA were added gradually, in the ratios outlined in Table 1, to achieve a uniform dispersion. The curatives were added last to the compound, this process continued for 3 min. The rotor speed was controlled at 50 rpm. After completing the mixing process, flat sheets of the compound were obtained by using a two-roll mill with a set nip gap of 1.5 mm. The samples were vulcanized at 150 • C and 10 MPa by a compression molding for 20 min.
Measurements
Curing behaviors were analyzed by a Moving Die Rheometer (Toyoseiky RLR-4; Tokyo Japan) at 150 • C according to ISO 6502. The tensile and tear measurement were conducted by using a tensile testing machine (Intron 5582; Norwood, MA, USA). The testing procedure for tensile and tear was following the ASTM D412-D and ASTM D624-C standards, respectively. The abrasion test was performed by a rotary drum abrasion tester (GOTECH GT-7012-DA; Taichung, Taiwan) according to the DIN-53516. The normal force was controlled at 5N by a constant weight. The rotational speed of the cylinder was fixed at 40 rpm to ensure a rubbing length of 40m. Sandpaper with 60 grit was used in this abrasion test. After the wearing process, the samples were cleaned using a soft brush to remove the wear debris. The abrasion resistance was calculated from the difference in weight before and after the test. Three measurements were conducted for each sample. Morphology of fracture surfaces after tensile and abrasion test were observed by scanning electron microscopy (Jeol JSM-6360LV; Tokyo, Japan). The samples were coated with platinum using a coating machine (Jeol JEC-3000 FC; Tokyo, Japan). The accelerating voltage was 20 kV. Dynamic mechanical properties were evaluated by a dynamic mechanical analyzer (TA Instrument DMA-800; New Castle, DE, USA). The samples were measured in tensile mode at frequency of 1Hz. The ramp temperature was from −90 • C to 30 • C and the heating rate was 2 • C/min.
Vulcanization Chracteristic
From the curing curves shown in Figure 1 and the parameters in Table 2, both of values minimum torque (M L ) and maximum torque (M H ) increased with the increase in PA content in the NBR/LF compound. It was found that the scorch time (ts 2 ) decreased when LF and PA were introduced. This could be ascribed to the presence of reactive functional groups in both type of fibers that act as activators and accelerate the rate of the vulcanization, as pointed out in our previous work [27]. This observation was also similar to the report of T.D. Sreeja et al. [20] and Ismail et al. [28], where an increase in filler content resulted in a decrease in scorch time. Additionally, we found that the introduction of LF to NBR led to a decrease in the value of M H as the LF ratio increased. However, in the present study, we observed a different trend, where a higher PA content resulted in a higher M H value. This finding is noteworthy because it suggests that the presence of PA fibers could strongly enhance the total of crosslinking in the composite. Furthermore, the value of ∆M was also higher, indicating a higher crosslink density. The enhancement of crosslink density could be proved through the good interphase adhesion between NBR and PA (Supplementary Materials Figure S4).
Mechanical Properties of Composite
According to previous studies [27], LFs demonstrated efficient reinforcement due to their good interaction with NBR. In our studies, the ground LFs exhibited a fiber bundle structure with weak collagen fibrils that can be easily untwisted (Supplementary Materials Figure S3). As the collagen fibrils untwist, voids easily form, leading to failure through crack propagation [29]. This represents the breaking phenomenon of LF-reinforced NBR composite. In our current research, PAs were introduced as long fibers to overcome the drawback of LF reinforcement. As shown in Table 3, the tensile strength gradually increased when waste PA fiber was introduced. The previous studies showed that the rubber reinforced with short fibers exhibited the remark mechanical anisotropy [30]. S. Soltani et al. [31] reported that the incorporation of virgin nylon fiber into NBR brought greater tensile strength than waste nylon fiber because of the high aspect ratio (L/D). In this study, the average length (L) of waste PA fiber was ~40mm, much higher than the diameter (D) value, which was around 22 µm (Figure 3c). Therefore, the long PA fibers were distributed as tangles after mixing. These tangled fibers included links, knots, and braids that acted as a non-woven fabric network. When the sample was stretched, the load was not only
Mechanical Properties of Composite
According to previous studies [27], LFs demonstrated efficient reinforcement due to their good interaction with NBR. In our studies, the ground LFs exhibited a fiber bundle structure with weak collagen fibrils that can be easily untwisted (Supplementary Materials Figure S3). As the collagen fibrils untwist, voids easily form, leading to failure through crack propagation [29]. This represents the breaking phenomenon of LF-reinforced NBR composite. In our current research, PAs were introduced as long fibers to overcome the drawback of LF reinforcement. As shown in Table 3, the tensile strength gradually increased when waste PA fiber was introduced. The previous studies showed that the rubber reinforced with short fibers exhibited the remark mechanical anisotropy [30]. S. Soltani et al. [31] reported that the incorporation of virgin nylon fiber into NBR brought greater tensile strength than waste nylon fiber because of the high aspect ratio (L/D). In this study, the average length (L) of waste PA fiber was~40mm, much higher than the diameter (D) value, which was around 22 µm (Figure 3c). Therefore, the long PA fibers were distributed as tangles after mixing. These tangled fibers included links, knots, and braids that acted as a non-woven fabric network. When the sample was stretched, the load was not only dissipated by friction through the tangle movement but was also transferred through the non-woven network. Thus, the reinforcing efficiency was improved. The values of elongation at break of NBR/LF/PA with various PA ratios fluctuated around 50%. It was much lower than elongation at break of pure NBR but still higher than the value of LF50. It is well known that an increased amount of filler causes a decrease in deformation by restricting the mobility of chains [32,33]. In this case, elongation at break of samples having PA fibers was higher than sample LF50 because of the slippage phenomenon occurring between fibers and matrix. A high L/D ratio results in PA fibers slipping before failure. These results are also consistent with the findings in other works [34][35][36]. The tear strength of the ternary composite was found to be significantly improved when mixed with waste PA fibers. Similar to the tensile strength, an increase in the amount of PA fibers led to an increase in the tear strength value at all ratios. This is because tear strength is related to crack propagation, and the presence of PA coils tends to hinder the growth of micro-cracks, thereby increasing the tear strength of the composite material. [34,37]. Furthermore, the incorporation of any filler into the matrix induces the energy dissipation which was reported [38,39]. The friction caused by the movement of PA fibers in polymer matrix will generate heat and increase the hysteresis loss [21]. The hysteresis curves in Figure 2 show the loss energy during deformation, the area under curve corresponds to the energy dissipation. Obviously, the value of area under curve in Table 3 shows an increase in hysteresis loss when there was an increase in the PA fibers' proportion. cracks, thereby increasing the tear strength of the composite material. [34,37]. Furthermore, the incorporation of any filler into the matrix induces the energy dissipation which was reported [38,39]. The friction caused by the movement of PA fibers in polymer matrix will generate heat and increase the hysteresis loss [21]. The hysteresis curves in Figure 2 show the loss energy during deformation, the area under curve corresponds to the energy dissipation. Obviously, the value of area under curve in Table 3 shows an increase in hysteresis loss when there was an increase in the PA fibers' proportion. Figure 3 shows the SEM image of the fracture surface of composite and the surface of PA fibers before mixing. As illustrated in Figure 3a Figure 3 shows the SEM image of the fracture surface of composite and the surface of PA fibers before mixing. As illustrated in Figure 3a,b, the fracture surfaces of LF40PA40 and LF25PA25 are presented, respectively. The observation indicates that the sample with a lower PA content (Figure 3a) displays several holes, which serves as evidence of the presence of slippage between fibers and the polymer matrix. In the LF25PA25 sample, there were many broken ends of fiber chains instead of holes (Figure 3b). This was attributed to the limitation of movement space. High PA content reduces the possibility of movement of the material under applied stress. Interestingly, the results also showed that the PA fibers were longitudinally oriented with stress direction which was also found in other research [40]. This is reasonable based on the above explanation about the high value of L/D. Figure 3d also illustrates the very good load-carrying ability of the PA fibers. The surface of waste PA fibers before mixing in Figure 3c was quite smooth, while it changed to rough after the tensile test (Figure 3d). This suggests that the non-woven networking formed by PA fibers acted as an efficient load carrier. Therefore, more energy was transferred to deformed PA fibers before failure [41]. surface of waste PA fibers before mixing in Figure 3c was quite smooth, while it changed to rough after the tensile test (Figure 3d). This suggests that the non-woven networking formed by PA fibers acted as an efficient load carrier. Therefore, more energy was transferred to deformed PA fibers before failure [41]. Figure 4 displays the mass loss of the samples from the abrasive experiment. The mass loss of the NBR/LF composite increased sharply compared to pure NBR. Figures 4a,b show the worn surfaces of NBR and LF50, respectively. Sample LF50 exhibited numerous curled debris, whereas NBR displayed distinctive "waves of detachment", as evident from the observations. Previous research showed that the wear mechanism of rubber involves Figure 4 displays the mass loss of the samples from the abrasive experiment. The mass loss of the NBR/LF composite increased sharply compared to pure NBR. Figure 4a,b show the worn surfaces of NBR and LF50, respectively. Sample LF50 exhibited numerous curled debris, whereas NBR displayed distinctive "waves of detachment", as evident from the observations. Previous research showed that the wear mechanism of rubber involves the breakdown of molecular structure and the rupture of local mechanics [36,37]. In this case, the significant increase in mass loss of the LF50 sample could be explained by the weak cohesion of the leather fiber-bundles under frictional force. When PA fibers were present in the samples, the weight loss decreased with increasing PA content. This can be attributed to the layer of PA in the composite, which creates networks with higher durability, slowing down the abrasion process. Figure 5 illustrates the dynamic mechanical properties of the samples. The storage modulus in the glassy region was slightly higher compared to that of NBR, LF50, and LF25PA25. However, in the rubbery region, the storage modulus was significantly improved in LF50 and LF25PA25 (Figure 5a). The storage modulus G' in the rubbery region of LF25PA25 was found to have increased by 1.23 times compared to LF50, which was reasonable with data of the tensile strength. In addition, the tan δ values of the samples in Figure 5b decreased in the presence of LF and PA. The decrease in the tan δ peak occurred due to the fibers' ability to restrict the mobility of polymeric chains. These results are consistent with another study [38]. Furthermore, the value of tan δ in the rubbery region also indicates that NBR/LF/PA exhibited higher energy dissipation compared to NBR and NBR/LF. Figure 5 illustrates the dynamic mechanical properties of the samples. The storage modulus in the glassy region was slightly higher compared to that of NBR, LF50, and LF25PA25. However, in the rubbery region, the storage modulus was significantly improved in LF50 and LF25PA25 (Figure 5a). The storage modulus G' in the rubbery region of LF25PA25 was found to have increased by 1.23 times compared to LF50, which was reasonable with data of the tensile strength. In addition, the tan δ values of the samples in Figure 5b decreased in the presence of LF and PA. The decrease in the tan δ peak occurred due to the fibers' ability to restrict the mobility of polymeric chains. These results are consistent with another study [38]. Furthermore, the value of tan δ in the rubbery region also indicates that NBR/LF/PA exhibited higher energy dissipation compared to NBR and NBR/LF.
191
In this study, the ternary composites NBR/LF/PA were prepared by simple mixing 192 method with the partial replacement of waste leather fibers by waste PA fibers. The me-193 chanical properties of NBR/LF/PA such as tensile strength and tear strength were signifi-194 cantly enhanced in the presence of PA fibers and were also confirmed by DMA. Moreover, 195 the abrasion resistance of ternary composite was remark improved compared to the bi-196 nary composite NBR/LF. This suggests that the utilization of both waste fiber products 197 presents a truly sustainable solution for enhancing the properties of recycled materials 198 based on nitrile rubber, while also offering the benefit of recycling and cost reduction.
Conclusions
In this study, the ternary composite NBR/LF/PA was prepared using a simple mixing method with the partial replacement of waste leather fibers (LF) by waste polyamide (PA) fibers. The mechanical properties of NBR/LF/PA, such as tensile strength and tear strength, were found to be significantly enhanced in the presence of PA fibers. In particular, the tensile strength of LF25PA25 was found to have increased by about 1.26 times compared to LF50. Similarly, the tear strength was remarkably enhanced, with LF25PA25 exhibiting a value of 84.64 N/mm compared to 72.47 N/mm for LF50. Hysteresis loss data illustrated that the presence of PA also increased the energy loss of the ternary composite, which plays a crucial role in the reinforcement of PA. Dynamic mechanical properties were also examined to confirm the mechanical properties. The breaking phenomena of the composites were discussed through morphology observation to demonstrate the different reinforcing behavior of LF and PA. Moreover, the abrasion resistance of the ternary composite was significantly improved compared to the binary composite NBR/LF. This suggests that the utilization of both waste fiber products presents a truly sustainable solution for enhancing the properties of recycled materials based on nitrile rubber while also offering the benefits of recycling and cost reduction.
Conflicts of Interest:
The authors declare no conflict of interest. | 5,139.4 | 2023-05-25T00:00:00.000 | [
"Materials Science"
] |
A semi-analytical method for characterization of fractal spoof surface plasmon polaritons with a transfer matrix and bloch theory
In this paper, a semi-analytical approach is introduced to analyze a spoof plasmonic structure, with an arbitrary geometry. This approach is based on a combination of techniques that employ a full-wave simulator and the Bloch theorem. By applying periodic boundary conditions, the real and imaginary parts of the equation obtained from the equivalent network have been calculated. To show the accuracy and validity of this proposed approach, a complementary Minkowski fractal SSPP unit cell has been designed and analyzed, and this has been used in a surface plasmonic transmission line. The results of our proposed method have been compared to measured results, and the simulated and measured results showed that the SSPP transmission line possesses high performance, from 1.45 to 5 GHz.
Note that other conventional methods, such as mode matching or the modal method, have also often been used in the analysis of conventional structures, such as periodic solid cubes 12,13 , two-dimensional hole arrays 14 , periodic arrays of slanted grooves 15 , and gradient dielectric-filled metallic grating 16 .Furthermore, it is worth mentioning that commercial solvers (like CST eigenmode solver) are only able to calculate the frequency behaviors of the real parts of complex wave numbers, while the imaginary parts of the waves propagated in the SSPP structures cannot be displayed 17 .
Thus, there is a need for a more comprehensive method to be devised for obtaining the basic characteristics of these structures-such as the real and imaginary parts of the complex wave numbers of spoof plasmonic waves, and their impedance behaviors-regardless of the types and shapes of the structures.Recently, loss the analysis of reconfigurable spoof surface plasmon polaritons has also been done 18 .On the other hand, having a method that is independent of the shape of the unit cell and can also analyze reconfigurable SSPP structures is also useful.Since SSPP is one of the categories of one-dimensional periodic structures, it is useful to investigate different theories for analyzing one-dimensional periodic structures.Undoubtedly, having an analytical method for describing the properties of SSPP structures without any limitation in the shape of the unit cell and even reconfigurable SSPP structures will help the designer in implementing various microwave circuits and being active in this field 19,20 .are examples of combining SSPP structures with active chips that have led to the design and fabrication of active transmission lines (meta-channel).
Over the past few years, several methods have been presented for analyzing the dispersion of periodic structures.In 21 , in addition to calculating the dispersion relation for split-ring-resonator (SRR) unit cells, the authors also provided all the parameters required for systematic modeling, such as excitation and matching.In 22 , by presenting a 4-port circuit model and Bloch theory, the effects of coupling between the unit cells, for enhancing the bandwidth, was investigated and reported.
However, it is very complicated-if not impossible-to provide a circuit model for structures that have higher couplings and mutual effects, in higher-order modes.Therefore, in 17,23 , the dispersions of different structures including glide-symmetric holey EBGs and microstrip lines were analyzed through the combined methods of simulators.In those papers, the desired periodic structures were modeled as parallel and infinite combinations of equivalent unit cells, with each having a specific transfer matrix.Notwithstanding, the transfer matrix of a unit cell can be obtained by using full-wave simulators-from the values of the scattering matrices.Finally, the scattering analysis can be performed by applying the Bloch conditions to the boundaries of the unit cells.
In this paper, we analyze spoof plasmonic structures with arbitrary geometries, by using a combined method of implementing both simulation and Bloch theory 23 .To validate our method, a complex structure with a complementary Minkowski fractal SSPP (CMFSSPP) unit cell, which has been used in a surface plasmonic waveguide, is introduced and analyzed with our proposed approach.It is worth noting that this structure cannot be analyzed with other approaches presented in the literature 8,9,12,24,25 .While using our proposed method, a detailed analysis of frequency behavior in higher frequency bands, and more analytical results than commercial solvers, such as loss and impedance, are presented.Moreover, a field-limited transmission line with the possibility of further miniaturization for various applications, including the integration of microwave circuits, is also provided using this new unit cell.
This paper is arranged as follows: In Sect."Characterization of spoof surface plasmon polariton", the specification of a one-dimensional periodic structure is formulated, using the scattering matrix transformations and the transfer matrix of unit cells, as well as by applying the Bloch conditions.In Sect."Design CMFSSPP unit cell ", for validating the soundness of our proposed method in spoof plasmonic structures, a CMFSSPP unit cell is introduced, and the results are compared to the results of an Eigenmode solver for a CST simulation.A transmission line based on the CMFSSPP unit cell, with high-confinement capabilities, has been designed, and in Sect."Transmission line based on CMFSSPP", the simulation results are compared to the results of a full-wave simulator.The measurement results are also presented to confirm the accuracy of our proposed method.
Characterization of spoof surface plasmon polariton
Inspired by 23 , we assume that we have a one-dimensional periodic structure, with a period, p.The schematic view of each unit cell, which consists of a multipole with m inputs and outputs, is displayed in Fig. 1, The symbols, V n (m) and I In this case, we have the following: The multimodal scattering parameters of the unit cell shown in Fig. 1 were simulated by CST frequencydomain solver , with hexahedral meshing and open boundaries set on the four physical ports at the side faces of the cell, each excited by m modes.The desired periodic structures were modeled as parallel and infinite combinations of equivalent unit cells, with each having a specific transfer matrix.Applying the Bloch condition to all the outputs leads to the following generalized eigenvalue problem, which is formulated through a general multimode transfer matrix: In this equation, K x is the modal wave number (in general it is complex, and includes K x = β − iα ), and e −iK x P denotes the eigenvalue of the transfer matrix of each unit cell.For the general calculation of the transfer matrix, without losing the generality of the problem, it is sufficient to expose the desired structure as waves with different modes, in a full-wave simulator.Note that, in this article, due to the low amplitude of higher modes, only the dominant mode is considered, while its dispersion matrix is calculated for different frequencies, as follows.
The scattering matrix of this network can be written in terms of 4 partitioned m × m submatrices as: where the subscripts i/o represent the input-output ports.
Furthermore, using the following explicit relations, the transfer matrix can be easily calculated from the scattering matrix.In this case, I is the identity matrix, while Zi and Zo represent the characteristic impedance matrix of the input and output ports of the unit cell, respectively.The variable, K x , which satisfies Eq. ( 13), can be calculated at the Brillouin zone edge of the complex space of the propagated wave number, according to the following equation: In Eq. ( 13), is a matrix with 2m rows and columns, and the elements on the main diameter are equal to e −iK x P ; and the other values of the elements are zero.In the end, through simple coding in MATLAB, it is possible to obtain the real and imaginary values of the calculated K x value, and to achieve the Bloch impedance value of the main mode of the structure, according to the following relationship, for different frequencies: (1)
Design CMFSSPP unit cell
Today, fractal structures are used in various applications, for different reasons such as in miniaturization and multiband devices 26 , boosting frequency bandwidths 27 , increasing isolation in array antennas 28 , and so on [29][30][31] .For this purpose, various fractal geometries have been presented, such as in Contour 32 , Minkowski 33 , and Koch 34 .
In order to validate the mentioned method 23 , in one-dimensional SSPP structures, a complementary Minkowski fractal SSPP (CMFSSPP) unit cell is presented, here.Since this unit cell has a complex structure, it is difficult to use the conventional method to provide an equivalent circuit for scattering analysis.Moreover, as can be seen, this structure has a greater field confinement, and a lower wave propagation speed than similar SSPPs 35 ; where the dispersion results of the proposed unit cell are presented in Fig. 2.
As can be seen in the figure, the proposed structure is presented separately, and the final structure has been obtained in the fifth stage.It consists of a slot line and two complementary Minkowski fractal structures, which can be managed by the two parameters, Scale 1 and Scale 2 (see Fig. 3 for more clarification).In Fig. 2, by comparing the results of the configurations 1 and 5, at the same plasma frequencies, there is a 31% size reduction of the proposed unit cell, compared to a conventional unit cell.Moreover, the results of the dispersion, loss, and impedance of three unit-cells with the values specified in Fig. 3 As mentioned previously, it is worth noting that, in contrast to our proposed approach, commercial software can only calculate the dispersion diagram of a unit cell (as shown in Fig. 4), and it cannot calculate the other characterization values, such as the loss or impedance behaviors of the SPP structures (as shown in Figs. 5 and 6).However, it should be noted that for some specific unit cells, transmission line-based methods have been proposed to investigate the impedance behavior of SSPP structures 36 .
Transmission line based on CMFSSPP
In this section, to validate the results of the proposed unit cell, as well as the application of the results extracted from our presented method, in the design of microwave devices based on SSPP, a transmission line based on SSPP has been designed and fabricated.This transmission line was designed consisting of two microstrip-to-slotline transitions, and these were designed by a slot line on which the SSPP unit cells were placed with a period of 8 mm.The schematic of the structure is shown in Fig. 7.
As shown in Fig. 6, the impedance of the unit cells, at different scales, could be considered with an average of around 120 ohms, and within a frequency range of 1.5-5 GHz.Bearing this in mind, while designing the transmission line, it is tried to use two 50-Ω coaxial line transitions to 120-Ω slot lines.These transitions consisted of a microstrip line on the substrate, and a ground plane with a slot line of a certain width, on the other side.These two lines were also perpendicular to each other, and both the microstrip and slot lines had radial stubs that helped to increase the bandwidths of the impedance matching.Moreover, 20 SSPP unit cells, with periods of 8 mm, were placed in the spaces between the two transitions, to achieve maximum impedance and momentum matching.The unit cells were also arranged in descending orders of plasma frequencies, and their detailed view and dispersion results are shown in Figs.7a and 8, respectively.The substrate used was conventional FR4, with a thickness of 1.6 mm, and a loss tangent of 0.025; and the other parameters and sizes are presented in the caption of Fig. 7.
Next, in order to validate our proposed analysis method, as well as present a transmission line based on CMF-SSPP, the corresponding structure was fabricated, as shown in Fig. 9. Figure 10 shows the measurement results that were obtained with a network analyzer, and these were in good agreement with our simulation results.Table 1 also shows a comparison of our presented analysis method, and the mentioned references.The comparison of the new unit cell, as presented with the conventional example depicted in Fig. 2, together with its associated dispersion diagram and plasma frequency, indicated that there was a 31% reduction in the dimensions of the presented unit cell, and this has significant implications for their use in the miniaturization of microwave devices.
Conclusion
In this paper, a semi-analytical method for the characterization of fractal spoof surface plasmon polaritons, with a transfer matrix and Bloch theory, has been presented.This method can be used for arbitrary periodic structures, such as complex SSPPs, yet it provides more analytical results than commercial solvers like CST eigenmode solver.
In order to validate our presented method, and assess the performance of the new unit cell, a transmission line based on this idea has been designed and fabricated, and a comparison of the simulation and measurement results has verified its impressive performance within the frequency range of 1.45-5 GHz.
n
, in the figure, represent the input voltage and input current of the mth mode; and V (m) n+1 and I (m) n+1 are the output voltage and current of the mth mode, respectively.This periodic structure can be modeled through a cascade connection of each of the multipoles, which have a separate transfer matrix, T (ABCD).
Figure 1 .
Figure 1.Equivalent network of the unit cell characterized by a multimodal transfer matrix.
Figure 2 .
Figure 2. Comparison between the dispersion results for different steps of the unit cells' design.The parameters are set as a 1 , a 2 , b 1 , b 2 , b 3 , Scale 1, Scale 2 of 7, 5, 12, 5, 0.77, 0.3125 and 0.3125 mm respectively.The dielectric substrate, with a thickness of 1.6 mm, is FR-4, with a relative permittivity of 4.3, and a loss tangent of 0.025.
Figure 3 .
Figure 3.The Scale 1 and Scale 2 parameters of the three unit-cells.
Figure 4 .
Figure 4.The dispersion results of the SSPP modes for the three unit-cells presented in Fig. 3.The results of our proposed method are shown in comparison, here, to the results of a CST Eigenmode solver.
Figure 5 .
Figure 5.The effects of different values of Scale 1 and Scale 2, on the attenuation results.
Figure 6 .
Figure 6.The effects of different values of Scale 1 and Scale 2, on the impedance results.
Figure 10 .
Figure 10.A comparison of the measurement and simulated S-parameters of the proposed SSPP transmission line. | 3,309.6 | 2023-09-12T00:00:00.000 | [
"Physics"
] |
Varieties of Justification—How (Not) to Solve the Problem of Induction
The debate about induction is a mess, perhaps only surpassed in its messiness by the debate about free will. There are almost as many different proposed solutions to the problem of induction as there are different formulations of the problem itself. While there seems to be a rather broad consensus that the problem is insoluble, the standards for what would count as a justification, were one available, vary wildly. That is not to say that the different authors who formulated the problem thought that a solution was possible, but even in a sceptic argument designed to show that a solution is unobtainable, we can see what a solution would need to entail. Obviously, what could count as a justification of induction depends on how one chooses to formulate the problem. In this paper, I will differentiate between three different standards for a possible justification. The three standards for a justification of induction are (1) to demonstrate how valid inductive inferences can be truth-preserving, (2) to demonstrate how induction can be truth-conducive, and (3) to show that inductive practice is rational. I will argue that the first two are unavailable, whereas the latter is at least in principle obtainable, although I will not argue for a particular proposal here. The very first thing we need to make clear in this paper before we can even start the investigation into the varieties of justification is to make explicit what sort of inferences are affected by the classical problem of induction. Following this, I will give a brief survey of prominent formulations of the problem. We will then turn to the three abovementioned standards for a justification of induction. I will argue that the first two standards are impossible to meet, because they inevitably lead into some version of Hume’s classic dilemma. This is the case because they all require the condition that nature will remain regular, which can neither be known a priori, nor, without entering a vicious circle, by induction. The only standard that is at least possible to satisfy
Introduction
The debate about induction is a mess, perhaps only surpassed in its messiness by the debate about free will. There are almost as many different proposed solutions to the problem of induction as there are different formulations of the problem itself. While there seems to be a rather broad consensus that the problem is insoluble, the standards for what would count as a justification, were one available, vary wildly. That is not to say that the different authors who formulated the problem thought that a solution was possible, but even in a sceptic argument designed to show that a solution is unobtainable, we can see what a solution would need to entail. Obviously, what could count as a justification of induction depends on how one chooses to formulate the problem. In this paper, I will differentiate between three different standards for a possible justification. The three standards for a justification of induction are (1) to demonstrate how valid inductive inferences can be truth-preserving, (2) to demonstrate how induction can be truth-conducive, and (3) to show that inductive practice is rational. I will argue that the first two are unavailable, whereas the latter is at least in principle obtainable, although I will not argue for a particular proposal here.
The very first thing we need to make clear in this paper before we can even start the investigation into the varieties of justification is to make explicit what sort of inferences are affected by the classical problem of induction. Following this, I will give a brief survey of prominent formulations of the problem. We will then turn to the three abovementioned standards for a justification of induction. I will argue that the first two standards are impossible to meet, because they inevitably lead into some version of Hume's classic dilemma. This is the case because they all require the condition that nature will remain regular, which can neither be known a priori, nor, without entering a vicious circle, by induction. The only standard that is at least possible to satisfy is to demonstrate that inductive practice is rational. Although rarely put forward as an attempt to solve the problem of induction, there exist a number of arguments for the rationality of conditionalisation in formal epistemology. If one accepts that inductive reasoning is a form of conditionalisation, for which I will briefly argue, then any argument for the rationality of conditionalisation is an argument for the rationality of inductive practice. The two distinct arguments I will portray here are (1) that only conditionalisers are immune to diachronic Dutch books, or Dutch strategies, and (2) that conditionalisation maximises expected epistemic utility. I will argue that these arguments do at least not necessarily fall prey to Hume's dilemma. Accordingly, this is the only way forward for a possible justification of induction.
Inductive Inferences
Induction is surprisingly hard to define, and as we will see, the classic characterisations of induction as an inference from particular premises to a general conclusion, or, even more specifically, from particular observations to a general law, are much too narrow. In the following, I will treat inductive inferences as a subset of ampliative inferences. An inference is ampliative if the content of the conclusion goes beyond the content of the premises. Whereas defining induction as an inference from particularity to generality yields a too narrow notion, simply equating inductive and ampliative inferences yields a definition that is too wide. There are more forms of ampliative inferences other than induction that are in need of justification. The most prominent form of inferences of this kind is the inference to the best explanation (IBE). Like induction, IBE is ampliative because the content of the conclusion is not contained in the content of the premises. 1 In the following, we will focus only on enumerative induction. Inferences to the best explanation (IBE), while they are clearly ampliative inferences, are structurally different from inductive inferences. In an IBE, we infer the conclusion via a bridge principle that claims that the premise is the best explanation for the conclusion. In order to demonstrate the difference, let us take a look at the master argument for scientific realism, which can be interpreted as an IBE: P 1 : Science is successful. P 2 : The success of science would be a miracle if our best scientific theories were not at least approximately true. P 3 : That our current best scientific theories are at least approximately true is the best explanation for the success of science. C: Our current best scientific theories are at least approximately true.
Here, the conclusion is inferred via an implicit bridge principle that the explanatory virtue of the explanans is truth-conducive. In classical induction, the explanatory virtue of the conclusion plays no role. In its simplest form, induction is merely an inference that a certain characteristic of a sample will be retained in different samples of the same population or in the population in general. The most obvious form of this inference is classical enumeration, which both Hume's and Popper's famous expositions of the problem of induction were concerned with. Enumerative induction is traditionally taken to be an inference from a number of particular instances to a generalisation. It is an inference of the pattern: P 1 : a is an F and a G. P 2 : b is an F and a G. P i : i is an F and a G, etc. C: all F s are Gs.
However, enumerative induction from particularity to generality is not the only form of inductive inference. It is for example possible to inductively infer a particular conclusion from a general premise, as long as both concern the distribution of the same trait in subsets of the same population, such as the following: P 1 : All observed F s have been Gs. C: The next F we will observe will also be a G.
Apart from inferences from inferences about the properties of particulars, inferences about the proportional distribution of certain traits in a population also seem to be structurally equivalent to the inductive inferences we discussed so far, and should also be treated as a subclass of inductive inferences: P 1 Of n observed F s, m/n have been Gs. C m/n of all F s are Gs.
What unites all of these different types of inferences is that they are ampliative inferences from the nature of a subset of a population to the nature of a different subset of the same population, or the population in general. In these inferences, in contrast to IBE, where we are supposed to infer the conclusion because of its explanatory virtue, we infer the conclusion merely on the basis of similarity between what we perceive a large enough sample of a population. Hence, I will treat all these inferences as instances of enumerative induction, even if they are inferences from general premises to a particular conclusion.
The problem of finding a justification for the ampliative step in inductive inferences is what constitutes the old problem of induction. These inferences are ampliative since the content of the conclusion goes beyond the content of the premises. In these inferences, we are referring to different, or more, instances in the conclusion than we are talking about in the premises (all ravens, as opposed to all observed ravens e.g.); hence, we cannot rely on the inference to be truth-preserving. Unlike in deductive inferences, 2 the conclusion does not need to be true if the premises are.
But if induction is not truth-preserving, why are justified to use it? Let us now turn to the archetypical formulations of the old riddle and their accompanying standards for a possible solution.
The Old Riddle(s)
David Hume gave the maybe still best-known exposition of the problem of induction in the Treatise of Human Nature. 3 Famously, Hume formulated the old riddle as a dilemma, which serves as a template for a lot of recent formulations of the problem. Hume argues that if we wanted to demonstrate that inductive inferences were, in his words, "a product of reason", that is if we wanted to demonstrate the validity of inductive inferences, we needed to know that nature is regular, or uniform. Only through this could we be justified to infer the conclusion of an enumerative argument. Hume gives no clear account of what exactly the extra assumption that nature is uniform is supposed to do for the respective inductive inferences. Most straightforwardly, we could add it as an extra premise to any enumerative inference. By adding this further premise, the enumerative inference is transformed into a valid deductive argument. We will discuss below why this is a problematic move. A classical enumerative inference would then look something like this in Hume's analysis: P 1 All observed ferromagnets have so far attracted iron. P 2 Nature is uniform. P 3 If nature is uniform, then ferromagnets do not change their behaviour. C All future and unobserved ferromagnets attract iron.
However, that nature is uniform is a premise, which we again would need to justify. According to Hume, there are two options available for this: either we formulate an a priori, deductive (or, in Hume's words, "demonstrative" inference with the conclusion that nature is uniform, or we formulate an a posteriori, inductive (or "probabilistic") inference to that effect. The latter strategy would amount in a circularity, because we needed an inductive inference to justify the premise that would in turn justify inductive inferences in general. The first horn of the dilemma is slightly harder to analyse. That the uniformity of nature is arrived at by a deductive inference does not seem problematic at first glance. The clue is that Hume holds that this would make the uniformity of nature a necessary fact. He seems to hold that if we could give a deductive justification for the uniformity of nature, this would not be an empirical inference, one whose premises are arrived at by observation. Rather, it would make the uniformity of nature an a priori truth. But that nature is uniform seems like an empirical fact, one that we cannot justify a priori. So the problem that the first horn of Hume's dilemma expresses is not that the uniformity of nature could be an a priori fact, but that a priori knowledge about whether nature is uniform or not is unavailable. We simply cannot know whether nature is uniform without empirically investigating what nature is like.
The other locus classicus of the debate about the old riddle is Popper's famous exposition in the beginning of The Logic of Scientific Discovery. 4 Since Popper's way of stating the problem is relevant for my argument, let's take a look at the exact formulation: The problem of induction may also be formulated as the question of the validity or the truth of universal statements which are based on experience [...]. [...] Accordingly, people who say of a universal statement that we know its truth from experience usually mean that the truth of this universal statement can somehow be reduced to the truth of singular ones, and that these singular ones are known by experience to be true; which amounts to saying that the universal statement is based on inductive inference. 5 In the following, Popper, like Hume, formulates the problem the validity of inductive inferences as a dilemma. What we needed to justify induction is to find a "principle of induction", that is a principle that would demonstrate the logical validity of inductive inferences. The goal for a justification seems to be here to make inductive inferences more like deductive inferences such a way that "[. . . ] the truth of [a] universal statement can somehow be reduced to the truth of singular ones [. . . ]". This could be achieved by a set of logical norms that would demonstrate the logical validity of inductive inferences. Popper claims that the principle of induction that is supposed to demonstrate the validity of inductive inferences cannot be a "tautology" like the rules of classical two-valued deductive logic. If the principle of induction were tautological, there would not be a problem of induction. The reason Popper gives why we cannot arrive at this principle of induction is a version of Hume's dilemma: either we knew a priori that induction were justifiable or we would have to obtain this principle via experience. Again, a priori knowledge of the validity of inductive inferences is impossible for the same reasons as above, and an empirical justification would involve an inductive inference from past success of induction to its future success and would hence be circular. 6 Both Popper's and Hume's dilemma sought to demonstrate that inductive inferences, in order to be justifiable, would be furnished with an additional premise, which is unavailable because of the respective dilemma. Remember that according to Hume, a justifiable inductive inference would take approximately this form: P 1 : All observed ferromagnets have so far attracted iron. P 2 : Nature is uniform. P 3 : If nature is uniform, then ferromagnets do not change their behaviour. C: All future and unobserved ferromagnets attract iron. 4 Popper (2005), 4-5 5 Karl (1966), 4. 6 See Popper (2005, 4-5 For Popper, a justifiable inductive inference would rather have the following form: P 1 : All observed ferromagnets have so far attracted iron. P 2 : (Principle of induction:) From past or observed instances infer to future or unobserved instances. C: All future and unobserved ferromagnets attract iron.
In both cases, premise 2) is unavailable because of the respective dilemma. And crucially, both inferences are being turned into deductive ones by the addition of an otherwise suppressed premise (premise 2) in both cases). It seems that according to Popper's and Hume's way of framing the problem of induction, the problem could be solved if we eliminated inductive reasoning and reduced it to truth-preserving reasoning. But before we turn to the general issue with that way of framing the question, let us briefly take a look at the content of these additional premises.
The Uniformity of Nature
Popper never gives an account for what a principle of induction would actually look like, but supposedly it looks something like I sketched above: (PI) From past or observed instances infer to future or unobserved instances. This principle, like the suppressed premise outlined in Hume's exposition of the problem, relies on the uniformity of nature. If nature were irregular and could behave entirely different tomorrow than how it behaved so far, (PI) would be a poor guide to reasoning. We would only be justified to infer the unobserved from the observed (or the future from the past and present) if we knew that the observed were any guide to what the unobserved looked like. That would not be the case if nature were irregular. So both Popper's and Hume's account for what would be needed to justify induction, if that were possible, rely on the uniformity of nature.
At this point, we should briefly clarify what the uniformity, or regularity, of nature is supposed to be. Clearly, uniformity is not to be confused with determinism. The success of a justification of induction according to this standard is not dependent on us being able to affirm in a non-circular manner that the world is deterministic. A world could be indeterministic if, e.g., there were stochastic laws, but it would still be uniform in the sense that these stochastic regularities remained stable. To take an overused but helpful example, radioactive decay is an indeterministic process. Neither can we predict for every specific radioactive particle when it is going to decay, nor is there, if radioactive decay is indeed ontologically indeterministic, a hidden and undetectable factor that determines when any given radioactive particle is going to decay. However, the half-life of a radioactive isotope is stable. The half-life of uranium-238, for example, is 4.468 billion years. That means that within 4.468 billion years, approximately half of any sample of uranium-238 will have decayed. But although radioactive decay is an indeterministic process, a world (like ours) which is indeterministic in that way can still be regular, or uniform. The half-life of uranium-238 does not change: it is not 4.468 billion years for the first 10 billion years, and then switches to 4 years for the next 10 billion years, and so on. That is to say that the indeterministic regularities stay in place in a uniform world. 7 Unfortunately, however, that a world is uniform does not entail that the observed regularities remain stable. They could change wildly, which could be the consequence of undetected, and undetectable, fundamental regularities, which could remain stable. Such a world would appear irregular, but it would not be. So even if we knew that nature is uniform, and even if knew that a priori, that still would not entail that the observed regularities would remain stable. Their changing, if it is the consequence of more fundamental, but (in principle) undetectable regularities, would still not imply that nature is irregular. But then, the regularity of nature would not help us if we wanted to make any inferences about our observed regularities truth-preserving. Hence, even in a regular world, our inductive inferences would be defeasible. We would even need a second suppressed premise apart from that nature is regular, namely that we are able to tell which apparent irregularities are consequences of more fundamental, but stable regularities.
The Issue with the Classical Formulations of the Old Riddle
As we have seen in the reconstruction of what a justifiable inference according to Hume and Popper would look like, the justifiable inference would in both accounts cease to be inductive. This suggests that Hume and Popper both hold that inductive inferences could only be justified if they could be treated as enthymemes of deductive inferences. Enthymemes are inferences with a suppressed premise. 8 By turning inductive inferences into enthymemes of deductive inferences, they would gain all the properties inductive inferences essentially lack, which is why they were in need of justification in the first place: they would cease to be ampliative and become truth-preserving, that is we could, like in deductive inferences, be certain that the conclusion is true if the premises are true. That is what Popper presumably meant when he said that in order to solve the problem, we would need to show that "the truth of [a] universal statement can somehow be reduced to the truth of singular ones, and that these singular ones are known by experience to be true; [. . . ]" 9 However, since inductive inferences, if we treat them as enthymematic inferences, would be turned into deductive inferences, we would not justify induction, but eliminate inductive reasoning. In a way, the problem of induction in this view is not that we fail to justify inductive inferences, but that we fail to turn them into deductions, because we cannot justify the suppressed premise of the enthymematic inferences. These classical formulations of the problem entail that a justification is generally 7 At this stage, it should be noted that the New Riddle rears its head. Whether the world is regular is not only dependent on the world, but also on our description of the world. While a world in which emeralds change their colour might appear irregular in our language, it appears regular if we use a grue/bleen language. For the remainder of the paper, I will assume that the reference frame remains fixed. I will explore this issue in a different paper. 8 Laurence BonJour also reconstructs both Hume's and Popper's accounts of the problem as the problem to justify the suppressed premise in an enthymematic argument (see, BonJour (1998), 190). 9 Popper (2005), 4 unobtainable, just as Hume diagnosed: any attempt to justify induction this way implies that we can fill in the extra suppressed premises of an enthymematic inference. This requires exactly the knowledge we lack, which is the very reason we have to infer inductively at all: it requires complete knowledge of the phenomena we are reasoning about. To establish that nature is uniform, we would either enter a regress or would have to claim that the uniformity of nature is a priori knowable. Any attempt to make induction truth-preserving means to argue that justified induction is an enthymeme of a truth-preserving inference. There is no way to do this without falling into some version of Hume's dilemma.
This diagnosis is entirely unsurprising, given that Hume's and Popper's view was that the problem is insoluble. One could ask, however, whether their demands for what a justification should entail were not maybe unfair. To treat the problem of induction in this way would also entail that induction, as long as it is a justifiable sort of inference, is not sui generis: justifiable induction would be deduction in disguise. Granted, Hume and Popper held that the problem was insoluble, so there would be sui generis induction, but only because the inferences are unjustifiable and we fail to reduce these inferences to deductive ones. The problem of induction would turn out to be that there even are inductive inferences. Peter Strawson has come to a similar conclusion: the classical formulation of the problem of induction holds inductive inferences to a standard they cannot fulfil: that of deductive inferences. If we wanted to honour the ampliative nature of inductive inferences, we should instead look for a way to justify induction in a way that pays reference to its nature as genuinely ampliative and non-truth-preserving inferences. 10 One way to find a justification that would pay respect to the sui generis nature of inductive reasoning would be to demand of a justification that it does not have to demonstrate how induction could be truth-preserving but truth-conducive. A possible justification would then take the form of merely showing that in an inductive inference, the truth of the premises at least raises the probability that the conclusion is true. In the following, I will discuss proposals to that effect and ultimately reject them.
Alternative Justification: Truth-Conduciveness and Reichenbachs's Pragmatic Vindication
To require a justifiable inductive inference to be truth-preserving seems to put an unreasonably strong demand on inductive inferences and completely neglects their ampliative nature. After all, induction is by its very nature defeasible, so it is no surprise that any attempt to reduce it to an enthymematic deduction is bound to fail. However, it is less of an outrageous demand to want to know how, even in a defeasible inference, the premises can confer some degree of truth or likelihood of truth on the conclusion. It is this less demanding standard that has often been employed by people who are less sceptical about the prospect of justifying induction. As we shall see, it is the standard that stands behind the classical probabilistic attempts to solve the puzzle, the Stove-Williams account of induction, and Laurence BonJour's view. 11 The general idea is that a justification should not demonstrate that inductive inferences preserve truth like an enthymematic deductive inference, but that the premises confer some degree, or likelihood, of truth upon the conclusion. As a point of departure, let us take BonJour's account of what an epistemic justification of induction would entail: If we understand epistemic justification [. . . ] as justification that increases to some degree the likelihood that the justified belief is true and that is thus conducive to finding the truth, the issue is whether inductive reasoning confers any degree of epistemic justification, however small, on its conclusion. 12 This is consistent with what we find in the various probabilistic attempts to justify induction. In this paper, I will not differentiate en détail between the various probabilistic accounts. What these accounts all have in common is that they try to demonstrate how the accumulation of evidence make it increasingly probable that the conclusion is true. Take, e.g. the Stove-Williams account. David Stove and Donald Williams both independently hold the view that the law of large numbers could help to justify the inference that a large sample will resemble the population. According to them, we are justified to infer that based on the law of large numbers, the proportion of F s that are Gs in large sample of a finite population will resemble the proportion of F s that are Gs in the population. If we randomly draw a large sample of a population, it is very likely that the sample will be representative. If we drew any logically possible large sample from a population, the majority of these samples will show a distribution of the trait we are interested in that falls within a very small margin around the original distribution in the sample. 13 It is thus probable that the conclusion of an inductive inference from the composition of a large sample to the composition of the population is true. 14 While these accounts do not demand that induction can be made truth-preserving by being reduced to enthymematic deductive inferences, they unfortunately all, and necessarily so, share one problem with the abovementioned sceptical accounts. Necessarily, any attempt that even merely claims that the truth of the conclusion of an inductive inference becomes more likely as more and more evidence is accumulated, relies upon the uniformity of nature. That a sample, even a large one, and even in the long run, resembles the population (of which some members exist in the future or are so far unobserved) cannot be assumed with certainty if it is possible that nature can completely change tomorrow-if it is possible that tomorrow, negatively charged particles attract each other, cats bark, and dogs meow. To put it in the terms above, that a large sample will resemble the population is only apparent for a population that is not going to change. Let's say we want to infer whether all future ravens will 11 See, e.g. Reichenbach (1935), Stove (1986), Williams (1947), andBonJour (1998) 12 BonJour (1998), 189. 13 See Smart (2013, 327. 14 See Williams (1947) and Stove (1986). continue to be black, as were the past and present (observed) ones. The population in question is the entirety of ravens-past, present, and future. That the future ravens will resemble our sample is only given if we know that in the future, the regularities will not change. If that were the case, our sample of ravens that we have observed before would not have been random. It would have been a temporally ordered and restricted sample from a section of time when ravens were not white, so the sample would have been skewed. And there is no way to tell that our sample is not skewed, unless we knew that nature is uniform. But we cannot, given Hume's dilemma.
So, these attempts might be more realistic in the sense that they do not hold that in order to be justified, induction would have to be eliminated, which would deny that induction is a sui generis type of inference. Yet, such a more realistic justification can still in principle never be achieved. That nature is uniform, and that the observed regularities will remain stable as a consequence of this uniformity of nature, would still have to be independently inferred: either deductively or inductively. And that exactly is Hume's dilemma. So even granted that we could solve all the problems associated with these accounts, such as the problem of how to demonstrate that a specific inductive inference rule is in any way better than, e.g. counter-induction, Hume's dilemma still bites.
So without going much deeper into the various extant probabilistic accounts, we can reject them altogether. So is all lost then? If even the more modest demand that the premises merely raise the probability of the truth of the conclusion is impossible to be met, what hope can we have? If it is at all possible to solve the problem of induction, what we would need for this is an account that did not presuppose the regularity of nature. In the next section, we will see that the various accounts that aim to demonstrate the rationality of specific formal frameworks for inductive inferences often do not require that nature is uniform. Maybe we can derive a justification of inductive practice from this, a justification of why we indeed use induction, even if that does not amount to a demonstration that the conclusion of an inductive inference is true or likely to be true. Before we move on to the next section, where we take a closer look at such attempts to justify induction. Let us briefly discuss Reichenbach's pragmatic vindication of induction, which can be understood as a sort of intermediary step towards such an account. As we will see, Reichenbach's account comes with one important restriction of the views discussed in the next section that do not have and which poses a severe problem for his account.
At first sight, Reichenbach's pragmatic vindication of induction and Wesley Salmon's reformulation of it seem to fit the same mould as the law of large numbers views discussed above. 15 Salmon claims that induction can never be shown to be truth-preserving, but what we can assert is that inductive inferences are supported by the evidence stated in the premises. That is to say that, any scepticism about the possibility to solve Hume's dilemma at all aside, there is a relation between the truth of the premises and the truth of the conclusion. Crucially, if any method is successful in extending our knowledge in this view, induction is. Reichenbach's and Salmon's principles of induction are quite close to the Stove-Williams view in that they explicitly 15 See Reichenbach (1935) and Salmon (1974). refer to the importance of the long run. Salmon, for instance, proposes the following inductive rule: [G]iven m/n of observed A are B, [. . . ] infer that the 'long run' frequency of B among A is m/n. 16 The crucial difference between Reichenbach's original account and the law of large numbers views discussed above is that Reichenbach holds that while the uniformity of nature cannot be simply assumed to justify inductive reasoning, he holds that induction is the only sort of ampliative reasoning that could be successful if nature were uniform, and that if nature is not kind enough to be uniform, then no mode of reasoning can be successful. It would hence be irrational if we would not engage in inductive reasoning. 17 The pragmatic vindication thus lies not within a demonstration that induction actually is in practice truth-conducive, but that it is the only mode of reasoning that even has a hope of being truth-conducive if nature is kind enough to us. It would hence be irrational for not to not use inductive reasoning.
I will not discuss Reichenbach's account in greater detail here. The view is notable for holding a middle ground between attempts at justification which allude to the truth-conduciveness of inductive reasoning, which he seems to presume in case that nature is uniform, and attempts that allude to the rationality of inductive practice. There even exists an interesting analysis of Reichenbach's vindication on the grounds of modern decision theory by Michael J. Shaffer. 18 The reason I will not discuss Reichenbach's account any further is that it still presupposes that induction will be truth-conducive given that nature is uniform in order to argue why we should reason inductively. I have argued above that this assumption is false: we cannot expect induction to be truth-conducive, even if nature turns out to be uniform. But without that condition, Reichenbach's vindication is on shaky grounds. However, Reichenbachs's account is notable for shifting the focus towards the rationality of induction, away from the question whether induction is actually successful. So let us now turn to justifications of the rationality of inductive practice that do not make any assumption about the truth-conduciveness of induction, uniformity of nature notwithstanding.
Rationality
From now on, we will depart from the traditional debate about induction and focus on the justification of formal belief updating norms. There exist a number of attempts to justify conditionalisation that are designed to show that conditionalisation is rational under certain rationality constraints. The arguments to justify conditionalisation are usually derived from more established arguments for probabilism, i.e. the view that we should treat our beliefs in a way that fulfils the axioms of probability theory. These arguments are of interest to our discussion here since if one accepts that inductive 16 Salmon (1974), 50. 17 See, e.g., Reichenbach (1935), chapter 11. 18 See Shaffer (2014). reasoning is a species of conditionalisation, then any argument for the rationality of conditionalisation is an argument for the rationality of inductive practice. In the following, I will treat conditionalisation as any formal probabilistic framework to update one's beliefs in the light of evidence. Inductive inferences can thus be construed as belief updating procedures in which the available evidence represents a sample of a population we are updating our beliefs about. Even for the strictest cases of enumerative induction, conditionalisation can be seen as a way to formalise each iterative step of individual observation and the resulting adjustment of our beliefs. Since most of the arguments I discuss in this section are formulated in a Bayesian framework, I will not discuss alternative frameworks for belief updating such as Ranking Theory, e.g. although I have no reason to doubt that the arguments can be adapted to Ranking Theory. 19 Take the following example to illustrate why I take conditionalisation to be suitable to formally represent enumerative induction as sketched above. Let's say it is the late nineties, and I have never consciously listened to a song by Radiohead (obviously, Creep doesn't count). Before I listen to one of their songs for the first time, I have some prior belief about whether I like their music, which we will express in terms of subjective probability P r 1 (A). At this time, my subjective probability that I will like any music that is new to me is probably quite low, because I am at this time a self-important late teen who thinks they've figured out music, and as such I am unimpressed by default. I am then presented with evidence E in the shape of one of the songs on Radiohead's OK Computer. After I have received such evidence, my new, updated belief that I like Radiohead P r 2 (A) should now be equal the initial conditional belief that I will like their music, given that I am presented with positive evidence of their brilliance P r 1 (A|E). Since E consisted in a song that I liked and my initial credence that I would like new music was quite low, my new, updated credence that I like Radiohead P r 2 (A) is now higher than P r 1 (A).
We can immediately see how such a way of belief updating is compatible with how we sketched induction above. The inference is defeasible in the sense that it is always possible to gather new evidence that could lead me to reevaluate my credence regarding whether I like Radiohead or not. The inference is also ampliative in the sense that I extrapolate from a sample of Radiohead's oeuvre in order to infer whether I like their catalogue. We should be clear here that not necessarily all instances of conditionalisation can be reconstructed as inductive reasoning, but that inductive reasoning can be reconstructed as an a subset of all instances of conditionalisation. If I want to check whether there really is a bottle of milk in my fridge, open the fridge, and conditionalise upon my prior credence about the contents of my fridge, then clearly this is a case of conditionalisation, but not of (enumerative) induction.
Given that we can understand inductive inferences as a subset of all instances of conditionalisation, any justification of that particular updating rule is a justification of inductive practice. Let us now take a look at three different strategies of demonstrating the rationality of conditionalisation, and see whether they can avoid Hume's dilemma, and whether this might lead the way to a justification of inductive practice that is not concerned with whether induction is truth-conducive, but whether it is rational.
Dutch Strategies
Dutch book arguments have traditionally been applied to argue for the rationality of probabilism. The basic idea is that if your belief system adheres to the axioms of probability theory, you are immune to accepting bets that guarantee a sure loss. In probability theory, the probabilities of all the possible outcomes of a certain situation should add up to 1. And if you assign subjective probabilities to possible outcomes, these too should add up to 1. The Dutch book argument now matches these subjective probabilities with odds of bets a bookie might sell you. If you have a certain degree of belief that p, say 0.8, you will accept to buy a bet for 80 cents that pays 1 dollar in case p comes about, and you should pay no more than 20 cents on a bet that pays 1 dollar in case that p does not come about. A person with incoherent belief states might now be susceptible to accept a set of bets that will result in a sure loss for the agent. If you have a degree of belief of .6 that p, and again .6 that not-p, a bookie can sell you a bet for 60 cent each that pays 1 dollar if p comes about, or 1 dollar if not-p comes about. The two bets together cost 1.20 dollars, but you can only receive 1 dollar, whatever the outcome. This set of bets is called a Dutch book. Basically, we can demonstrate that an agent has incoherent belief states if it is possible to construct a Dutch book against them. It has been argued that adhering to the axioms of probability theory makes you immune to being Dutch-booked, which is why it is rational to adhere to them, given you want to maximise utility.
Since we are dealing with a arguments for the rationality of inductive reasoning, we should take a look at rationality arguments not for probabilism, but for probabilistic belief updating, for conditionalisation. In order to construct a Dutch book argument for conditionalisation, the argument needs to be made diachronic, since conditionalisation concerns not only how coherent your credences are at a certain point in time, but how you update your credences after you have gathered relevant evidence. An iterated set of bets over time that deliver a sure loss for the agent is called a Dutch strategy. A person adhering to conditionalisation should adjust their credences in the following way. Say at a certain time t 1 you have a degree of belief P r 1 (A), and a degree of belief how likely A is given a certain event E occurs that we would take as evidence for A: P r 1 (A|E). Now after E has either come about or not at t 2 , your new credence that A should be equal the conditional subjective probability that A given E you had at t 1 : P r 2 (A)=P r 1 (A|E).
It has been demonstrated that if an agent violated this simple updating rule, but rather update their credences in a way where the updated subjective probability is, e.g. P r 2 (A) < P r 1 (A|E), they are susceptible to accepting a Dutch strategy. A bookie could sell them the following set of bets, which are all fair in the eyes of the agent: 20 Bet 1) for P r 1 (A ∧ E) Since the agent we are dealing with here violates conditionalisation in that they set the probability of P r 2 (A) lower than, instead of equal P r 1 (A|E), the agent will in case that E occurs suffer a net loss of again [P r 1 (A|E) -P r 2 (A)] · P r 1 (E). Hence, no matter if E or ¬E is the case, the agent will lose [P r 1 (A|E) -P r 2 (A)] · P r 1 (E). An agent who did not violate conditionalisation would not be susceptible to a Dutch strategy. Hence, it is rational to adhere to conditionalisation.
Whether diachronic Dutch book arguments are a good strategy to justify probabilism is controversial, especially since they offer merely a pragmatic argument for an epistemic norm: manage your beliefs thus and you will not be exploitable. 21 I will not attempt to settle the debate here. However, we can see that a possible attempt at a justification of induction by demonstrating the rationality of inductive practice by employing diachronic Dutch book arguments does not necessarily lead into Hume's dilemma in the way that the standards of justification discussed above do. The beauty of a Dutch strategy argument is that it does not depend on the outcome of the bets: whatever the outcome, the agent will make a sure loss, if they fail to adhere to probabilism. Granted, diachronic Dutch books do depend on the stability of the bet: in its classic form above the argument depends on the agent fixing a certain way to violate conditionalisation beforehand. If the agent decides to violate the rule in a different manner over time, the betting strategy above does not necessarily lead to a sure loss for the agent. 22 The argument also depends in a very trivial sense on the continued existence and commitment to the set of bets of both bookie and agent. But, crucially, the argument does not depend on nature staying uniform in the sense that a particular outcome comes about, or that the regularities remain stable. Whatever happens, as long as bookie and agent stay committed to their bets and updating strategy, the agent is susceptible to be Dutch-booked if they violate conditionalisation, and remains immune if they don't.
Hence, if we want to accept avoiding Dutch books as a pragmatic measure of the rationality of conditionalisation, this would be a possible justification of a certain formalised form of inductive practice. Since Dutch books are a controversial rationality measure, especially for epistemic rationality, let us briefly take a look at another argument for the epistemic rationality of conditionalisation, i.e. that conditionalisation maximises expected epistemic utility. 23
Expected Epistemic Utility
One way to provide a justification for epistemic norms is such as the conditionalisation to employ the methodology of expected utility theory. 24 Expected utility theory is the orthodox framework of decision theory. The expected epistemic utility argument are of great interest here because they offer an epistemic justification rather than a pragmatic one.
Very briefly put, according to expected utility theory, an agent's choice is supposedly rational if it is the one of which the agent expects the greatest value according to their preferences. If I wish to eat ice cream, I expect to fulfil this preference by going out and buying some ice cream, and hence it is rational for me to go out and buy ice cream. In expected epistemic utility theory, we substitute the usual items of decision theory with epistemic analoga in order to use the formal framework of expected utility theory: instead of judging the rationality of a decision on which action to take, we are talking about the rationality of 'choosing' 25 a credence function based on the available relevant evidence and our epistemic norms. We can call this an epistemic act. The utility we are trying to maximise is again purely epistemic: given that being in a belief state that is closest to the truth is desirable, an agent maximises expected epistemic utility if they adopt a credence function that we can expect to be as close to the truth as possible, or, to phrase it differently, of being as accurate as possible (or least inaccurate). So if we value being in a belief state that is as close to the truth as possible, we are rational to adopt an epistemic norm such as conditionalising if it maximises epistemic utility in such a way that we can expect it to furnish us with credences that are as close to the truth as possible.
To take an easy example of conditionalisation, consider that I am faced with choosing one and exactly one ball of ice cream from an ice cream parlour with a very 23 These are not the only extant justifications of conditionalisation. Recently, e.g. R.A. Briggs and Richard Pettigrew explored a way to justify conditionalisation by demonstrating that it is accuracy dominant (Briggs and Pettigrew forthcoming). For the sake of this paper, I will stick to Dutch strategies and expected epistemic utility as successful examples for pragmatic and non-pragmatic justifications of conditionalisation. 24 See, e.g. Greaves and Wallace (2006), Leitgeb and Pettigrew (2010a, b). 25 This does not need to be a deliberate, conscious choice in the way that we use the term in action theory of the free will debate (see Greaves and Wallace 2006). It can now be shown that an agent maximises epistemic utility if they adopt the credence function provided by conditionalisation, but not if they adopt any alternative updating rule, which would yield a different credence function. Adopting the formalism of expected utility theory, we can now go ahead and assign a utility U(s, p), represented by a real number, to every possible pair of credence function p and possible state s of the world, where these states of the world are understood as the outcomes we assigned our credences to. In our example here, any choice of ice cream flavour represents a different possible state of the world. If the agent values truth, they are going to assign a high utility to any pair, where they have a high credence in the state that actually obtains. If we add up all the pairs of credence functions and states of the world with their respective assigned utility, we yield the agent's utility function. A rational agent will, given their prior credences, and relative to a credence function P , 'choose' the credence function that gives them they highest expected utility. The 'choosing' of the respective credence function is depicted here as an epistemic act a, whose expected utility function we can represent as follows: EU p (a) : s S p(s) · U (s, (a, s)). For the sake of not cluttering up the paper with proofs, we will not go into the details here. Greaves and Wallace now go on to show that every other updating rule will yield a lower expected utility than conditionalising does, given that the agent places a high utility on being correct (Greaves and Wallace 2006, 615).
So, although this justification of conditionalisation is epistemic in the sense that it purports that an updating rule will maximise expected epistemic utility if it gets us close to the truth, it does not rely on what the world actually looks like. The important question here is whether the expected epistemic utility argument falls prey to Hume's dilemma. I argue it does not, at least not necessarily. Here, the regularity of nature plays a different role than for options 1) and 2). Whereas there, the truthconduciveness, or even truth-preserving nature of inductive inferences could only be asserted (if even at all) if nature is uniform, this is not the case for expected epistemic utility arguments. Here, it does not matter if nature is regular or not for it to be true that conditionalisation maximises expected epistemic utility. That it does is just a consequence of conditionalisation and expected utility theory. So, if we accept the expected epistemic utility argument as a justification of conditionalisation, then it holds regardless of whether the world is regular or not.
However, this does not imply that conditionalisation always gives you the best results. It is easy to construct a world in which conditionalisation produces worse predictions than other updating rules, because the regularities might change in a way the conditionaliser cannot foresee: consider a world which will last for exactly 5,000 years. For the first 4,999 years, all swans are white. Only in the last year, all swans will be black. Importantly, there is no underlying mechanism that could account for the change. The distribution of swan colour is also not indeterministic in the sense that there is an indeterministc law that governs that a certain percentage of swans are either colour, and we were just unlucky that all the white ones occurred in the first 4.999 years. The world in this example is hence not just indeterministic in the sense that it contains statistical regularities, but it is a sort of flip-flop world: the laws change suddenly and for no underlying reason. In the year 4,999, a conditionaliser will have a pretty high credence that all swans are white. In contrast, someone who does not conditionalise might, regardless of the evidence, always believe that not all swans are white. That person would end up making the correct prediction regarding the colour of swans in the year 5,000, whereas the conditionaliser fails to predict correctly.
We could even construct a world in which conditionalisation always delivers the wrong result if we want to make a prediction. There could be a particularly nasty world in which every time somebody conditionalises over the evidence and their priors to make a prediction, the exact opposite of what was indicated in the agent's posterior credences happens. And if the agent conditionalises over their past experiences of predictions and comes to the conclusion that always the opposite of what they predicted happened, and adjusts their credences to accommodate this, then the world changes again to screw up their predictions once more, and so forth. 26 So if nature is irregular, there is no way to tell which way of predicting is the most successful. But what we do know is that regardless of whether the world is regular or not, conditionalising maximises expected epistemic utility. 27 But is that enough, is that all we expect from a justification of induction, given that we accept a justification of conditionalisation as a justification of inductive practice? I argue that it is, and that it is all we can, or even should hope for. That the world might be (or rather, is) such that conditionalisation sometimes produces false predictions is exactly what we should expect from a defeasible form of inference. That induction might fail is the very nature of induction. We should not fall into the same trap as people who try to demonstrate the truth-conduciveness of induction and deny the special defeasible character of induction: induction can lead from true premises to a false conclusion. We have to allow that by inferring inductively, we might get things wrong; not just occasionally, but often, or even always, if we are very unlucky and live in a particularly nasty world. The important task is to show why we would still be justified to infer inductively, even if we cannot be certain that the conclusion will (likely) be true. The expected epistemic utility justification does just that by demonstrating that conditionalisation maximises expected epistemic utility, even if our predictions might turn out false. So if we want to avoid denying that induction is sui generis, and if we want to avoid Hume's dilemma, this is the only possible option we have. Since for this justification, it is irrelevant whether our predictions actually turn out true, there is a way around Hume's dilemma in a way there was none for attempts to justify induction that are designed to show that induction leads towards the truth.
The same goes for the Dutch strategy argument. It, too, does not rely on whether there is any regularity behind what happens in our world, since it only shows that conditionalisation produces coherent credence functions, not that they in any way correspond to any worldly regularities. So as controversial as Dutch strategy arguments are as a tool to justify conditionalisation, they have very few restrictions on the regularity of nature: the agent themselves and the bookie must not change the bets, and the agent must not change their updating norm. Otherwise, Dutch strategies are just measures of whether the agent holds coherent beliefs, and that is independent from whether nature remains constant. Since it is entirely in the agent's hands whether they prefer coherent credence functions, this defence stands.
While these defences of conditionalisation show that a justification of inductive practice which does not rely on the uniformity of nature is possible, neither conditionalisation nor its defences are entirely uncontroversial. There is a sizeable debate whether conditionalisation is actually the best way to update one's credences, even in a Bayesian setting. Bas van Fraassen, for example, proposes two different updating rules, which are both a bit weaker than conditionalisation in the sense that both are implied by conditionalisation, but not vice-versa: special and general reflection. 28 Moreover, while arguments such as the expected epistemic utility and the Dutchbook argument may not rely on the uniformity of nature, conditionalisation itself may come with some other metaphysical preconditions. Michael J. Shaffer, for example has argued that conditionalisation, since it involves assertions about an epistemic agent's future credences, requires a view of the future in which the truth value of future contingents does not routinely turn out being false or indeterminate in order for conditionalisation to be coherent. 29 While I do not want to get too deep into the discussion of alternative updating rules, two things can be said in reply to this challenge. Firstly, while conditionalisation may, if Shaffer's argument is correct, come with this metaphysical demand about the possibility that future contingents can possibly have a positive truth value, at least the justification of conditionalisation does not require a principle such as the uniformity of nature that can only be empirically (or, more specifically, inductively) established, which would be circular. And secondly, I agree that there might be alternative updating rules to conditionalisation, such as van Fraassen's special and general reflection, which each come with their own attempts at a justification. 30 I do not hold a stake in this debate. The attempts to justify conditionalisation discussed above are meant to serve as an example of how a justification of induction might be possible, not as an argument that conditionalisation is a better updating rule compared to its rivals. As long as the justification of it can escape Hume's dilemma, any alternative updating rule has a chance of being justifiable, if it does not fail for other reasons.
So if we accept that rationality arguments for updating rules such as conditionalising are justifications of inductive practice, these rationality arguments seem to be the only way forward for a possible justification of induction.
Conclusion
The debate about induction has for a long time been dominated by accounts, sceptical and positive alike, which presupposed standards for justification that are impossible to satisfy without falling prey to some version of Hume's dilemma. The recent arguments for conditionalisation can be seen as a way to justify our inductive practice by demonstrating why it is rational to update our beliefs according to new evidence. Crucially, our rationality to do so does not depend on the regularity of nature, and 28 See, e.g., van Fraassen (1984van Fraassen ( , 1995 29 See Shaffer (2014). 30 See, e.g. van Fraassen (1984, 1995. hence evades Hume's dilemma. If we accept arguments for the rationality of inductive practice as a justification of induction, then this is the only way forward for a possible justification. To forego any attempt to show how induction could lead to true conclusions, or how it is likely that the conclusion is true, is no cop-out: it goes against the very nature of induction as a defeasible type of reasoning to even try that. Instead, if we want to justify induction, we would have to demonstrate that we are rational to engage in inductive reasoning. The Dutch strategy and expected epistemic utility arguments do just that. I do not claim that the old riddle has been solved. In order to make such a claim, I would have to argue that the Dutch strategy and the expected epistemic utility arguments actually show what they are designed to show, and I will refrain from judgement regarding that matter. However, if we wanted to solve the old riddle by actually proposing a justification of induction, demonstrating the rationality of inductive practice is the only available option. | 13,662.6 | 2018-11-08T00:00:00.000 | [
"Philosophy"
] |
Efficient chromatin profiling of H3K4me3 modification in cotton using CUT&Tag
In 2019, Kaya-Okur et al. reported on the cleavage under targets and tagmentation (CUT&Tag) technology for efficient profiling of epigenetically modified DNA fragments. It was used mainly for cultured cell lines and was especially effective for small samples and single cells. This strategy generated high-resolution and low-background-noise chromatin profiling data for epigenomic analysis. CUT&Tag is well suited to be used in plant cells, especially in tissues from which small samples are taken, such as ovules, anthers, and fibers. Here, we present a CUT&Tag protocol step by step using plant nuclei. In this protocol, we quantified the nuclei that can be used in each CUT&Tag reaction, and compared the efficiency of CUT&Tag with chromatin immunoprecipitation with sequencing (ChIP-seq) in the leaves of cotton. A general workflow for the bioinformatic analysis of CUT&Tag is also provided. Results indicated that, compared with ChIP-seq, the CUT&Tag procedure was faster and showed a higher-resolution, lower-background signal than did ChIP. A CUT&Tag protocol has been refined for plant cells using intact nuclei that have been isolated.
Background
Epigenomic regulations of gene expression play key roles in the growth and development of multicellular organisms in which all cells harbor the same genomic sequences. Epigenomic regulations on the chromatic level, including DNA methylation, histone modification, and the differential binding of transcription factors and their recruited protein complexes, lead to differences in gene expression in different tissues and different developmental periods [1]. Chromatin immunoprecipitation (ChIP) with DNA sequencing is a widely applied chromatin profiling method for genome-wide mapping of DNA-protein interactions. However, the strategy suffers from its high background signal and false-positive artifacts caused by formaldehyde cross-linking and solubilization of chromatin during immunoprecipitation [2,3].
Similar to the DamID, ChEC-seq, and CUT&RUN strategies, CUT&Tag is an enzyme-tethering method in which the specific chromatin protein (e.g., histone, RNA polymerase II, or a transcription factor) is recognized by its specific antibody in situ, and it then tethers a Protein A (pA-Tn5) transposase fusion protein. The tethered pA-Tn5 transposase is activated by adding Mg 2+ . Because the pA-Tn5 fusion protein is already loaded with sequencing adapters, the generated fragments at chromatin protein-binding sites are integrated with adapters and ready for polymerase chain reaction (PCR) enrichment and DNA sequencing [3]. Compared with ChIP-seq, the CUT&Tag technology has more advantages, including (1) high resolution and a low background signal due to the activation of the transposase in situ to generate fragments; (2) freedom from the epitope masking caused by the cross-linking in ChIP; (3) a saving of time because the steps of the cross-linking of material and DNA sonication are not necessary; (4) integration of the fragments generated by the transposome with sequencing adapters, which are ready for PCR enrichment; and (5) a requirement for small amounts of starting material due to the procedure's high sensitivity.
CUT&Tag was first designed for cultured mammalian cells. With the addition and binding of cells to concanavilin A-coated magnetic beads, CUT&Tag can be performed on a solid support [3]. Alternatively, the centrifuge method can be used to collect the cells or nuclei at low speed. The application of a similar enzyme-tethering strategy, CUT&RUN, was previously documented in Arabidopsis [12]. However, few CUT&Tag protocols were developed that were suitable for plants. Allotetraploid cotton is the largest natural fiber resource for textile products. The cotton genome is also a model for polyploid crop domestication and transgenic improvement because of its high-quality sequenced genomes [13,14]. Here we use cotton as the model system for developing an effective CUT&Tag protocol for epigenomic research. We aimed to (1) set up the detailed steps for CUT&Tag that can be widely used in other plants; (2) compare the signal resolution of CUT&Tag with that of ChIP using the same starting material; and (3) provide the workflow and general information about required reads for polyploid plants to meet the efficient resolution required for bioinformatic analysis.
Workflow of CUT&Tag-seq vs. ChIP-seq
The workflow of CUT&Tag and ChIP in parallel with the performing time for each step was roughly estimated (Fig. 1). The detailed method was described in the Materials and Methods section. Unlike ChIP, the CUT&Tag was applied with an in situ strategy, so no cross-linking was needed to stabilize the protein-protein and protein-DNA interactions. We found that cross-linking relied on formaldehyde in ChIP usually caused difficulties in isolating the nuclei with 20% Triton. In CUT&Tag, the intact nuclei were subjected to antibody incubation in the presence of a nonionic detergent, digitonin, which has been successfully used in other in situ methods [8,10]. This allowed antibody permeabilization of the nuclei without compromising nuclear integrity. In the ChIP procedure, the chromatin lysis from the isolated nuclei needed to be sonicated into random fragments at 100-500 bp before the immunoprecipitation reaction with the antibody. We used a Bioruptor ™ (Diagenode, Denville, NJ, USA) to shear the DNA (aliquot of 350 μL in each tube for sonication) to 100-500 bp in length. It usually takes at least 30 min for each sample. If the sample number increases, hours are needed in the sonication step. After the CUT&Tag or ChIP reaction, the DNA was isolated for library construction and NGS. As in ChIP, the DNA-protein was cross-linked; it was difficult to extract the DNA without reverse cross-linking. Alternatively, the protein can be digested with proteinase K before DNA extraction, which makes the performance time of the DNA isolation step longer compared with the CUT&Tag procedure. Finally, after the fragmentation of proteinbinding chromatin by Tn5, the fragments were already integrated with adapters and ready for PCR enrichment and NGS. In comparison, it took 4-5 h longer to construct the NGS library for the ChIP DNA we obtained. In summary, the CUT&Tag procedure outperforms the ChIP procedure in operational simplicity and experimental time needed.
Nuclei used in CUT&Tag can be semi-quantified by DNA determination
The presence of the cell wall in plant cells makes it difficult for antibody to penetrate the cells. As an alternative, intact nuclei were used in the assay (Fig. 2a). The other unknown was the amount of nuclei that should be used in each CUT&Tag reaction. We found it was difficult to count the number of nuclei under the microscope because the nuclei isolated from plants usually clustered together. We tried to semi-quantify the nuclei by determining the DNA that could be extracted. In the test for histone H3K4me3 modification in the leaves of cotton (G. barbedense, accession H7124), 150 µL of nuclei suspension was used in each CUT&Tag reaction (step 9 in the protocol), which equal to ⁓ 1.5 µg of chromatin according to the semi-quantification of nuclei by DNA determination (Fig. 2b). We also semi-quantified the nuclei isolated from different tissues including root and fiber of cotton (G. barbedense, accession H7124), results indicated that nuclei from 1 g root or 4 g fiber (from 3-4 20 D cotton balls of H7124) equal to 15-20 µg of chromatin, which was enough for 10 CUT&Tag reactions.
CUT&Tag biological replicates showed high repetitiveness and high signal-to-noise ratio
Trimethylation of lysine 4 of histone H3 (H3K4me3) is a universal active marker of gene expression. We set up two biological replicates for the CUT&Tag reaction of H3K4me3 antibody. The reaction of each replicate was set up separately at the beginning using the intact nuclei isolated. The CUT&Tag reaction with IgG antibody was used as a control. The ChIP for H3K4me3 antibody was set up using the same material, and the ChIP mock reaction without the addition of H3K4me3 antibody was used as a control. Qubit analysis was performed after PCR enrichment and purification. Results indicated that the CUT&Tag_IgG control showed a low background signal, and the two replicates of the CUT&Tag_H3K4me3 group had fragments with a peak size of ~ 350 bp (Additional file 1: Figure S1), indicating the successful fragmentation of the chromatin. We then performed NGS and mapped the clean reads to the reference genome [14]. We obtained 10.61 million (M), 15.30, and 14.16 M mapped reads for two replicates of CUT&Tag profiling for H3K4me3 and the IgG control, respectively (Table 1). In comparison, we carried out parallel H3K4me3 profiling using the conventional ChIP procedure. The NGS generated mapped reads of 23.17 and 31.34 M for ChIP and its mock control, respectively (Table 1). We first did the correlation analysis for the CUT&Tag and ChIP groups, and results indicated that both of the replicates of CUT&Tag showed a very low correlation with the CUT&Tag_IgG control (r = 0.01, Pearson's correlation), indicating that the CUT&Tag experimental group and the control group varied significantly and the CUT&Tag experimental group was different from the background noise (Fig. 3a). In comparison, the ChIP_H3K4me3 group showed a high correlation with its mock control (r = 0.89, Pearson's correlation), which indicated that the signal-tonoise ratio in the ChIP assay would become a problem (Fig. 3a). We also dot plotted the correlation of the two replicates of CUT&Tag_H3K4me3. Data showed that they had a near perfect correlation (r = 0.97, Pearson's correlation) (Fig. 3b), indicating the high repetitiveness within different biological replicates.
In order to evaluate the signal resolution between the CUT&Tag and ChIP data, we randomly sampled the same depth of sequencing reads ranging from 6 to 24 M from each sample and summarized the number of called peaks from them. Results showed that 42,367 and 46,779 peaks were called from two replicates of CUT&Tag, respectively, but only 18,024 peaks were called from the ChIP data when using 6-M clean reads (Table 2). There were 40,859 peaks called when using as much as 24-M clean reads from ChIP (Table 2), which means that 6-M with ChIP-seq profiling for the H3K4me3 histone modification. The same antibody was used in all experiments. Pearson correlations were calculated in deepTools (the multiBamSummary was followed with plotCorrelation tools) using the read counts split into 500-bp bins across the genome. b Scatterplot correlation of CUT&Tag replicates (rep1 and rep2). Pearson's r was indicated. (c) Number of shared peaks and unique peaks in CUT&Tag replicates (rep1 and rep2) and ChIP-seq. Peaks were called by macs2 using randomly sampled 6-M clean data of CUT&Tag and 24-M clean data of ChIP. Peaks overlapped across the genome and with the distance of peak summit < 300 bp were considered as the same peak
Table 2 Number of called peaks and FRiP value under the same sequencing depth as indicated
Data were generated by random sampling of clean reads from the NGS fastq files. FRiP (Fraction of reads in peaks) [15] values which act as an indicator of a signal-tonoise ratio were provided within the brackets. clean reads of CUT&Tag can provide signals equivalent to 24-M clean reads of ChIP. The The overlapped peaks were determined using the peaks from 6-M clean data of the CUT&Tag and 24-M clean data of the ChIP. Among these peaks, 25,597 (54.7-62.6%) peaks were shared by CUT&Tag and ChIP, and 37,168 (79.5-87.7%) peaks were shared between two replicates of CUT&Tag, indicating the high reproducibility of two replicates of CUT&Tag data (Fig. 3c). The FRiP (fraction of reads in peaks) values calculated the ratio of mapped reads that fall into peaks among all mapped reads, and they act as indicators of the signal-to-noise ratio [15]. The FRiP value for each group of peaks was calculated, and the results indicate that CUT&Tag generated high signal-to-noise ratio (FRiP = 0.7; Table 2). These results suggest that CUT&Tag has higher signal resolution compared with ChIP.
The genomic locations of the peaks were divided into eight categories, including 1-2 kb promoter (1-2 kb 5′ upstream of translation starting site), 1-kb promoter (≤ 1-kb 5′ upstream of translation starting site), first exon, first intron, other exon, other intron, 1-kb downstream (≤ 1-kb 3′ upstream of translation terminating site), and intergenic (out of the region described above). Here we only summarized the distribution of peaks called using 6-M clean reads of CUT&Tag-seq data and 24-M clean reads of ChIP-seq data. The H3K4me3 signals from both the CUT&Tag and ChIP data were predominantly (60-70%) enriched in the 1-kb promoter, first exon, and first intron categories (Fig. 4). This is consistent with previous reports showing that H3K4me3 signals were mainly located in the promoter and 5′ regions of the gene [16,17]. However, on the heatmap of all of the H3K4me3 signals normalized with the CUT&Tag_ IgG control or ChIP_mock control in the region of the gene body and its 5-kb flanking region, the signals from CUT&Tag had higher intensities than those from ChIPseq (Fig. 5a). The correlation analysis of peaks near the genes showed a high correlation between two CUT&Tag replicates (Fig. 5b, r = 0.94, Pearson's correlation), and a strong correlation between CUT&Tag and ChIP (Fig. 5c, r = 0.71, Pearson's correlation).
As an additional step, we observed the H3K4me3 signals of both a large genome region (i.e., a randomly selected region covering ⁓ 1600 kb) and a small chromatin region of individual genes (selected with different expression levels) in the CUT&Tag and ChIP data using Integrative Genomics Viewer (IGV) software [18]. Consistent with the heatmap intensities indicated, the CUT&Tag signal outperformed the ChIP signal in resolution and sensitivity (Fig. 6a), especially in those genes with relatively low expression (e.g., the genes in GB_A11G1394, GB_D10G1774, and GB_A13G1872 in Fig. 6b). Overall, the CUT&Tag signal showed higher resolution and lower background noise for H3K4me3 profiling genome-wide.
Histone H3K4me3 signal intensities are associated with active gene expression
The allotetraploid cotton G. barbadense harbors a genome of approximately 2.22 Gb in size, with 75,071 high-confidence protein-coding genes (PCGs) [14]. We did the transcriptome sequencing for the same leaf issue and identified 44,789 genes expressed with a TPM (transcripts per kilobase of exon model per million mapped reads) that greater than 1 (Fig. 7a). We further examined the number of peak-related PCGs (with peaks located within genes and a flanking region of ≤ 1 kb). Results showed that there were 38,513 and 42,265 peak-related PCGs from two replicates of CUT&Tag-seq, respectively, which covered 34,072 (76.1%) and 36,988 (82.6%) of the expressed genes with a TPM of greater than 1. In comparison, 33,229 peak-related PCGs from ChIP-seq covered 30,016 (67.0%) of the expressed genes with a TPM of greater than 1. Thus, H3K4me3 modification is a nearly universal histone modification that is well documented to be associated with the active transcription of genes [16,[19][20][21]. The correlation analysis between the intensities of gene-associated H3K4me3 signals and the transcriptional levels of corresponding genes was performed. Results indicated that the H3K4me3 intensities of gene related peaks had a weak correlation with gene expression levels (Additional file 1: Figure S2, r = 0.31). However, a descending trend of H3K4me3 signals in the heatmap was found when the plotted genes were arranged in the descending order of their TPM (Additional file 1: Figure S3). Instead, we further boxplot the expression levels of genes that divided into two different subclasses of with or without CUT&Tag-seq peaks, results showed that PCGs with H3K4me3 peaks are significantly higher expressed (Fisher Pairwise Comparisons, P < 0.001) (Fig. 7b). Alternatively, we boxplot the H3K4me3 peak intensities from CUT&Tag-seq at six different subclasses of genes that descending ordered and artificially divided by TPM values from mRNA-seq (TPM > 100, 50-100, 10-50, 5-10,1-5 and < 1), results showed that the corresponding H3K4me3 signal intensities in each group of genes decreased significantly (Fisher Pairwise Comparisons, P < 0.001) (Fig. 7c). These data indicated that histone H3K4me3 signal intensities are associated with active gene expression.
Discussion
Plant tissues is still very challenging due to the presence of the cell walls, large vacuoles, and secondary metabolites [22]. The isolation of plant chromatin Fig. 4 The histogram diagram showed the annotation of peaks for the H3K4me3 histone modification from CUT&Tag and ChIP data. a, b Peak distribution in CUT&Tag replicates (rep1 and rep2). c Peak distribution in ChIP. Peaks were called by macs2 using randomly sampled 6-M clean data of CUT&Tag and 24-M clean data of ChIP needs a plant-specific approach; for example, nuclei of high quality need to be isolated before chromatin lysis is performed [22]. Cotton fiber is a specialized cellulosic tissue from which it is difficult to isolate enough nuclei for a ChIP reaction. Slight modification in the procedures of nuclei isolation and PCR enrichment after fragmentation is recommended if the amount of starting material is small at the signal cell level, such as anthers, fibers, and ovules. We highly recommend optimizing the Triton incubation time for nuclei isolation. The nuclei in CUT&Tag must be intact. Broken nuclei will lead to the non-specific tethering of Protein A (pA-Tn5) transposase fusion protein to the chromatin, subsequently the non-specific fragmentation in situ arises a high level of background noises in CUT&Tag.
In the original study [3], the addition and binding of cells to Concanavalin A-coated magnetic beads was performed, allowing magnetic handling of the intact cells in all successive washing and reagent incubation steps. This step can be replaced by gentle centrifugation (< 600 × g) between steps [3]. Our data showed that gentle centrifugation (300 × g) to precipitate the nuclei works well. For antibody efficiency, H3K4me3 is an abundant chromatin modification mark that can generate sufficient signals for profiling. For other chromatin modification marks or chromatic proteins with relatively low abundance, a secondary antibody against the protein-specific primary antibody is recommended to amplify the signal [3]. Because the antibody binds to the epitopes in situ and CUT&Tag has high sensitivity, antibodies successfully tested in immunofluorescence would work with CUT&Tag. Accordingly, CUT&Tag in transgenic plants tagged with a GFP or His fused target protein can be used with the anti-tag antibody instead of the protein-specific antibody.
Regarding the NGS depth for CUT&Tag, it was reported that approximately 8 M mapped reads of the human genome (~ 3 Gbp in size) displayed a clear pattern for lysine-27-trimethylation of the histone H3 tail (H3K27me3), an abundant histone modification that marks silenced chromatin regions [3]. In addition, CUT&Tag populated peaks at low sequencing depths, where approximately 2-M reads are equivalent to 8-M reads for CUT&RUN (or 20 M for ChIP-seq), demonstrating the exceptionally high efficiency of CUT&Tag [3]. It was documented that 6-to 8-M unique deduplicated reads by CUT&RUN could provide genome-wide H3K27me3 landscapes with high sensitivity, specificity and reproducibility in the model plants of Arabidopsis harbors a genome of 125 Mbp in size that encoded 25,498 PCGs [12]. According to our data, 6-to 8-M clean reads from CUT&Tag are equivalent to 24-M clean reads for ChIP-seq (Table 2); and 8-M unique deduplicated reads from CUT&Tag is sufficient for profiling the H3K4me3 signal genome-wide for allotetraploid cotton plants with the genome size of approximately 2.2-2.3 Gbp which encoded ~ 75,000 high-confidence PCGs. Regarding the cost of sequencing and differences in plant genome size and the number of PCGs, pilot sequencing is recommended for your libraries (e.g. sequencing 2-3 G raw base using 150 × 150 bp paired-end sequencing) first to test the sensitivity of CUT&Tag libraries in your plants, and then did more sequencing if needed.
Based on findings in the previous publication [3], the Pearson's correlation coefficient r value between CUT&Tag and ChIP profiling for the H3K4me1 histone modification is 0.7-0.8. We did the same correlation analysis using the same parameter and found that the Pearson's correlation coefficient r value is 0.3 between CUT&Tag and ChIP for H3K4me3 (Fig. 3). The low r value is mainly caused by the different profiling procedures of the methods (i.e., fixed chromatin in ChIP vs. native chromatin in CUT&Tag; fragmentation of the DNA by sonication to ~ 500 to 1000 bp in size in ChIP vs. fragmentation of the DNA in situ by Tn5 transposase to ~ 350 bp in size); this leads to heterogeneity between CUT&Tag and ChIP. However, when we perform dot plotting of the correlation of peaks signals near the genes, the r value is 0.71 between CUT&Tag and ChIP (Fig. 5c), the peaks signals generated from both of the methods showed high homogeneity. Also, the Pearson's correlation coefficient r value between ChIP-seq and its mock control is high, indicating low signal-to-noise ratio in ChIP assay. For this reason we are seeking a more efficient chromatin profiling method for our research on epigenetics in cotton. In our study so far we have successfully established a CUT&Tag protocol for cotton that can also be widely applied to other plants.
Cotton plants (Gossypium spp.) bear seed trichomes (cotton fibers) that are an important commodity worldwide. Until now, the profiling of epigenomic modifications in cotton fibers was difficult because of the amount of starting materials required to harvest enough chromatin. Cotton fibers are single-cell structures. After differentiation, the fiber cells move into a stage of rapid elongation to increase the cell length up to 2-3 cm without cell division. This means the nuclei do not increase during the fiber cell-elongation stage. The chromatin enrichment for fiber in the elongation stage requires large amounts of fiber tissue at relatively low efficiency. We are interested in the nuclei that can be isolated from cotton fibers. From the DNA extracted, we found that fiber nuclei extracted from four cotton balls (20 D cotton fiber) were sufficient for about 20 CUT&Tag reactions (Fig. 2b). In comparison, according to our experience, at least 20 µg of chromatin is needed in each ChIP reaction to obtain enough DNA for the library construction of cotton. Thus the CUT&Tag needed only approximately 1/20 of the starting material needed by the conventional ChIP strategy. In addition, few chromatin profile methods were successfully applied to study the specific transcription factors that play key roles in regulating fiber differentiation and elongation. The CUT&Tag we established provided a promising strategy for further application in the study of epigenomics in cotton fiber development.
Histone modification that alters the nucleosome structure and recruits regulatory proteins is recognized as an integral part of the gene regulation in eukaryotes from yeasts to humans. The trimethylation of lysine 4 of histone H3 (H3K4me3) is one of the most established histone modifications. It has a well-established association with gene expression [23], is often described as an "activating" histone modification, and is assumed to have an instructive role in the transcription of genes. However, it has not been convincingly supported on a genome-wide scale and lacks a conserved mechanism [24]. Consistent with previous publications [17], our "meta" data for genes showed that the H3K4me3 signals, on average, are enriched at the 5′ end of genes (Fig. 5a). Previous studies have focused on the mechanism of this enrichment and found that H3K4me3 depends on the phosphorylation of the C-terminal domain of RNA polymerase II at serine 5 by TFIIH-associated kinase [25]. This phosphorylation signal has a sharp peak at the 5′ region of the gene body [25], which could explain why the H3K4me3 signal is predominantly found at the 5′ end of the gene. Ng et al. [25] proposed that H3K4me3 may provide a molecular memory of recent transcriptional activity. This theory is based on the finding that H3K4me3 persist within the mRNA coding region for a considerable time after transcriptional inactivation and Set1 (yeasthistoneH3-lysine4 (H3-K4)methylase) dissociation from the chromatin [25]. In plants, the flowering of the Arabidopsis shoot was studied with a focus on the dynamics of gene expression and H3K4me3 markers, and the results suggested a general congruence between the H3K4me3 dynamics and gene expression changes. However, no precise correlation r value has been calculated [26]. Our results in the allotetraploid cotton G. barbadense were similar; the H3K4me3 modification represented an active trend for gene expression (Fig. 7).
Conclusions
In summary, we developed effective CUT&Tag protocols and refined conditions that can be widely used in plants for chromatin profiling. We showed that CUT&Tag outperforms the traditional chromatin profiling method of chromatin immunoprecipitation (ChIP) in allotetraploid cotton plants in terms of operational simplicity and experimental time needed. Most importantly, CUT&Tag needs less starting materials and generates high-resolution signals with low background noise. Our optimized CUT&Tag protocols specifically designed for plant cells had a broad spectrum of for plant epigenetic research.
Plant materials
The allotetraploid cotton cultivar Gossypium barbadense (accession H7124) was used in this study. Cotton seedlings were grown in pots at 28 °C in a greenhouse in a 16/8-h light/dark cycle with 60% humidity. Leaf and root samples were collected when the seedlings had two or three true leaves (i.e., from 4-week-old seedlings). Fiber samples were collected from 20 D cotton bolls of H7124.
Note: Check the antibody affinity of the protein A or protein G that is fused with the Tn5. Generally speaking, proteins A and G have broad antibody affinity. However, protein A has a relatively higher affinity to rabbit antibodies and protein G has a relatively higher affinity to mouse antibodies. Select the appropriate transposase products that match your primary antibody.
Working solutions
Prepare fresh working solutions; refer to Additional file 1: Table S2 for detailed recipes.
Performing Assay (Days 2 and 3)
Day 2 Nuclear preparation 5. Take 1 g of the leaf tissue to be analyzed in the procedure. Grind the leaves in liquid nitrogen to a fine, dry powder. 6. Resuspend the ground and frozen leaf powder (1 g) in a 50-mL tube containing 30 mL of nuclear isolation buffer A (ice cold), and mix immediately with gentle shaking. Filter the solution through two layers of Miracloth, and put the filtered solution in a new ice-cold 50-mL tube. Centrifuge the filtrate for 5 min at 600 × g at 4 °C. Note: If using a starting material with low input, skip the filter action through the Miracloth step. 7. Remove the supernatant, and add 5 mL of nuclear isolation buffer B (4 °C) to the pellet cells. Transfer the solution immediately to five 1.5-mL tubes (1 mL/each tube; use end-cut tips to transfer). Centrifuge for 3 min at 600 × g at 4 °C. 8. For each tube, wash the pellet three times using 1 mL of nuclear wash buffer. 9. For each tube, resuspend the nuclei in 1 mL of antibody buffer. Take 150 μL aliquot of the nuclei suspension using end-cut tips to a 1.5-mL tube for one reaction. An amount of 1 mL of nuclei can be set up for six reactions. 10. Add 1 μL of antibody (anti-H3K4me3 antibody or IgG control antibody) to each reaction (1:50 to 1:100 diluted; the final concentration of antibody is 10-20 μg/mL). Perform immunoprecipitation overnight at 4 °C with gentle shaking.
Day 3 Transposase incubation
11. Add 800 μL of IP wash buffer to each reaction. Sit the tubes at room temperature for 5 min, and then centrifuge for 3 min at 300 × g at 4 °C to collect the nuclear pellet. Repeat the nuclear pellet washing step for three times. 12. Add 9.375 μL of transposase (generated on Day 1) to 1 mL of transposase incubation buffer, and mix gently. 13. Add 150 μL of transposase from the above step to each reaction. Immunoprecipitate for 1 h at room temperature with gentle shaking. 14. Wash with 800 μL of IP wash buffer, Sit the tubes at room temperature for 5 min, and then centrifuge for 3 min at 300 × g at 4 °C to collect the nuclear pellet. Repeat the nuclear pellet washing step for three times. [16][17][18] cycles are recommended when using the 100-µL nuclei described above in the protocol (equals approximately 1 µg of chromatin). Generally, using 20 PCR cycles is commended when using starting nuclei of less than 1 k; 17-18 cycles for 1 k to 1 week, and 15-17 cycles for 1-10 week. The criteria for PCR cycle selection are starting with low numbers of cycles and increasing the numbers if needed. In this way the library has enough enrichment of fragments at low levels of PCR duplicates to achieve high "complexity" for NGS.
PCR product purification 25. Purify the PCR products using a commercial column or beads. 26. Load 2 µL of the purification product on 2% agarose gel for electrophoresis to detect the fragment concentration and distribution. 27. Use Qubit fluorometric quantitation to detect the library concentration and quality.
Additional file 1: Figure S1. Qubit fluorometric quantitation of DNA libraries. Figure S2. Correlation analysis of H3K4me3 peak intensities and gene expression. Figure S3. Heatmap of H3K4me3 signals near PCGs with TPM values in descending order. Table S1. Oligos used in this study. | 6,587 | 2020-08-31T00:00:00.000 | [
"Chemistry"
] |
Multiple D2-Brane Action from M2-Branes
We study the detail derivation of the multiple D2-brane effective action from multiple M2-branes in the Bagger-Lambert-Gustavsson (BLG) theory and the Aharony-Bergman-Jafferis-Maldacena (ABJM) theory by employing the novel Higgs mechanism. We show explicitly that the high-order F^3 and F^4 terms are commutator terms, and conjecture that all the high-order terms are commutator terms. Because the commutator terms can be treated as the covariant derivative terms, these high-order terms do not contribute to the multiple D2-brane effective action. Inspired by the derivation of a single D2-brane from a M2-brane, we consider the curved M2-branes and introduce an auxiliary field. Integrating out the auxiliary field, we indeed obtain the correct high-order F^4 terms in the D2-brane effective action from the BLG theory and the ABJM theory with SU(2)\times SU(2) gauge symmetry, but we can not obtain the correct high-order F^4 terms from the ABJM theory with U(N)\times U(N) and SU(N)\times SU(N) gauge symmetries for N>2. We also briefly comment on the (gauged) BF membrane theory.
By relaxing the requirement of the positive definite metric on three algebra, three groups [24,25,26] proposed the so called BF membrane theory with arbitrary semi-simple Lie groups. However, the BF membrane theory has ghost fields and then the unitarity problem in the classical theory due to the Lorenzian three algebra. To solve these problems, the global shift symmetries for the bosonic and fermionic ghost fields with wrong-sign kinetic terms are gauged, which ensures the absence of the negative norm states in the physical Hilbert space [40,43]. However, this gauged BF membrane theory might be equivalent to three-dimensional N = 8 supersymmetric Yang-Mills theory [47] via a duality transformation due to de Wit, Nicolai and Samtleben [86].
Very recently, Aharony, Bergman, Jafferis and Maldacena (ABJM) have constructed three-dimensional Chern-Simons theories with gauge groups U(N) × U(N) and SU(N) × SU(N) which have explicit N = 6 superconformal symmetry [44] (For Chern-Simons gauge theories with N = 3 and 4 supersymmetries, see Refs. [87,88]). Using brane constructions they argued that the U(N) × U(N) theory at Chern-Simons level k describes the low-energy limit of N M2-branes on a C 4 /Z k orbifold. In particular, for k = 1 and 2, ABJM conjectured that their theory describes the N M2-branes respectively in the flat space and on a R 8 /Z 2 orbifold, and then might have N = 8 supersymmetry. For N = 2, this theory has extra symmetries and is the same as the BLG theory [44].
On the other hand, D-branes are the hypersurfaces on which the open strings can end, and their dynamics is described by open string field theory [89]. The low-energy world-volume action for D-branes can be obtained by calculating the string scattering amplitudes [90] or by using the T-duality [91]. As usual in string theory, there are high-order α ′ = ℓ 2 s corrections, where ℓ s is the string length scale. For a single D-brane, the D-brane action, which includes all order corrections in the gauge field strength but not its derivatives, takes the Dirac-Born-Infeld (DBI) form [92]. For multiple coincident D-branes, Tseytlin assumed that all the commutator terms should be treated as covariant derivative terms for gauge field strength, and thus should not be included in the effective action [90]. And he proposed that the action is the symmetrized trace of the direct non-Abelian generalization of the DBI action [90]. This non-Abelian DBI action gives the correct terms up to the order F 4 that were completely determined previously [93,94]. But it fails for the higher order terms [95,96]. Because the F 3 terms can always be written as the commutator terms, they are not interesting in the discussions of the D-brane effective action.
With the multiple M2-brane and D2-brane theories, we can study the deep relation between them. As we know, the full effective action of a D2-brane can be obtained by the reduction of the eleven-dimensional supermembrane action [97]. So, whether we can obtain the effective non-Abelian action for multiple D2-branes from the reduction of the BLG and
ABJM theories is an interesting open question. Mukhi and Papageorgakis proposed a novel
Higgs mechanism by giving vacuum expectation value (VEV) to a scalar field, which can promote the topological Chern-Simons gauge fields to dynamical gauge fields [8]. And they indeed obtained the maximally supersymmetric Yang-Mills theory for two D2-branes from the BLG theory at the leading order. Also, there exists a series of high-order corrections [8].
In this paper, we consider the derivation of the multiple D2-brane effective action from the multiple M2-branes in the BLG and ABJM theories in details. Concentrating on pure Yang-Mills fields, we show that the high-order F 3 and F 4 terms are commutator terms, and argue that all the high-order terms are also commutator terms. Thus, these high-order terms are irrelevant to the multiple D2-brane effective action. Note that the (gauged) BF membrane theory does not have high-order terms, the BLG theory, the (gauged) BF membrane theory, and the ABJM theory give the same D2-brane effective action. In order to generate the non-trivial high-order F 4 terms, inspired by the derivation of a single D2-brane from a M2brane [97], we consider the curved M2-branes and introduce an auxiliary field. In particular, the VEV of the scalar field in the novel Higgs mechanism depends on the auxiliary field.
After we integrate out the massive gauge fields and auxiliary field, we indeed obtain the highorder F 4 terms in the D2-brane effective action from the BLG theory and the ABJM theory with SU(2) × SU(2) gauge group. However, we still can not obtain the correct F 4 terms in the generic ABJM theories with gauge groups U(N)×U(N) and SU(N)×SU(N) for N > 2.
The reason might be that the SU(2) × SU(2) gauge theory has three-dimensional N = 8 superconformal symmetry while the U(N) ×U(N) and SU(N) ×SU(N) gauge theories with N > 2 may only have three-dimensional N = 6 superconformal symmetry [72]. We also briefly comment on the (gauged) BF membrane theory. This paper is organized as follows. In Section II, we briefly review the novel Higgs mechanism in the BLG theory and (gauged) BF membrane theory, and study the novel Higgs mechanism in the ABJM theory. In Section III, we calculate the effective D2-brane action with the leading order F 2 , and high-order F 3 and F 4 terms from M2-branes. In Section IV, we generate the high-order F 4 terms by considering the curved M2-branes and introducing an auxiliary field. Our discussion and conclusions are given in Section V.
II. NOVEL HIGGS MECHANISM
In this Section, we briefly review the novel Higgs mechanism from M2-branes to D2branes in the BLG theory and (gauged) BF membrane theory, and study it in the ABJM theory.
A. The BLG Theory and BF Membrane Theory
In the Lagrangian for the BLG theory with gauge group SO(4) [4], we define where k is the level of the Chern-Simons terms. We also make the following transformation on the Yang-Mills fields Then the Lagrangian for the BLG theory with gauge group SO(4) becomes where A = 1, 2, 3, 4, I = 1, 2, ..., 8, and As we know, the strong coupling limit of Type IIA theory is M-theory, and the coupling constant in Type IIA theory is related to the radius of the circle of the eleventh dimension in M-theory. Thus, for D2-branes, the gauge coupling constant is also related to the radius of the circle of the eleventh dimension. And at the strong coupling limit the D2-branes become M2-branes. To derive the D2-branes from M2-branes via the novel Higgs mechanism, we compactify the M-theory on the circle of the eleventh dimension by giving VEV to a linear combination of the scalar fields X A(I) [8]. Because we have the SO(8) R-symmetry and SO(4) gauge symmetry, we can always make the rotation so that only the component where we split the index A into two sets a = 1, 2, 3 and φ = 4. In addition, the gauge fields And then the Chern-Simons terms can be rewritten as where we neglect the total derivative term. Combining these two terms, the Chern-Simons action becomes where Similarly, the kinetic terms for the scalar fields are Substituting these back into the action and setting X φ(8) → X φ(8) + v, we obtain the terms involving B a µ from the scalar kinetic terms where i = 1, 2, ..., 7, and the new defined covariant derivative is D µ X a(I) = ∂ µ X a(I) − 2ǫ a bc A b µ X c(I) . Therefore, the relevant Lagrangian for pure Yang-Mills fields is Next, we would like to briefly review the result of the novel Higgs mechanism in the BF membrane theory [24,25,26]. Here, we follow the convention in Ref. [25] except that we In this theory, the equation of motion for ghost field X I − gives the constraint ∂ 2 X I + = 0. So, we can give a constant VEV to X 8 + , i.e., X 8 + = v. And then we obtain the relevant Lagrangian for pure gauge fields It should be noted that unlike the Lagrangian in Eq. (13) in the BLG theory, there is no cubic term for B a µ in above Lagrangian. And this is one of the motivations of the work [47] which showed that the gauged BF membrane theory might be equivalent to the maximally supersymmetric three-dimensional Yang-Mills theory via a duality transformation due to de Wit, Nicolai and Samtleben [86].
After gauging the shift symmetries for the ghost fields X I − and Ψ − in the BF membrane theory [40,43] by introducing new gauge fields, we could make the gauge choice to decouple the ghost states. And the equation of motion for the new gauge fields gives the constraint ∂ µ X I + = 0, which indicates that X I + must be a constant. We emphasize that in this case the relevant Lagrangian for pure Yang-Mills fields is still given by Eq. (15). becomes identical to the BLG theory. In this subsection, we will study the novel Higgs mechanism in the ABJM theory.
Following the convention in Ref. [46], we can write the explicit Lagrangian in ABJM theory as follows where where X i belongs to the bifundamental representation of U(N) × U(N) or SU(N) × SU(N), and here we do not present the potential V ferm and V bos since they are irrelevant in the following discussions. For our convention, we choose where T a,b,c are the generators of the corresponding gauge group.
Similar to the novel Higgs mechanism in the BLG theory, we give the diagonal VEV to X 8 as follows where I N ×N is the N by N indentity matrix. Also, we define So we have From the kinetic term for W 2 and the Chern-Simons terms, we obtain the relevant Lagrangian for pure Yang-Mills fields where Note that the BLG theory with SO(4) gauge group is the same as the ABJM theory with SU(2) × SU(2) gauge group, so we can obtain the Lagrangian in Eq. (13) from that in the above Eq. (23) by rescaling f abc .
III. EFFECTIVE ACTION FOR THE PURE GAUGE FIELDS
Because B a µ is massive, we will calculate the effective action for pure Yang-Mills fields by integrating it out. Due to the absence of the cubic term for B a µ in the (gauged) BF membrane theory, we do not have the high-order corrections in the effective action of gauge fields. Thus, we will concentrate on the BLG theory and ABJM theory. The relevant Lagrangians for pure gauge fields are the same for the BLG theory and the ABJM theory with SU(2) × SU(2) gauge symmetry, and the ABJM theory is more general. Thus, we will use the Lagrangian in Eq. (23) in the following discussions.
From the Lagrangian in Eq. (23), we get the equation of motion for
We can solve the above equation by parametrizing the solution in 1/v 2 expansion Substituting it back into Eq. (24), we obtain Because we only know for sure the high-order terms up to the order of F 4 in D2-brane effective action [90,95,96], we only need to calculate the solution to Eq. (24) up to the order of 1/v 10 or (C 10 ) µ a . And the non-vanishing terms in the solution are Integrating B a µ out, we get the Lagrangian for pure Yang-Mills fields where Using Eqs. (27) and (28), and the useful identities in the Appendix A, we obtain Thus, L YM is the kinetic term for the gauge fields A µ a and is the leading order of the supersymmetric Yang-Mills effective action. Moreover, the gauge coupling in the BLG theory is and the gauge coupling in the ABJM theory is So for very large v 0 and k, we can still keep the gauge coupling as a fixed constant. For D2-branes, the gauge coupling is related to the string coupling and the string length as And then for the fixed string coupling, we have g 2 Y M ∝ α −1/2 . Therefore, 1/v is proportional to α ′1/4 , L YM only have commutator terms, these high-order terms are covariant derivative terms and then do not contribute to the effective action for the D2-branes [90].
We conjecture that all the high-order terms obtained by this approach are the commutator terms. The point is that the equation of motion for B a µ in Eq. (24) can be rewritten as follows Because all the high-order terms originally come from the last term in the above equation which is a commutator term, all the high-order terms should be the commutator terms and then the covariant terms. Thus, moduloing the commutator terms or covariant derivative terms, we only have the kinetic term for the gauge fields A µ a from the BLG and ABJM theories, which is the leading order in the D2-brane effective action. And then the effective action for pure Yang-Mills fields from the BLG and ABJM theories is the same as that from the (gauged) BF membrane theory after we integrate B µ a out. Therefore, how to obtain the non-trivial F 4 terms in the D2-brane effective action from the BLG theory, the (gauged) BF membrane theory, and the ABJM theory is still a big problem.
IV. D2-BRANES FROM THE CURVED M2-BRANES
In spired by the derivation of a single D2-brane from a M2-brane [97], we would like to consider the multiple curved M2-branes. To employ the trick in Ref. [97], we only need to introduce gravity. For simplicity, we do not consider the dilaton, the vector and scalar fields in the eleven-dimensional metric due to compactification, and RR fields, etc. And our ansatz for the Lagrangian of the curved M2-branes is where β 0 is a positive constant like membrane tension, g µν is the induced metric on the world-volume of multiple M2-branes, and L M2s is formally given in Eq. (3) for the BLG theory or in Eq. (16) for the ABJM theory. In L M2s , we need to replace η µν and ∂ α by g µν and ∇ α , respectively. Also, we replace ǫ µνλ by ε µνλ = √ −gǫ µνλ which will be covariant under coordinate transformation. This is a natural action for the multiple M2-branes in the curved space-time since it can come back to flat theory after we decouple the gravity.
Similar to the discussions in Ref. [97], we introduce an auxiliary filed u and rewrite the above Lagrangian as follows We can obtain the Lagrangian in Eq. (40) from Eq. (41) by integrating out the auxiliary filed u.
To match the convention in [90], we give the following VEV to the scalar field φ where we can take φ = X 8(φ) , K ′ = 1/f , and N = 1 in the BLG theory, take φ = X 8 + , K ′ = 1 and N = 1 in the (gauged) BF membrane theory, and take φ = X 8 and K ′ = K in the ABJM theory. Thus, the relevant Lagrangian is Using the results of the novel Higgs mechanism in the Section III and neglecting the commutator terms for the A a µ field strength, we obtain Moreover, we use the following identity for the 3 × 3 matrices that is proved in the where "Str" is the symmetrized trace that acts on the gauge group indices, and "det" acts on the world-volume coordinate indices. Integrating out the auxiliary field u, we obtain the Lagrangian for multiple D2-brane effective action However, the well-known Lagrangian for the multiple D2-brane DBI action is [90] where c 0 is a constant. Because in general the Lagrangian in Eq. (46) is not equivalent to that in Eq. (47), we still can not get the correct F 4 terms for generic case.
Interestingly, for gauge symmetry SU(2) × SU(2) in the BLG theory, or the (gauged) BF membrane theory, or the ABJM theory, we indeed can get the correct F 4 terms. Let us prove it in the following. From the Lagrangian in Eq. (46), we obtain Expandind the above Lagrangian, we have the relevant Lagrangian for pure Yang-Mills fields at the Minkowski space-time limit From the known effective action for multiple D2-branes, the relevant Lagrangian for pure Yang-Mills fields up to the F 4 terms is [90] where c 1 = π 2 α ′2 c 0 . For gauge group SU(2), we obtain Therefore, neglecting the commutator terms and rescaling the gauge fields, we can show that the correct F 4 terms in the effective D2-brane action in Eq. (49) from the two M2-branes in the BLG and ABJM theories are equivalent to these in the known DBI action in Eq. (51).
In short, we can generate the correct F 4 terms in the effective D2-brane action from the BLG theory and the ABJM theory with gauge group SU(2) × SU (2). However, we can not get the correct F 4 terms from the ABJM theory with U(N) × U(N) and SU(N) × SU(N) gauge symmetries for N > 2. It seems to us that the reasons are the following: the BLG theory and the ABJM theory with gauge group SU(2) × SU(2) have three-dimensional N = 8 superconformal symmetry while the ABJM theory with U(N) × U(N) and SU(N) × SU(N) gauge symmetries for N > 2 might only have three-dimensional N = 6 superconformal symmetry [72]. However, for the (gauged) BF membrane theory, although the constraint ∇ µ X 8 + = 0 is still satisfied, it might be equivalent to three-dimensional N = 8 supersymmetric Yang-Mills theory. In particular, for the (gauged) BF membrane theory with SU(2) × SU(2) gauge symmetry, we can generate the correct F 4 terms since it is similar to the corresponding BLG and ABJM theories.
V. DISCUSSION AND CONCLUSIONS
Using the novel Higgs mechanism, we considered the derivation of the multiple D2-brane effective action for pure Yang-Mills fields from the multiple M2-branes in the BLG theory and the ABJM theory. We showed that the high-order F 3 and F 4 terms are commutator terms, and we argued that all the high-order terms are commutator terms as well. Thus, these high-order terms do not contribute to the multiple D2-brane effective action. In order to generate the non-trivial high-order F 4 terms and inspired by the derivation of one D2brane from one M2-brane, we considered the curved M2-branes and introduce an auxiliary field. In particular, the VEV of the scalar field in the novel Higgs mechanism depends on the auxiliary field. After we integrate out the massive gauge fields and auxiliary field, we obtain the correct high-order F 4 terms in the D2-brane effective action from the BLG theory and the ABJM theory with SU(2) × SU(2) gauge group. However, we still can not obtain the correct F 4 terms in the generic ABJM theory with gauge groups U(N) × U(N) and SU(N) × SU(N) for N > 2. This might be related to the possible fact that the SU(2) × SU(2) gauge theory has three-dimensional N = 8 superconformal symmetry while the U(N) × U(N) and SU(N) × SU(N) gauge theories for N > 2 might only have threedimensional N = 6 superconformal symmetry. We also briefly comment on the (gauged) BF membrane theory.
Acknowledgments
This research was supported in part by the Cambridge-Mitchell Collaboration in Theoretical Cosmology (TL). | 4,803.4 | 2008-07-08T00:00:00.000 | [
"Mathematics"
] |
Random Matrix Analysis of Ca2+ Signals in β-Cell Collectives
Even within small organs like pancreatic islets, different endocrine cell types and subtypes form a heterogeneous collective to sense the chemical composition of the extracellular solution and compute an adequate hormonal output. Erroneous cellular processing and hormonal output due to challenged heterogeneity result in various disorders with diabetes mellitus as a flagship metabolic disease. Here we attempt to address the aforementioned functional heterogeneity with comparing pairwise cell-cell cross-correlations obtained from simultaneous measurements of cytosolic calcium responses in hundreds of islet cells in an optical plane to statistical properties of correlations predicted by the random matrix theory (RMT). We find that the bulk of the empirical eigenvalue spectrum is almost completely described by RMT prediction, however, the deviating eigenvalues that exist below and above RMT spectral edges suggest that there are local and extended modes driving the correlations. We also show that empirical nearest neighbor spacing of eigenvalues follows universal RMT properties regardless of glucose stimulation, but that number variance displays clear separation from RMT prediction and can differentiate between empirical spectra obtained under non-stimulated and stimulated conditions. We suggest that RMT approach provides a sensitive tool to assess the functional cell heterogeneity and its effects on the spatio-temporal dynamics of a collective of beta cells in pancreatic islets in physiological resting and stimulatory conditions, beyond the current limitations of molecular and cellular biology.
Even within small organs like pancreatic islets, different endocrine cell types and subtypes form a heterogeneous collective to sense the chemical composition of the extracellular solution and compute an adequate hormonal output. Erroneous cellular processing and hormonal output due to challenged heterogeneity result in various disorders with diabetes mellitus as a flagship metabolic disease. Here we attempt to address the aforementioned functional heterogeneity with comparing pairwise cell-cell cross-correlations obtained from simultaneous measurements of cytosolic calcium responses in hundreds of islet cells in an optical plane to statistical properties of correlations predicted by the random matrix theory (RMT). We find that the bulk of the empirical eigenvalue spectrum is almost completely described by RMT prediction, however, the deviating eigenvalues that exist below and above RMT spectral edges suggest that there are local and extended modes driving the correlations. We also show that empirical nearest neighbor spacing of eigenvalues follows universal RMT properties regardless of glucose stimulation, but that number variance displays clear separation from RMT prediction and can differentiate between empirical spectra obtained under non-stimulated and stimulated conditions. We suggest that RMT approach provides a sensitive tool to assess the functional cell heterogeneity and its effects on the spatio-temporal dynamics of a collective of beta cells in pancreatic islets in physiological resting and stimulatory conditions, beyond the current limitations of molecular and cellular biology.
INTRODUCTION
Pancreatic islets are collectives of endocrine cells. Based on the end hormone that these cells exocytose in a Ca 2+ -dependent manner after being stimulated, several types of cells have been described to compose an islet, with 3 dominant cell types: alpha, beta and delta (Briant et al., 2017). Islets from different parts of pancreas can contain different fractions of each of these cell types, but the bulk cellular mass in a typical islet is in an non-diabetic organism composed of a collectives of insulin secreting beta cells Rorsman and Ashcroft, 2017). Early studies assumed these beta cell collectives to be a rather homogeneous population of cells, however, the subsequent functional analyses have revealed a remarkable degree of heterogeneity even in dissociated beta cells in culture. The beta cells were found to differ in a number of physiological parameters, among others in glucose sensitivity and Ca 2+ oscillation pattern (Zhang et al., 2003), electrical properties (Misler et al., 1986), redox states (Kiekens et al., 1992), or pattern on cAMP oscillations (Dyachok et al., 2006). These early quests (Pipeleers, 1992) have been mostly searching for morphological, physiological and molecular features that would presumably satisfy at least 3 criteria: (a) entitle special roles for individual cells within the collectives, (b) remain valid even after cell dissociation, and (c) enable to trace embryonic and postnatal development as well as changes during pathogeneses of different forms of diabetes. Recent onset of efficient high-throughput analyses has catapulted these approaches on mostly dissociated cells to a completely new level and enabled identification of a multitude of functional and nonfunctional subpopulations with their functional characteristics, gene and protein expression, incidences and diabetes-related changes (for a recent review refer to Benninger and Hodson, 2018). A major limitation of these analyses is that the subpopulations described in different studies have relatively little in common and currently their translational relevance is weak. This is, however, not surprising since these approaches primarily deal with sample averages and present merely a small number of discrete snapshots of a very dynamic complex activity. Nevertheless, they turned out to be extremely useful in initial attempts to construct a pseudotime map of the sequence of events in pancreatic endocrine system (Damond et al., 2019) and its interaction with the immune system (Wang et al., 2019) during the progression of type 1 diabetes mellitus (T1D) in humans. Still, better tools are needed to assess the general rules underlying functional heterogeneity in a real-time spatio-temporal dynamics within collectives of beta cells or collectives of any other cell types in an organism. Instead of an ultra-reductionist approach to precisely associate individual molecular markers yielding more or less random correlations in respect to functional heterogeneity, a network of cells is first segmented to their level of functional influence expressed in high deviating eigenvalues to lead a more efficient further discovery of few collective parameters which may also have an identifiable molecular signature. Pancreatic islets have been described as broad scale network (Stožer et al., 2013b) and the search for important principal components is well justified.
The fact is that most of what we know about pancreatic beta cells has been gained by studying dissociated beta cells in cell culture. Therefore, even phenomena that can only be observed in isolated groups of electrically coupled beta cells, like electrical activity (Rorsman and Trube, 1986) or cytosolic Ca 2+ oscillation, are currently still mostly modeled within the framework of a single cell excitability (Sherman et al., 1988;Bertram et al., 2018). However, each beta cell interacts with several immediate and distal neighboring cells in a pancreatic islet, implicating high-ordered interactions between a large number of elements. Therefore, there is a rich exchange of signals within such a beta cell collective, both through direct cell-cell coupling (Bavamian et al., 2007) as well as through paracrine signaling (Caicedo, 2013;Capozzi et al., 2019). Such an organization necessarily yields a complex response patterns of cell activity after stimulation with physiological or pharmacological stimuli. Until recently, the richness of the aforementioned cell-cell interactions also could not be experimentally documented. However, recent technological advancements made it possible to use various optical tools to address these issues (Frank et al., 2018). For example, the functional multicellular imaging (fMCI) enabled completely new insights into our understanding of a beta cell in an islet as a biological network (Dolenšek et al., 2013;Stožer et al., 2013a,b). The dynamics of a measurable physiological parameter can namely be recorded in hundreds of beta cells within their intact environment (Speier and Rupnik, 2003;Marciniak et al., 2014) simultaneously. The measured oscillatory cytosolic Ca 2+ concentration changes, which are required to drive insulin release turned out to be a practical tool to trace cellular activity and fundamental to study their interactions in such big collectives (Dolenšek et al., 2013;Stožer et al., 2013a,b). With the use of the tools of statistical physics we and others reconstructed, for example the complex network topologies in beta cell activation, activity and deactivation during transient glucose challenges (Stožer et al., 2013b;Gosak et al., 2015;Markovič et al., 2015;Johnston et al., 2016). As in some other, previously analyzed biological systems, also for the pancreatic islets, the minimal model incorporating pairwise interactions provides accurate predictions about the collective effects (Schneidman et al., 2006;Korošak and Slak Rupnik, 2018).
Along these lines we have recently shown that beta cell collectives work as a broad-scale complex networks (Stožer et al., 2013b;Markovič et al., 2015;Gosak et al., 2017a), sharing similarities in global statistical features and structural design principles with internet and social networks (Milo et al., 2002;Barabási and Márton Pósfai, 2016;Daniels et al., 2016;Perc et al., 2017;Duh et al., 2018). In addition to complex network description when strong cell-cell interaction are primarily taken into account, the analyses of weak pairwise interaction enabled us to use a spin glass model , as well as the assessment of self organized criticality (Bak, 2013;Marković and Gros, 2014;Gosak et al., 2017b;Stožer et al., 2019), also often found in biological samples (Schneidman et al., 2006). The important result from these functional studies is that a faulty pattern of hormone release due to deviating numbers of individual cell types or changes in their function lead to one of the forms of a large family of metabolic diseases called diabetes mellitus (American Diabetes Association, 2014; Pipeleers et al., 2017;Skelin Klemen et al., 2017;Nasteska and Hodson, 2018;Capozzi et al., 2019).
The basic object that we study here is the correlation matrix C with elements computed from measured Ca 2+ signals: where y i (t) is the i-th time series of Ca 2+ signal out of N signals measured simultaneously in a collective of pancreatic beta cells. Random matrix theory (Guhr et al., 1998;Mehta, 2004) (RMT) is concerned with statistical properties of matrices with random elements. Applying RMT to correlation matrices, we study the spectrum of the correlation matrix C given by the set of its eigenvalues λ n : where u n are the corresponding eigenvectors.
Statistical properties of the spectra of random correlation matrices for N uncorrelated time series with M random elements where q = N/M is finite in the limit N, M → ∞ are known analytically (Marchenko and Pastur, 1967;Bun et al., 2017). The eigenvalue probability density is: where the spectral boundaries are: When the spectrum of the correlation matrix is unfolded (Guhr et al., 1998) by mapping eigenvalues λ k → ξ k so that the probability density of the unfolded eigenvalues is constant ρ(ξ ) = 1, the RMT predicts that the distribution P(s) of nearest neighbor spacings s k = ξ k+1 − ξ k is approximately given by the Wigner surmise (Mehta, 2004): Possible pair correlations in the eigenvalue spectrum on scales larger than nearest neighbors can be revealed with the use of variance of n ξ (L), the number of eigenvalues in the interval of length L around eigenvalue ξ . This number variance (Mehta, 2004) is given by: If the eigenvalue spectrum is poissonian the number variance is 2 (L) ∼ L, while real, symmetric random matrices exhibit correlated spectra for which RMT predicts 2 (L) ∼ log L (Mehta, 2004). Previous work using RMT in different systems, e.g., on statistics of energy levels of complex quantum systems (Guhr et al., 1998;Mehta, 2004) or correlations in financial markets (Plerou et al., 2002) identified that a bulk of the eigenvalue spectrum agreed with RMT predictions, which suggested a large degree of randomness in the measured cross-correlations in these systems. Only a small fraction of typically a few percent of eigenvalues were found to deviate from universal RMT predictions and were instrumental to identify system specific, non-random properties of the observed systems and yielding key information about the underlying interactions. Biological systems are often complex with large number of interacting parts, high dimensional systems that are basically black boxes "in which a large number of particles are interacting according to unknown laws" (Dyson, 1962). One way to approach such high dimensional systems is to look at the spectrum of the covariance matrix and try to find few principal components that describer most of system variance. This method, principal component analysis (PCA), has been suggested as "'hypothesis generating' tool creating a statistical mechanics frame for biological systems modeling" (Giuliani, 2017). PCA works best for systems where we can find few eigenvalues in the covariance spectrum well separated from the bulk and the system can be described in low dimensional space. Usually, there is no clear separation in the eigenvalue spectrum and other methods such as RMT or methods using renormalization group approach (Bradde and Bialek, 2017) are more suitable. In biological systems, RMT has been used to filter out critical correlations from data-rich samples in genomic, proteomic and metabolomic analyses (Luo et al., 2006;Agrawal et al., 2014), as well as in brain activity measured by EEG (Šeba, 2003) and dynamic brain networks constructed from fMRI data (Wang et al., 2015(Wang et al., , 2016. While eigenvalue spacing distributions showed agreement with RMT predictions, the number variance distributions often display deviations pointing to the physiologically relevant reduction in correlated eigenvalues fluctuations with partially decoupled components transiting toward Poisson distribution (Šeba, 2003). Such transitions have also been used as an objective approach for the identification of functional modules within large complex biological networks (Luo et al., 2006). Additionally, as for protein-protein interactions in different species, these latter deviation from RMT predictions has been interpreted as an evidence to support the prevalence of non-random mutations in biological systems (Agrawal et al., 2014).
In this paper we used the RMT approach to test the cross-correlations in the cytosolic Ca 2+ oscillations under nonstimulated and glucose stimulated conditions. We demonstrate that statistical properties of cross-correlations based on functional multicellular imaging data follows those predicted by RMT, with both high-and low-end deviating eigenvalues, suggesting local as well as global modes driving this correlation in functional islet. In addition, our results show that the long range correlations in eigenvalue spectrum deviate in a stimulus dependent manner.
DATASET DESCRIPTION
We define beta cell collective activity to sense nutrients and produce metabolic code as the relevant constraining context for the physical outcomes of analysis (Ellis and Kopel, 2018;Korošak and Slak Rupnik, 2018). Our data consist of a typical Ca 2+ activity recorded by multicellular imaging on an islet of Langerhans of fresh mouse pancreatic slice. All data analysis has been performed using custom scripts in Python 3.5 software and customized scripts (RMThreshold) in R software. We used raw data for each calcium signal, but we detrended the signals to remove possible sources of spurious correlations due to systematic slow variations caused by the imaging technique. A common problem in the analysis of fMCI Ca 2+ signals in living tissue is selection of regions of interest corresponding to a true signal originating from a cytosolic area of an individual cell and not two or more neighboring cells. In practice the reproducibility of the results depends on the level of experience of the operator to subjectively recognize structure from the patterns of activity. While we are primarily interested in the activity of a large population of cells, their interactions/correlations and their collective response, it is crucial that this signals originate from regions of interest that FIGURE 1 | (Left) Correlation coefficient distribution of N = 4000 Ca 2+ signals randomly chosen from experimental dataset (outer, black line). Dashed blue line is the distribution of correlations between residuals after removing the influence of the largest eigenvalue in signals. (Right) Distribution of distances between signals randomly chosen from Ca 2+ imaging data. Black line is the experimental data, thinner red line is the theoretical distribution of distances between random points in a square (Philip, 2007). correspond to individual cells. Collectives of cells, like beta cells in the islet of Langerhans are densely packed structures, where extracellular space and the cell membrane represent a relatively small to negligible cross-section area on the image of twodimensional optical section obtained by confocal microscopy. Therefore we decided to avoid the aforementioned subjectivity issue by the random sampling of pixel level signals in the recorded time series. For this analysis we randomly selected N = 4000 signals out of the complete dataset of 256 × 256 signals each M = 23,820 timesteps long (about 40 min recordings at 10 Hz resolution). Glucose concentration was changed during the recording from 6 mM (lasting about first 5,000 timesteps) to 8 mM and back to 6 mM (approx. last 5,000 timesteps) near the end of experiment.
The source of correlations in a cell population where the terminal action is a calcium-dependent process (e.g., exocytosis of insulin in beta cells) are the individual events in a form of plasma membrane ion channel or transporter activity, internal membrane ion channel or transporter activity, as well as calcium leak from activated immediate neighboring beta cell (Berridge et al., 2000). The correlations between the activities of beta cells depend strongly upon the glucose concentration (Dolenšek et al., 2013;Markovič et al., 2015), however in the physiological plasma glucose range (6-9 mM), most correlations are weak , so that the probability of detecting coactivation basically equals the product of the probabilities of activities of individual beta cells. The correlations are statistically significant for almost all pairs of immediate neighbors.
RESULTS
The distribution of correlation coefficients reveals that most of the correlations between the pairs of Ca 2+ signals are weak, but there is also non-negligible contribution of highly correlated pairs of signals (Figure 1, left, black outer line). We also checked the sampling procedure by comparing the computed distribution of distances between pairs of randomly chosen points from 256 × 256 image square to the analytical probability distribution of distances between two random points in a square (Philip, 2007) (Figure 1, right). We found a perfect match between the distance distribution computed from data and the theoretical distribution, confirming that our random sampling of data points was non-biased.
Guided by the observed non-gaussian nature of correlation distribution (Figure 1, left) we explored a detailed structure of the correlation matrix, since distribution of correlation coefficients only partially hints to the nature of cell to cell coordination. To this end we computed the eigenvalues and eigenvectors of the correlation matrix (Equation 2) and compared the obtained eigenspectrum with the RMT prediction. In Figure 2 (top left) we show the distribution of eigenvalues that belong to the empirical correlation matrix (black trace) and the RMT prediction (red line) given by Equation (3). While most of the eigenvalues falls within the limits λ ± of the RMT spectrum, there are also significant deviations from RMT prediction. We found the largest empirical eigenvalue λ max two orders of magnitude away from the upper limit of the RMT spectrum, and also a part of the empirical spectrum that extends below the lower RMT limit. To see if the deviations from the RMT are inherent to the measured Ca 2+ signals, we prepared a surrogate dataset by randomly shuffling each signal's time series. We then computed the correlation matrix and its eigenvalue spectrum from randomized surrogate dataset. As shown in Figure 2 (top right), the match between the eigenvalue distribution of randomized dataset and RMT is perfect.
Previous RMT analysis of stock correlations in financial markets consistently showed (Laloux et al., 1999;Plerou et al., 1999Plerou et al., , 2002 that the distribution of components of the eigenvector u max corresponding to largest eigenvalue λ max strongly deviates from Gaussian form, suggesting that this mode reflects the collective response of the system to the stimuli. In our case this corresponds to collective response of beta-cells to glucose stimulus. In a linear statistical model for Ca 2+ signals, we model the response common to all beta-cell with Y(t) and the signals are expressed as: where δy i (t) is the residual part of each signal. Coefficients a i , b i are obtained by regression. Following Plerou et al. (2002) we approximated the common response Y(t) with the projection of all signals on the largest eigenvector: where u max,i is the i-th component of the eigenvector corresponding to largest eigenvalue λ max . To see the influence of the collective response to the distribution of correlation coefficents, we computed using Y = y max the residuals δy i (t) for all N signals and their correlations C res(i,j) = Corr(δy i , δy j ). The dashed blue line (inner trace) on Figure 1 (left) shown the distribution of C res and reveals that the collective response predominantly contributes to large correlations.
To test further if the largest eigenvalue and the corresponding eigenvector capture the collective calcium response we compared the average signal y(t) = 1/N j y j (t) with y max . The correlation between signals projected on the largest eigenvalue mode and mean signal was high: Corr(y max , y) ≈ 0.8, confirming the expectation that the largest eigenvalue represents collective effect. Similarly, we checked how similar are the signals corresponding to the bulk RMT eigenvalues: where λ i is the eigenvalues from the RMT interval [λ + , λ − ].
The computed correlation between signals projected on bulk eigenvectors and mean signal, averaged over all signals was < Corr(y bulk,i , y) >= −0.0044 ± 0.0047 suggesting no correlation between the mean signal and signals coming from the bulk RMT regime. To further characterize the eigenvector structure of the empirical Ca 2+ correlation matrix, we looked at the inverse participation ratio (IPR) of eigenvector u(λ) corresponding to eigenvalue λ defined as (Plerou et al., 1999(Plerou et al., , 2002: The value of 1/I(λ) reflects the number of nonzero eigenvector components: if an eigenvector consist of equal components u(λ) i = 1/ √ (N) then 1/I(λ) = N, in other extreme case 1/I(λ) = 1 when an eigenvector has one component equal to 1 and all others are zero. Figure 2 (lower left) shows the computed values of IPR for all eigenvectors as function of corresponding eigenvalue. The red datapoints are the IPR data computed for the surrogate, randomized timeseries data for which we found 1/I ∼ N as expected. We found similarly values 1/I ∼ N for the largest eigenvalues of the empirical spectrum (black datapoints) suggesting that to this eigenvectors almost all signals contribute. Deviations from flat RMT prediction at the edges of the RMT spectrum ([λ + , λ − ] interval, vertical dashed lines) with large I(λ) values suggests that these states are localized with only a few signals contributing. This points to a complex structure of the empirical correlation matrix C with coexisting extended and localized eigenvectors similar to one found in correlations in financial markets (Plerou et al., 1999(Plerou et al., , 2002). In addition, as shown in Figure 2 (lower right, open dots), we observe a scaling behavior in rank-ordered plot of eigenvalues of empirical correlation matrix that has been connected with a fixed point in renormalization group sense (Bradde and Bialek, 2017;Meshulam et al., 2018). For comparison, we plot also the rankordered eigenvalues of randomized data (open squares) and RMT prediction based on eigenvalue density given by Equation (3) (full line) which perfectly describes the randomized dataset. The observed scaling of eigenvalues hints toward the critical behavior that was conjectured for beta-cell collective at the transition from (Right) Number variance of eigenvalue spectra of calcium signals. Open squares: shuffled, randomized data; open dots: 8 mM glucose; crosses 6 mM glucose; full line RMT prediction Σ 2 (L) = 1/π 2 (log(2πL) + 1 + γπ 2 /8) (Mehta, 2004); dashed line Poissonian limit Σ 2 (L) = L.
To explore the statistical differences of signals in nonstimulated and stimulated phase, we separated the original data into two groups of N signals each with M = 10 4 timesteps corresponding to response to 6 and 8 mM glucose stimuli. For each group we computed the unfolded eigenvalue spectra and also for randomized data. The results for the nearest-neighbor spacing and number variances are shown in Figure 3. For nearest-neighbor spacing distribution we find a good agreement with the RMT prediction both, for nonstimulatory and stimulatory conditions, as well as shuffled stimulated data. All three datasets are well described with the Wigner surmise (Equation 5), so nearest-neighbor spacing does not seem to be sensitive to stimuli changes. On the other hand, however, the number variance is sensitive to stimuli change already during physiological stimulation of the beta-cell collective. The random matrix prediction is in this case valid for shuffled stimulated data only (Figure 3, right).
DISCUSSION
The unique spatio-temporal resolution of functional multicellular imaging and sensitivity of advanced statistical approaches for a plethora of different modes of complex network scenarios and levels of criticality, makes these approaches a method of choice to assess the nature of cell-cell interactions under different stimulation conditions. At the same time it enables us to test the validity of experimental designs for study of beta cell function, primarily in the domain of stimulation strength and dynamics. We suggest that without such validation the most critical events in the activation chain within the beta cell collective have been and shall be overlooked or misinterpreted (Stožer et al., 2019). The predominant use of supraphysiological glucose concentration can namely severely deform the relatively slow beta cell recruitment in a collective at physiological glucose concentrations (Stožer et al., 2013a;Gosak et al., 2017a) and miss the typically segregated network clusters of Ca 2+ events (Benninger and Piston, 2014;Markovič et al., 2015;Westacott et al., 2017), turning critical behavior into disruptive supracritical activity (Gosak et al., 2017b;Stožer et al., 2019). Only under rather narrow physiological conditions it shall be possible to extract the fine structure of cell-cell interactions causing long-term and efficient cell collaboration with the collective. Breaking apart this delicate structure of cell-cell interaction does result in a massive activity, which can be readily described by tools of statistical physics, but this activity does not necessarily serve its physiological or biological purpose (Ellis and Kopel, 2018).
A common denominator of the previous attempts to categorize different beta cell types points to some metabolic and secretory features that can be either reproduced between different classifications or not. Usually there exist a bulk of one subtype and one or more less frequent subtypes (Benninger and Hodson, 2018). These less frequent subtypes can nevertheless have important regulatory roles that may not be immediately apparent. This issue is particularly critical if the frequency of a beta cell subtype represents only a couple of percent of the entire beta cell population in an islet. Along these lines there have been some indications regarding the beta cell subtypes that can serve as pacemakers or hubs within a dynamic islet cell network (Johnston et al., 2016;Lei et al., 2018), however due to the nature of complexity of network features, we may still be short of evidence for definitive conclusions. The full description of heterogeneity of endocrine cells within an islet, ultimately producing a adequate release of hormones is therefore still lacking. In trying to grasp this complexity, it is important to take into account interaction of beta cell collectives with other cell types in and around an islet, like glucagon-secreting alpha cells (Svendsen et al., 2018;Capozzi et al., 2019) or somatostatin secreting delta cells (Rorsman and Huising, 2018) as well as neurons and glial cells (Meneghel-Rozzo et al., 2004), but also endothelial, immune cells (Damond et al., 2019), as well as acinar and ductal cells (Bertelli and Bendayan, 2005).
Random matrix theory is a fitting mathematical framework which provides powerful analytical tools to separate cellcell interactions happening by chance from those produced by specific coordinated interactions after a changed chemical composition of cell's surrounding. In the financial sector, adequate asset allocation and portfolio risk-estimation can lead to a higher profit and is therefore clear why it makes sense to invest time into cross-correlation analyses (Plerou et al., 2002). But what would be the gain of knowing that randomness of cellcell correlation matrices is physiologically regulated? Firstly, we suggest that the analysis of the universal properties of empirical cross-correlations is a valuable tool to identify distinct types and further subtypes of endocrine cells within an islet through their non-local and local effects. The largest eigenvalue of C namely represents the influence of non-local modes common to all measured Ca 2+ fluctuations. Other large eigenvalues can be used to address cross-correlations between cells of the same type, cells with specific functions in the collective or that these cells reside in topologically similar area of the islet. Quantifying correlations between different beta cells in an islet is therefore an exciting scientific effort that can help us understand cell communities as a complex dynamical system, estimate the amount of factors ruling the system or potential presence of a stress situation (Gorban et al., 2010).
The large values of inverse participation ratio (IPR) (Figure 2, bottom left) compared to the IPR values in the bulk, indicate that only a few cells contribute to these eigenstates with eigenvalues at the edges of the RMT bulk spectrum. In contrast, all cells contribute to the eigenstates corresponding to the largest eigenvalues. This means that we find delocalized states for the largest eigenvalues and localized states as we move toward the RMT edge of the spectrum. Similar findings were recently reported in RMT analysis of single-cell sequencing data (Aparicio et al., 2018), where the spectrum of covariance matrix of single-cell genomic data followed RMT predictions with deviations at the bulk edge. The localized states at the edge of the bulk spectrum were connected with the true biological signal.
Our results show that the number variance reflecting the correlation between subsequent eigenvalues (a measure for long range correlations in eigenvalue spectrum) follows the RMT predictions up to a certain distance L, however at larger distances it starts to deviate in a stimulus dependent manner, suggesting structural features in the beta cell network. Transitions between Poissonian and GOE statistics in biological systems have been previously described during the process of either integration or segregation of complex biological networks, showing various degrees of long range correlations at various physiological conditions (Luo et al., 2006). This understanding has a vital practical value since it can help decipher different roles that beta cells can play in a collective and to further validate the importance, if any, of previously defined and continuously appearing novel molecular markers of beta cell heterogeneity (Benninger and Hodson, 2018;Damond et al., 2019;Wang et al., 2019). An advanced knowledge about the dynamic properties of the functional cell types will shed a new light into understanding of physiological regulation of insulin release and the assessment of perils of stimulation outside of the physiological range. Furthermore, it can help us elucidate the mechanisms on how this function changes during the pathogenesis of different forms of diabetes mellitus and lead us to novel approaches of therapy planning and prevention. And finally, it can help us understand the general principles ruling the interactions in collectives of other cell types.
DATA AVAILABILITY STATEMENT
The raw data used in this research is available upon request to the corresponding author.
AUTHOR CONTRIBUTIONS
All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication. | 7,187.6 | 2019-03-29T00:00:00.000 | [
"Biology"
] |
Impact of MTA blend % in melt spinning process and polyester properties
In Polyethylene terephthalate (PET) processing the raw materials are purified terephthalic acid (PTA), moderately purified terephthalic acid (MTA), monoehalene glycol (MEG). The processing of PTA is very difficult and costly. For reducing PTA percent we are using different percentages of MTA blend in PET processing. MTA affected the properties of polyester and melt spinning process. The properties such as elongation, tenacity, molecular chain length, b-color and IV, thermal properties, % carboxyl groups will change with MTA percentage. MTA percentage also affects fiber breakage percentage s, the melting point of PET. FTIR results show a change in chemical composition. Particle size, 4-CBA content of MTA affects the properties of the fiber.
Introduction
Polyethylene terephthalate (PET) is an outstanding barrier material, it also a thermoplastic polymer resin of the polyester family. PET is widely used to made plastic bottles for soft drinks and its film is frequently used in tape applications because it has extraordinary mechanical strength properties. PET fibers are the most widely produced synthetically organic fibers and more than 60% of PET production is for synthetic fibers. PET fibers are mainly used for both traditional textile purposes and progressively for industrial applications. These applications are rapidly increasing later; mainly these cables find usage in mooring ropes for boats, ships. These cables consist of many millions of fine synthetic fibers and these have breaking loads of up to two thousand tons [1,2].
In the creation of most polyester applications, the purified terephthalic acid (PTA) development initiated the extensive replacement of dimethyl terephthalate (DMT [3]. Purified terephthalic acid is very cheaper, easier to transport, and easier to handle than DMT, and which allows for faster reaction times than DMT, turn out to be the raw material of choice in most new polyethylene terephthalate based plants and affected a revamping of already existing plants [3,4]. A quit recent improvement in terephthalic acid (TA) procedure technology is the most availability of so-called medium quality terephthalic acid, variously known as EPTA, QTA and MTA [4,5]. MTA is now production inroads in the polyester markets. Eastman and Mitsubishi Chemical are offering their own proprietary medium quality terephthalic acid process without limitations, and offering their technology EPTA and QTA (qualified terephthalic acid) for license respectively [4]. In addition, Sam Nam in Korea producers is marketing QTA for all polyester applications, including bottle, film and fiber. In Asian countries, Indo Rama Synthetics (I) Ltd introduce MTA blend in polyester fibers. MTA production costs are lower than of PTA, with this cost advantage (discount $20-$30 per ton) MTA
PET processing
PET processing is two routes one is the DMT route [3], another one is the PTA route [6]. The PTA route, which is easier to handle, transport and cheaper, and allows for very quick reaction times than the DMT route. That's why the PTA route is preferable in industrial applications [6][7][8][9][10][11]. In the PTA route, mono ethylene glycol and purified terephthalic acid (PTA), and moderately purified terephthalic acid (MTA) are used as raw materials to make PET. Impurity levels of PTA and MTA are listed in Table 1. Here MTA mixed with different percentages are 0 to 25% with the step of 5%. The main steps involved in PET processing (shown in Fig. 1a) are first one raw material preparation in this raw material continuously fed together with the catalyst solution into the paste preparation vessel. In the paste preparation vessel, a chemical reaction does not take place. Second one esterification or transesterification, in this step the esterification is carried out in two reactors. The paste is transferred from the paste preparation vessel into the first esterification reactor, where the reaction proceeds to more than 90% conversion. In the second reactor, the conversion reaches more than 96% conversion. Third step is pre polycondensation, the prepolycondensation is carried out in the prepolycondensation reactor, in this the esterification degree increases to more than 99%, and the chain length of the polymer increases. The final step is polycondensation, in this step, the polycondensate is transferred into the disc ring reactor, which functions as the main polycondensation stage. To start the polycondensation reaction, antimony is used as a catalyst. The product from the esterification step is then sent to the pre polycondensation unit where the reaction takes place under a vacuum. The pre-poly condensation product is served to the final polycondensation reactor which controls under vacuum and amplified temperature; after that, the polyester melt is managed into different forms like fibers, chips, filaments, etc.… Melted polymer is transported to spinning blocks under pressure (100-200 bars) through metering pumps. Then the polymer melt forced through a fine filter. Filtration of PET molten polymer before it enters into the spinneret further homogenizes melt; Filtration removes solid impurities like semisolid degraded gel, any metal pieces and also eliminates gaseous bubbles. Molten polymer extruded from a spinneret stretch. Stretching of the fiber in both solid and molten states provides for the alignment of the polymer chains along the fiber axis and solidifies between a spinneret and take-up in the melt spinning. After leaving the spinneret, molten filaments are cooled (quenched) and solidified by a laminar and uniformly controlled and conditioned airflow. A quench air enhances the solidification of fiber or filament and it helps to make uniform fiber/filaments. PET process in the melt spinning method is shown in Fig. 1b.
In the melt spinning process, some required processing variables are spinneret diameter should be 3 times more than polymer particle size (diameter). The tensile stress on the melt stream must never exceed the tensile strength of the filament. So take up velocity should not exceed the spinning speed [6]. The molecular weight is generally relatively low (16,000-19000 g/mol) in the case of polymer to be used as a fiber. LRV (laboratory relative viscosity) increases as polymer chain length grows as well as the molecular weight of the polymer increases. Resistance to the flow of polymer also increases. Oligomers and polymers are heated more than 320 °C, become black solid-like, non-pourable residue. When heated for more days (> 4 days at 280 °C or > 2.6 days at 300 °C) polymer or oligomer starts degradation. Finally, Cooling conditions also may impact the properties of polyester fiber.
Chemical analysis
In order to identify the chemical constituents and to quantify the same, FTIR (Fourier Transform Infra-Red Spectrophotometer) analysis for Polymer samples having 100%v PTA and at different MTA blend ratio. FTIR graphs were shown in Fig. 2 and FTIR study reveals the following findings listed in Table 2.
In addition polycondensation itself, even in the absence of oxygen certain thermal-induced degradation of PET melt always takes place. This degradation reaction catalyzed by metal ions, which causes chain breaks. Carboxyl end groups and vinyl ester are formed due to these chain breaks. These vinyl ester end groups can react with the hydroxyl end group of another polymer chain and acetaldehyde is formed as a by-product. The total effect of this side reaction is the substitution of hydroxyl end groups by carboxyl end groups and the formation of an equivalent quantity of acetaldehyde. The thermal induced degradation depends on the residence time (linear correlation) and temperature (exponential correlation) of the polymer. As acetaldehyde is formed during the entire process, it is found in all vapor systems and must be removed, because of its high vapor pressure and relatively low explosion limits. Acetaldehyde will be washed out in an off-gas scrubber [14,15].
The concentration of COOH groups in the final product is an indicator of the thermal degradation of the polymer. Because it is not possible to distinguish between COOH groups from non-esterified acid and those arising from thermal degradation, an assessment of polymer quality must be carried out.
This results show increasing the MTA percentage COOH group also increases because with increasing MTA % the content of metal particles also increases. These metal ions catalyzed the above reaction. The analysis shows increasing MTA blend % degradation of polymer increases.
Thermal study
The samples (fibers) of PET are prepared with different MTA were prepared by the method of melt spinning process intrinsic viscosity of these fibers were 0.628. DSC curves were recorded on a STAR e SW 9.20 DSC. The sample weighs about 3.5 g. the fibers were cut into fine grains, then were put into aluminum pans. An empty aluminum pan was used as reference material. Pure nitrogen is used as furnace atmosphere. The pans were put into the Furnace, DSC curves from ambient temperature to 20 °C above the melting point measured at a heating rate of 10 °C/min. DCS curves were obtained at a heating rate of 10 °C/min.
After reaching 300 °C, the pan was rapidly cooled to ambient temperature (30 °C). melting behavior of polymer depends on the specimen history and particular upon the rate at which the specimen is heated [16]. The melting behavior also depends upon the rate at which the specimen is heated. For eliminating cooling history differences, the thermal scan of the samples was carried out again. DSC results are shown in Figs. 3 and 4.
In Fig. 5, 'M.P1′ is a melting point at the first scan; 'M.P2′ is a melting point repeated scan of the same sample these values are listed in Table 3. The result shows that melting point/ heat of enthalpy and % crystallinity has no significant influence with the increase of MTA blend against 100% PTA sample. The melting point is ranging between 254 and 256.2 °C with no significant trend with an increase of MTA blend%. It also shows the melting point also depends on the cooling history of the sample.
b Color and Process Breaks in POY:
Effect of MTA blend ratio on polymer (POY) color and process breaks were studied and results are shown in Table 4.
With an increase in MTA Blend %, there is a slight increase in Polymer 'b' color because 4-CBA and metal content increases with increasing MTA blend %.
According to a so many technology producer and holder, 4-CBA below 500 ppm levels does not act as a discoloration agent [18,19], but slightly it was a host of additional impurities like conjugated polynuclear organics, and these are co-produced with 4-CBA; and those levels are associated with the 4-CBA content in terephthalic acid, this may cause yellowing [19]. Eastman Chemical patent identifies one more reason for discoloration is fluorenones [6]. Fluorenones that exist in terephthalic acid are mostly dicarboxy fluorenones with a very small amount of tri and mono carboxyfluorenones di-carboxybipheny. Di-carboxybenzophenone and dicarboxybenzil, are also existing. So, these all contaminations are hydrogenated to colorless materials. A color-b value is lower usually far better and it indicates that less yellow) [17]. For PTA, color-b value is less compare to color-b value of MTA.
There is a slight rise in process break, higher process break ending higher waste and lower full packages. The reasons for higher breaks with MTA could be higher concentrations of 4-CBA which act as a chain terminator as well as a branching agent. 4-CBA limit chain growth and molecular mass pileup, and it also cap the polyester chain in the polymerization process.
Carboxylic COOH group
MTA has a low Medium practical size and it causes an impact on PTA/MEG slurry Viscosity (higher) and it also impacts Mole Ratio.
MTA Particle Size is low as compared to PTA and it requires less temperature to react hence to maintain the same Carboxylic end group esterified temperature requires (shown in Table 5) to be reduced with an increase in MTA blend%. Metal content also acts as a catalyst to promote the reaction to lower temperatures [2,20].
Intrinsic viscosity
It has been observed that with a 10-20% MTA blend as compared with 100% PTA, the variability of intrinsic viscosity (I.V) of Polymer is slightly reduced.
Reasons for high variability may be due to higher 4-CBA which has a tendency to cap the Polymer chain and hence stopping the chain build up [6].
The thermal and oxygen-induced degradation of PET generates, In addition, polycondensation itself, even in the absence of oxygen certain thermal-induced degradation of PET polymer melt always takes place. This reaction catalyzed by 'metal ions' , it causes chain breaks and as a result carboxyl and vinyl ester end groups. Due to the degradation reaction, the viscosity of the polymer decreases considerably. In order to maintain I.V process optimization and control are required.
Tensile properties
In order to understand the effect of MTA on PET properties, a study of average PET properties in terms of Draw Force, Elongation and Tenacity of various PET products was Table 6. The draw is a force that requires drawing the yarn or fiber in the melt spinning process without generating too many filament breaks. Draw is a force of the fibers is calculated using Dynafil M draw force tester. Wrap Reel is used to checking Denier/Count of Yarn. This Yarn Testing Instrument is used to accurately determine count and strength of the various yarns and fibers.
Elongation and Tenacity of fiber or fiber using statmat ME tensile tester, test speed 300 mm/min. test length of POY is 150 mm. pretensions is 0.15cN/tex or 1.33 g/d. Tensile testers using a range of sensitive load cells are used for determining breaking load and load elongation curve of yarn. The yarn is held between two jaws with the upper jaw connected to the load cell and the lower jaw traversed downwards at a constant rate of the traverse. Insertion of a new specimen into the clamps and clamping of the specimen at a pre-determined tension are done automatically. Automatic package changers (with up to 20 packages) are also provided with the tester so that after a prescribed number of tests are carried from a package, the package is automatically changed to a new package and insertion of new yarn to the clamps is automatically done. A series of high resolution load cells enable testing of yarn with strength between 100 to 1000cN. Software is provided for determining mean, maximum, minimum, S.D., CV, Confidence limits and a host of other useful information. A high resolution opto-electronic sensor measures the elongation of yarn. The tenacity of fiber is also dependent upon the following factors chain length of molecules in the fiber orientation of molecules size of the crystallites distribution of the crystallites gauge length used the rate of loading type of instrument used and atmospheric conditions. The PET properties have been compiled for the MTA = 10% and IV = 0.628 and PTA = 100% and IV = 0.628). The product-wise average results are as shown in Figs. 6, 7 and 8. The viscosity of the polymer decreases considerably with increasing MTA blent % because the 4-CBA, metal ion content increases with MTA%. The reason is 4-CBA cap the polymer chain, metal ions catalyze polymer degradation reaction. Because of these reasons intrinsic viscosity decrease, this reduces the draw force.
Average draw force, elongation and tenacity of pure PTA and 10% MTA blend polyester yarn sample shown in Figs. 9 and 10 respectively. These graphs show that the relation between draw force and properties of the yarn follows same trend in both cases.
The above results show that with a 10% MTA blends the draw force, tenacity and % of elongation have slightly reduced. The reason is the chain length of polymer reduced with increasing MTA blend%.
Conclusion
• Increasing the MTA percentage 4-carboxybenzaldehyde (4-CBA) is increase. It caps the polyester chain and limits molecular mass • Draw force, Tensile strength and elongation properties decrease with increasing MTA%. The reason is the chain length of polymer reduced with increasing MTA blend%. • FTIR analysis shows carboxyl group increases with MTA %. • DSC results shows MTA not effect on melting point. • With a 10-20% MTA blend, the slight impact on product quality observed and the same is within the acceptable quality norm. • Based on the above-detailed study and investigation, 10-20% MTA may be blended with PTA within confirming product quality. With increasing competition and decreasing margin in Polyester business use of MTA in small proportion is a viable proposition looking into cost-benefit of MTA. | 3,760.4 | 2021-01-23T00:00:00.000 | [
"Materials Science"
] |
MARES Project: Hydrographic data of the San Jorge Gulf from R/V Coriolis II cruise in 2014
PROMESse (Multidisciplinary program for the study of the ecosystem and marine geology of San Jorge Gulf and the coast of the Province of Chubut) was an international cooperation research program among the Ministry of Science and Technology (MINCyT), the National Scientific and Technical Research Council (CONICET), the Province of Chubut (Argentina) and the University of Quebec at Rimouski (UQAR/ISMER, Canada). Within the framework of this program two projects were carried out, MARES (Marine Ecosystem Health of the San Jorge Gulf: Present status and Resilience capacity) 5 and MARGES (Marine Geology). The main goal of MARES was to drive a comprehensive study of the dynamics of physical, chemical and biological parameters vitals for the San Jorge Gulf ecosystem. The observational component of this project consisted on a multidisciplinary oceanographic cruise on board of the research vessel Coriolis II in Feb. 2014 integrated by three legs designed to identify and characterize areas of high primary productivity, which will serve as indicators of the ecosystem health. This paper reports the hydrographic data collected during the second leg of the Coriolis II cruise. This leg 10 was aimed to study the frontal dynamics associated to a region of high tidal dissipation rate south of the Gulf, and to study the vertical displacements of the pycnocline at a fixed site in the center of the Gulf mouth. To this end, high-resolution data was collected in the southern tidal front, including quasi-continuous CTD vertical profiles, underway surface temperature and salinity, Scanfish II CTD and shipboard ADCP data. The data sets are available in the National Oceanographic Data Center (NODC) from NOAA. DOI: https://doi.org/10.7289/V5MP51J2 15
Introduction
The San Jorge Gulf (SJG) is a semi-open basin of approximately 40000 km 2 located between 45°S and 47°S with depths slightly over than 100 m in the central region (Fig. 1). Its broad mouth extends 230 km from Bahía San Gregorio to Cabo Tres Puntas along the meridian 65°45' W, connecting the Gulf with the Argentine Continental Shelf through a sill that increases in the S-N direction, reaching a maximum depth of ∼ 60 m near 46°48' S ( Fig. 1). This geomorphological feature in interaction 20 with tidal mixing produces changes of well-stratified conditions to well-mixed conditions within a few kilometers during the warm seasons.
The SJG circulation is driven by intense westerly winds and high amplitude tides (Palma et al., 2004;Tonini et al., 2006;Moreira et al., 2011). Estimates of tidal energy dissipation by bottom friction derived from numerical models results (Glorioso 1 andFlather, 1995, 1997;Palma et al., 2004;Moreira et al., 2011) suggest that most of the dissipation occurs at the mouth of the SJG, mainly in the southeast region. The dissipation rate is high enough to break up the seasonal thermocline and give rise to the formation of an intense tidal front. Due to its configuration and variability, the tidal front enhances the biological productivity nearby (Glembocki et al., 2015), plays a key role in the development of ecological processes and is closely related to fishery resources (Acha et al., 2015;Alemany et al., 2014). Studying the frontal variability, both spatial and temporal, is essential to 5 understand the mechanism responsible for that enhancement and to define main frontal properties related to biological effects.
Thus, the use of a high-resolution sampler system was key to evidence the high-frequency frontal variability (Carbajal et al., in review).
The main objective of MARES leg 2 was to evaluate the high-frequency variations of the Southern Tidal Front (STF) of the SJG. In order to achieve this purpose, a high-resolution sampling was carried out for a complete tidal cycle during three 10 tidal states (see Sect. 2.1). Knowledge of mesoscale variability is not only crucial to interpreting the biological influence of the fronts (Landeira et al., 2014), but it will also contribute to the establishment of new conservation strategies and the management of marine resources. In this article, we will describe the cruise design and the procedures used for the acquisition, calibration and the processing of the dataset obtained during MARES leg 2.
2 Field measurements and equipment 15 Two types of surveys were carried out between Feb. 4-10 2014 on board of the Canadian research vessel (R/V) Coriolis II during MARES leg 2 cruise: one located in the STF region and another in a fixed position near the center of the Gulf mouth.
While towed undulating vehicle systems have been used by investigators (Twardowski et al., 2005;Brown et al., 1996), the PROMESse program was the ideal framework for the application of new technologies in the Argentine Continental Shelf such as this vehicle, achieving unprecedented high-temporal and spacial-resolution data in the region, particularly in the frontal 20 zone. Table 1 summarizes the characteristics of the sensors used in each instrument and which are described in the following sections. Date and time from data sets are reported in Coordinated Universal Time (UTC).
Sourthern Tidal Front observations
Eighteen cross-front transects (six in late spring tide (Feb. 5), six in intermediate tide and six in early neap tide were occupied in the STF using a towed undulating vehicle EIVA Marine Survey Solutions model Scanfish II (http: 25 //aquaticcommons.org/3106/1/ACT_WR07-01_Tower_Vehicles.pdf), fitted with a modular CTD Sea-Bird Electronic (SBE) model 49FastCAT (16 Hz) and a combined fluorometer and turbidity sensor WetLabs model ECO FLNTU (8 Hz). The modular CTD has no memory nor internal batteries, and does not support auxiliary sensor inputs either. Therefore, the ECO FLNTU could not be directly associated with the CTD data and thus the fluorescence and turbidity signal was acquired without the corresponding date, time and position. Nevertheless, it was possible to link both signals to get the missing data in a post- 30 processing. The sections length ranged between 17.4 km to 63.1 km, which is equivalent to 1:09 h and 4:33 h of transit (see Table 2 for details). The sections occupied in late spring and early neap tide covered an area of approximately 29.8 km (NW-SE) by 15.1 km (NE-SW) during a semi-diurnal tidal cycle each (Figs. 1c and 1e, respectively), while the intermediate tide survey consisted of a single transect (T1) occupied six times, back and forth, also during a semi-diurnal tidal cycle (Fig. 1d).
Surveys detail above are shown in Fig. 1. The horizontal separation of Scanfish sawtooth profiles was approximately 81 m-291 m, the latter largely dependent on bottom depth and the condition in the sea surface, descending (ascending) the vehicle at an absolute rate of nearly 0.9 m s −1 .
5
On board, the towed vehicle was monitored through the roll and pitch sensors. The vehicle attitude was governed through two rear-mounted flaps and depths were provided by the CTD pressure sensor (Brown et al., 1996). The data collected with the Scanfish II provided a quasi-synoptic spatial and temporal resolution to characterize the influence of the high/low tide behavior and determine the front displacements relative to the phase of the tide.
Complementary observations
Plankton net and video plankton recorder (VPR) were performed during the full depth CTD-rosette casts in the FS (Sect. 2.2).
In addition, two sequential sediment traps were moored above and below the pycnocline during seven days (Feb. 7-13). The data sets of sediment traps, plankton nets, and VPR are not reported in this data collection.
An underway CTD Sea-Bird model SBE19plus (4 Hz) coupled with a Seapoint fluorescence sensor was used to identify the 20 position and orientation of the STF and remained operational throughout the entire cruise. Underway data were recorded every 10 s along the tracks.
Direct velocity measurements were collected with a Teledyne RD Instruments (TRDI) 150 kHz Ocean Surveyor hullmounted Acoustic Doppler Current Profiler (ADCP). The seabed map and the distribution of biological species in the water column were achieved using a hull-mounted scientific echo-sounder SIMRAD model EK60 working in multiple frequencies 25 (38 kHz, 120 kHz and 200 kHz).
CTD Vertical Profiles
An additional cross-frontal transect was occupied across the STF (on Feb. 5 at night and Feb. 8 at late afternoon) to study the biological and chemical characteristics of the water column. Each realization consisted of five CTD vertical profiles spaced at 30 distance intervals of ∼ 4.9 km, using a CTD Sea-Bird model SBE911plus, equipped with oxygen, pH, fluorescence, nutrients, photosynthetically active radiation (PAR), beam transmission, altimeter and pCO 2 sensors ( Table 1). The altimeter sensor was used to determine distance to the bottom. Most vertical profiles reached to within ∼ 9 m off the bottom. Oxygen, pH and fluorescence sensors were calibrated as described in Sect. 3.3. Data from the remaining sensors are reported based on factory calibrations only. Particularly, the pCO 2 sensor did not work properly (it recorded a constant value of −0.00088).
5
Down and up-casts profiles are reported in this dataset, but it has to be mentioned that during downcast the CTD sensors measure the water column with minimal interference from the underwater package.
Water Samples
A Carousel Sea-Bird model SBE32 rosette package with twelve 12 L Niskin bottles was employed during the cruise. At pre-defined depths, water samples were collected for the determination of salinity, dissolved oxygen (DO), nutrients, pH and CTD salinity values. The water samples most likely experienced evaporation during this period and therefore the salinity measurements were overestimated. Bottle salinity data were considered questionable, and because the CTD conductivity sensor was factory calibrated in 2013, the bottle salinity data were not used to calibrate it. Salinity values were calculated and reported in Practical Salinity Units (UNESCO IWG, 1981). 20 DO water samples were drawn in DBO glass bottles with frosted neck to avoid evaporation. DO concentrations were determined with a modified Winkler method (Carpenter, 1965) using an automatic Metrohm volumetric titrator.
Nutrients water samples were drawn in 250 ml plastic bottles and frozen immediately at -20°C without previous filtration, until they were analyzed at the Laboratorio de Oceanografía Química y Contaminación de Aguas (LOQyCA) of the Centro Nacional Patagónico (CENPAT). Analysis of four macro-nutrients (nitrate, nitrite, silicate and phosphate) concentrations were 25 determined using colorimetric techniques on Skalar San Plus autoanalyser (Skalar Analytical® V.B, 2005a, b, c), according to the methods described in Strickland and Parsons (1972).
Chl concentrations were determined fluorometrically by filtering 500 to 1500 ml seawater samples through 25 mm Whatman GF/F (0.7 µm porosity) glass fiber filters. Chl and phaeopigments were extracted on board in 90 % acetone for a period of 12 to 24 h in the dark. Fluorescence measurements were performed on board in the dark with a Turner Designs model 10-AU pH of water samples were measured on board using a YSI 556 MSI multiparameter probe.
Nutrients, DO, chl and pH data from the water samples collected in MARES leg 2 cruise were compared with historical data available in the Argentine Oceanographic Data Center (CEADO, http://www.hidro.gov.ar/ceado/ceado.asp) for austral summer (Jan-Feb-Mar) in the region of interest. CEADO archives data originated by national and international research institutions.
Water samples observations were consistent with historical observations.
CTD ancillary sensors calibration
The SBE43 oxygen sensor has a very stable electronic system, therefore, any loss of its accuracy with time is primarily attributed to fouling of the membrane, either biological or waterborne contaminants (e.g., oil). The manufacturer provides 5 an algorithm to adjust the drift by using a quality reference sample such as Winkler titrated and the SBE43 measured DO concentrations at the times the water samples were collected (Sea-Bird Electronics Inc, 2012). DO differences greater than one standard deviation were discarded and not used in the DO sensor calibration. The standard deviation of the residuals after calibration was approximately 0.08 ml l −1 (N = 48, DO(µmol/kg) = 44660 · DO(ml/l)/(σ theta + 1000)). Figure 2 presents the residual (CTD DO -Winkler DO) before and after calibration. 10 The MBARI-ISUS sensor is designed mainly to determine nitrate concentrations in the 200-400 nm range of the ultravioletspectrum, as the main nutrients required for growth of phytoplankton (Satlantic LP, 2012). A post-cruise inspection of the vertical distribution of nitrate together with the nutrients water samples were carried out in the frontal zone and in the FS. The analysis suggested that the nutrient sensor did not work properly, showing negative values of nitrates in some profiles. Thus, the data of this sensor are reported based on its factory calibration.
15
The fluorescence sensor calibration was performed using a linear adjustment with Chl concentration from water samples to obtain the calibration coefficients. In the same way, pH water samples were fitted to pH sensor data. The squared correlation coefficient and the standard deviations post-calibration were R 2 = 0.891 (N = 37), 0.233 mg m −3 and R 2 = 0.908 (N = 54), 0.049, respectively.
4 Scanfish II data calibration 20 In April 2015, the SBE49FastCAT sensor was sent to SBE factory for the post-cruise calibration. Thus, the pre-and post-cruise calibration was used to generate a slope (in conductivity) and an offset (in temperature) corrections for these data, following Sea-Bird Electronics Inc (2010). The laboratory calibration showed drift corrections of -0.00010 psu month −1 and -0.00017°C year −1 ; corresponding to a conductivity slope value of 1.00010 and a temperature offset value of 0.00052°C, respectively.
Fluorescence and turbidity data from the ECO FLNTU sensor are reported based on factory calibrations only. 25 5 Scanfish II and CTD data processing Scanfish and CTD data were post-processed using SBE Data Processing software routines (v. 7.23.2, Seasoft-Win32, http: //www.seabird.com/software/software). The processing sequence for the SBE911plus did not follow the typical sequences suggested by the manufacturer because the 'scans to average' parameter was set to 24 in the configuration file used on board and so the raw vertical profiles were stored at 1 Hz averages. SBE technical support suggested skipping any filter steps, since the data 30 was already averaged (see SBE manual for details, http://www.seabird.com/sites/default/files/documents/SBEDataProcessing_ 7.26.4.pdf). The Deck Unit was programmed to advance conductivity 0.073 s relative to pressure. Because oxygen data is also systematically delayed with respect to pressure, several tests were carried out to determine the best aligment, which was set to +3 s. Each CTD profile was then inspected and density inversions were removed and filled in by a linear interpolation of the original temperature and conductivity data. Finally, all derived parameters were recalculated at the interpolated pressure.
5
The processing sequence for the SBE49FastCAT was based on the manufacturer suggestions. Raw temperature and conductivity data are often misaligned with respect to pressure in areas with strong vertical stratification due to vertical temperature gradient, causing spikes in derived variables, mainly in salinity. This misalignment, which depends on temperature, conductivity and pressure, was partially corrected with the SBE Data Processing software align routine by advancing temperature +0.063 s relative to pressure. Then, a low-pass filter was applied to pressure, conductivity and temperature to smooth high frequency 10 data. The main issue was the modeling of the thermal inertia effects within the conductivity cell (Lueck, 1990;Lueck and Picklo, 1990), which induces changes in the derived salinity values. These thermal effects are contemplated in the algorithmic cell-thermal mass of the SBE Data Processing software, through the coefficients α (initial magnitude of the fluid thermal anomaly) and β −1 (relaxation time of the fluid thermal anomaly). The salinity signal after applying both the standard SBE coefficients (α = 0.03 and β −1 = 7) and those proposed by Lueck and Picklo (1990) (α = 0.028 and β −1 = 9) was very similar.
15
The highest difference between the salinity profiles before and after applying the cell-thermal mass algorithm (0.03 psu) was found at the depth of the pycnocline (z∼ 38 m). Modeling the anomaly was particularly challenging for the sawtooth profiles considering the results from Lueck and Picklo (1990), who found that the anomaly persists 45 s after crossing the thermocline.
Following the manufacturer suggestions, it was decided not to apply the cell-thermal mass algorithm to the conductivity signal at all. Finally, the derived variables were calculated. Remaining salinity spikes could be related to shed wakes from the CTD 20 package that mix up the surrounding water. Different experiments with the median filter routine on derived variables data were performed to minimize spiking. A window size of 3 · 16Hz = 48 scans was used and the filter was applied consecutively three times for conductivity, temperature, salinity and density data.
Underway data
A linear least squares fit was made for each variable between the CTD profiles data extracted at the 2 dbar level during down-25 and up-cast and the underway data to determinate an offset and a slope for the underway data corrections. This calibration was conducted using only CTD and underway data collected simultaneously in time and space (35 data points in total). Figure 3 shows the pre-and post-calibrated differences between CTD and underway data for each variable.
Commonly, spurious data can be recorded by pump malfunction that alters the flow of water through the internal conduits or by chemical or biological depositions in the measurement cells or filters. An inspection of the data was held and questionable 30 data were rejected. Finally, to smooth the noise in the underway calibrated data caused by flow rate disturbances, temperature and conductivity were filtered by applying the median filter routine from the SBE Data Processing software, using a window size of 512 scans. This filter was carried out twice consecutively before the derive routine was applied.
7 ADCP data
The hull-mounted ADCP data was collected with the TRDI VMDAS software version 1.3. At the beginning of the cruise the vessel position was provided by a GPS Furuno GP-31, but no heading signal input was set. The problem was discovered on Feb. 6 at 0 h UTC after the late spring survey across the STF. Therefore, only along-track velocities can be used from this data.
The ancillary navigation input was then modified and GPS directional Applanix POS MV data was then set as input for the 5 heading and attitude data.
The transducer depth and the blanking interval were 3.93 m and 4 m, respectively. The ADCP data were set to profile with a vertical bin size of 4 m (50 bins total) with single-ping bottom track enabled to a maximum depth of 500 m, and a ping interval of 2 s between ensembles. The data were then averaged to 3 min temporal resolution (approximately 0.6 km in average).
Lessons learned
Prepare a detailed list including all measurements to be made, along with methodology, sampling and analytical instrumentation (Pollard et al., 2011). Collection of ocean data of the highest possible quality requires careful Quality Control and Quality Assurance procedures for the underwater unit and all sensor components (https://www.oceanbestpractices.net/handle/11329/ 15 336). These control tasks should be carried out and if possible formally reported before the cruise starts.
The overall quality of CTD data collection depends on a number of factors, such as: sensor calibration; equipment performance; software configuration and bugs during acquisition and processing; hardware problem; etc.
The selection of instruments and sensors used for data acquisition was carried out before the cruise in the Québec -Ocean Laboratory (UQAR/ISMER, Canada). They checked sensors calibration and equipment performance to acquire the best pos-20 sible hydrographic data. However, some system presented problems during the cruise as were documented in this paper. The control tasks will be more carefully checked in the next cruises.
Data availability
The final dataset concatenates the different collections during the cruise, which are quasi-continuous CTD vertical profiles, the underway surface temperature, salinity and fluorescence data, the Scanfish II CTD and FLNTU data, and the shipboard ADCP 25 data. CTD vertical profiles and underway data are reported in standard Sea-Bird Data File (cnv) format. Converted files consist of a descriptive header followed by the calibrated-processed data in columns. Scanfish II CTD and FLNTU data are reported in comma-separated values (csv) format. One file for section and for sensor is provided for each survey across the STF (e.g., file named T1_lst correspond to CTD data for transect 1 in late spring tide and T1_FLNTU_lst correspond to FLNTU data for transect 1 in late spring tide). Shipboard ADCP data also are reported in comma-separated values (csv) format, each file | 4,691.4 | 2018-07-06T00:00:00.000 | [
"Environmental Science",
"Geology"
] |
Applying Spinal Cord Organoids as a quantitative approach to study the mammalian Hedgehog pathway
The Hedgehog (HH) pathway is crucial for embryonic development, and adult homeostasis. Its dysregulation is implicated in multiple diseases. Existing cellular models used to study HH signal regulation in mammals do not fully recapitulate the complexity of the pathway. Here we show that Spinal Cord Organoids (SCOs) can be applied to quantitively study the activity of the HH pathway. During SCO formation, the specification of different categories of neural progenitors (NPC) depends on the intensity of the HH signal, mirroring the process that occurs during neural tube development. By assessing the number of NPCs within these distinct subgroups, we are able to categorize and quantify the activation level of the HH pathway. We validate this system by measuring the effects of mutating the HH receptor PTCH1 and the impact of HH agonists and antagonists on NPC specification. SCOs represent an accessible and reliable in-vitro tool to quantify HH signaling and investigate the contribution of genetic and chemical cues in the HH pathway regulation.
Introduction
Hedgehog (HH) signaling is of great importance in embryonic development controlling morphogenesis, organogenesis, and the organization of the central nervous system [1].Deregulation of the pathway is associated with congenital defects in the nervous, cardiovascular, or musculoskeletal systems [1].Conversely, increased HH signaling contributes to cancer development as observed in basal cell carcinoma of the skin or medulloblastoma in the brain [2].
The HH pathway involves multiple layers of repressive interactions.The main transducer of HH signals is Smoothened (SMO), which belongs to the family of G-protein coupled receptors (GPCR).When the pathway is inactive, SMO is repressed by the HH receptor Patched1 (PTCH1) and restricted to intracellular compartments.Binding of one of the three mammalian Hedgehog ligands (Desert, Indian, or Sonic HH) to PTCH1 initiates the cascade by releasing SMO.SMO translocates to the plasma membrane, where it further progresses into the primary cilium.Here, SMO facilitates post-transcriptional modifications of the glioma-associated oncogene homolog (GLI) transcription factors which promote the transcription of HH target genes [3].In the absence of HH signals, GLI proteins are subjected to degradation to their repressor forms (GliR).SMO activation prevents GLI processing at the primary cilia and allows the accumulation of its active form (GliA) [3].
Genetics has played a central role in the discovery and mechanistic understanding of HH signaling [4].Many components of the HH cascade have been identified by genetic screens in Drosophila.Several of these factors have been studied subsequently in mammals suggesting the conservation of crucial players throughout evolution.However, divergence between Drosophila and vertebrates has been observed and requires systems for studying specific aspects of mammalian HH signaling [4].In particular, primary cilia are crucial to vertebrate HH signaling but dispensable in Drosophila [4].Additionally, HH signaling patterns have different anatomical features in invertebrates or vertebrates [4].For example, the wing disc and neural tube are unique to Drosophila and vertebrates, respectively.
Studying the neural tube provides a valuable opportunity to investigate the effects and mechanisms of HH signaling during vertebrate development [5].During neural tube development, a Sonic Hedgehog (SHH) gradient initiated at the notochord and later also promoted by the floor plate, defines the ventral domains of the neural tube.The intensity and duration of HH pathway activation leads to the expression of a different set of transcription factors, which can be considered as markers along the dorsal-ventral axis.SHH triggers a gradient of GLI activity in the neural tube that contributes to establishing the "GLI code" [6,7].This occurs by reducing GLI inhibitory effects and enhancing its function as an activator in a gradual manner.Pax7, Pax6, and Irx3 expression occurs in the dorsal compartment and is antagonized by SHH signaling.In the ventral compartment, expression of Nkx6.1, Olig2, Nkx2.2, and Foxa2 requires SHH signaling.While the absence of GliR is sufficient for Nkx6.1 and Olig2 transcription, the expression of the most ventral genes Nkx2.2 and Foxa2 requires additional GliA functions [6].The classification and quantification of marker gene expression can be employed as a tool to infer HH pathway activity and effectively identify new genetic modulators of the HH pathway.
Analysis of markers in embryonic neural tubes is challenging.Firstly, for each investigated modulator, mutations would need to be established in mice.Preparing embryos and dissecting neural tubes requires effort, time, and funds.Secondly, genes that are crucial in early development, or neural tube specification cannot be analyzed if embryos do not survive until E9.5.This has led to the consideration of cell culture models for studying HH signaling.Current cell-based assays, including mouse embryonic fibroblast (MEFs, 3T3s), are widely used to mechanistically dissect the HH pathway by analyzing GLI-mediated transcriptional effects and ciliary localization of SMO.However, these systems are limited in understanding HH function in tissue patterning.Spinal Cord Organoids (SCO) are an in vitro system that recapitulate to a large extent the regulation of neural tube patterning and hence capture relevant aspects of HH signaling in a physiological context [8].Importantly, the repressive function of GliR can be assessed.SCOs can be obtained from embryonic stem cells (ESCs) within six days.Although SCOs do not show a regional pattering, the specification of neural progenitors with dorsal and ventral identities is observed, similar to the neural tube.
In our recent work [9], we apply SCOs obtained using a method previously established by Wichterle and colleagues [8] to quantify HH signaling.We used this approach to define the role of the COP-1 receptor TMED2 in the HH signaling cascade.Here, we provide a detailed description of the procedure, including several optimizations, to generate homogenous SCOs and to monitor HH pathway activity by assessing specific marker gene expression for different positions along the dorsal-ventral axis.
Expected results
Starting from 129-mouse ESCs, this protocol first generates neural embryonic bodies (nEBs) and later Spinal Cord Organoids (SCOs) (Fig 1A).ESCs are plated in AggreWell plates to form uniform nEBs, which are then transferred to 10cm dishes for SCO maturation (Fig 1B).Supplemented retinoic acid (RA) promotes neuralization and a caudal identity of neural progenitors (NPCs), and the subsequent addition of SHH triggers the maturation of ventral versus dorsal phenotypes of NPCs (Fig 1E).The extent of this ventral signal can be later used to measure the effect of gene mutations and compounds on the HH signaling cascade.
We characterized HH signaling in SCOs on two different levels; first, as the gene expression profile on the level of expressed mRNA, and second, as protein expression, which we detect by immunofluorescence (IF).Within our qPCR analysis, we evaluated the expression of transcription factors that are involved in the differentiation of the neuroectoderm and the specification of neural tube progenitors.Sox1, which is also expressed in the neural tube of E9.5 embryos, highlights the stemness and neural identity of the obtained SCOs To investigate the use of SCOs for characterizing HH pathway modulators, we generated SCOs from mouse Ptch1 KO ESCs and analyzed their patterning along the dorsal-ventral axis.Ptch1 KO mice die in utero at around E9.5 displaying severe deformations in cephalic regions, including defects in the neural tube closing [11].Sections of these tubes display a strong ventral identity where NKX2.2 expression is also detected in dorsal domains.Here, SCOs derived from Ptch1 KO ESCs show a high percentage of OLIG2 (64%) or NKX2.2 (18%) expressing NPCs (Fig 2H).Notably, HH hyperactivation induced by Ptch1 depletion is abrogated by the treatment with Vismodegib (Fig 2I), an FDA-approved SMO antagonist that is used to treat HH-dependent tumors [12].As expected, the chemical blockage of the HH cascade prevents SCO ventralization (Fig 2I ).
In conclusion, we show a uniform differentiation of NPCs within SCOs by qPCR and IF.Marker expression and patterning of these NPCs are dependent on HH stimuli or genetic alterations and resemble the in vivo neural tube specification.
Discussion
Spinal cord organoids (SCOs) have already been applied to study neural tube development [13], the differentiation of neural progenitors along the rostro-caudal axis [14] or to study latestage neural specification [15].In this paper, we focus on how SCOs can be applied to investigate the impact of genetic and chemical interventions on the HH signaling cascade.Applying an optimized protocol, based on the original report by Wichterle et al. [8], we show that SCOs recapitulate responses to genetic and chemical manipulations in the HH signaling in a quantifiable manner, thus making a comparison between different mutations and calibration possible.The in vitro assay involves the specification of NPCs along the dorsal-ventral axis and can be utilized to measure the effects of gene candidates on the HH pathway in a developmentalrelevant context.
The size of nEBs and SCOs is of great importance for the expression of neural stemness markers such as Sox1 and for establishing pattering in SCOs.To improve the sample preparation process, we use AggreWell plates which ensure the formation of nEBs of uniform size within the first three days of the experiment.This facilitates a high level of homogeneity between samples and biological replicates.These nEBs respond equally to RA treatment and give rise to SCOs with a consistent SOX1 expression.This improvement enables a focused analysis of HH regulation, reducing variability in neuroectoderm specification.
To characterize SCOs, we use a large panel of marker genes in an RT-qPCR approach to obtain a rapid and comprehensive overview of the expression pattern along the dorsal to the ventral axis of neural progenitor markers within our SCOs.An immunofluorescence-based characterization is then performed to evaluate protein levels of selected markers for different positions along the dorsal-ventral axis.The quantification of the NPC fractions expressing dorsal or ventral markers allows to categorize and quantify the HH response.A fraction of progenitors expresses the dorsal markers PAX7 or PAX6 and represents a level of HH signaling activity that would correspond to a first threshold of HH signaling activation, where GliR represses HH target gene expression.A second level threshold of increased HH signaling is observed through the expression of the first HH-dependent genes, such as Olig2.Strong and SCOs treated with and without SHH.Fold induction is relative to the untreated sample (mean ± SD).Each dot represents a biological replicate.D) Translocation of SMO to the PC in SCOs upon SHH stimulation.ARL13B is shown as a PC marker.SCOs were treated for 24 hours with and without SHH.Nuclei are labeled with Dapi.Scale bar 2,5 μm.E) Model mouse E9 neural tube with the expected marker pattering along the dorsal-ventral axis.F) Marker expression in SCOs at day 6 upon HH stimulation.Analysis by RT-qPCR of neuroectoderm (Sox1) markers with a dorsal (Pax6 and Pax7) or ventral (Olig2, Nkx2.2, and Foxa2) identity.Fold induction is relative to the ESCs sample (mean +/-SD).Eif4a2 expression was used for normalization.Each dot represents a biological replicate.https://doi.org/10.1371/journal.pone.0301670.g001prolonged pathway activation then leads to the expression of the most ventral genes, such as Nkx2. 2 and Foxa1 where the level of HH signal strength is above a third threshold.The effects of perturbations in HH signaling can be classified accordingly to these thresholds and compared between experiments.
Our SCOs show a similar marker expression response to SHH stimuli as the neural tube.SCOs initially have a dorsal identity but are responsive to ventralizing stimuli like SHH, which induces the expression of ventral markers and promotes the repression of dorsal ones (Fig 2B -2D).Mutations that constitutively activate the HH pathway, such as a Ptch1 depletion (Fig 2G ), promote a strong ventralization of SCOs, with more than 75% of NPCs expressing ventral markers.These results mirror defects detected in the neural tubes of Ptch1 KO embryos [16].Further treatment with the SMO antagonist Vismodegib abrogates the HH hyperactivation in Ptch1 KO SCOs, showing that the SCO culture yields physiologically consistent and measurable responses.
Since ESCs can be easily modified by gene editing, SCOs with specific mutations can be generated to study the impact on the HH pathway.We have used SCOs in an earlier study that screened haploid ESCs by insertional mutagenesis [17] and discovered new modulators of the HH cascade [9].Among them, we characterized the role of the COP-I receptor Tmed2 as a negative modulator of the HH pathway [9].The Tmed2 mutation results in embryonic lethality, and it is impossible to study its effects in mouse models [9].Thereby, Tmed2 mutant SCOs enabled us to quantify the effect on HH signaling in neural patterning, revealing a ventralizing effect.In this case, two main advantages of this technique can be appreciated.First, SCOs can be used to study genes that cause major developmental defects that would make quantification impossible in vivo.Second, fast gene editing in ESCs allows a rapid analysis without the need for animal experiments.
An alternative to the here described SCOs is a 6-day protocol in 2D obtaining spinal cord precursors [18].This technique has been used to characterize the role of novel candidates in regulating the HH pathway [19].In this approach, the maturation of NPCs and their specification is highly dependent on the confluency within the samples, making the analysis and quantification of markers in our hands less robust and comparisons difficult.SCOs are largely unaffected by this issue as a homogenous confluency and marker expression throughout the whole sample and between replicates are observed.Therefore, SCOs are superior for quantitative analyses.
Fibroblast cells have been extensively used to investigate HH signaling.Their handling and the absence of requirements in stem cell knowledge make them optimal for a first insight into the role of a gene within the HH cascade.However, this system has limited use in studying the relevance of a gene in more physiological conditions.For example, while we observed a substantial upregulation of HH target genes in SCOs upon Tmed2 depletion, we only detected a marginal increase of these genes in NIH-3T3 cells [9].Combining several in vitro culture systems is preferred to substantiate mechanistic insights for new HH signaling modulators.Overall, we suggest SCOs as a quantitative and sensitive method to study regulators of the HH SD) (n = 10).P value (**) < 0.01 (Welsch T-test).D) Quantification of NPCs co-expressing OLIG2 and NKX2.2 markers.Each point corresponds to the number of positive cells per SCO (mean ± SD) (n = 10).E-F) Quantification of NPCs expressing OLIG2 (E) and NKX2.2 (F) markers in SCOs treated with a range of SHH (10 ng/ml to 2 μg/ml) for 24 hours.Each point represents the percentage of positive cells in an independent SCO, relative to the total number of cells (mean ± SD) (n = 10).G) Representative pictures of SCOs quantified in E-F.Scale bar 50 μm.H) SCOs with a Ptch1 mutation express ventral markers.Expression of the ventral markers OLIG2 and NKX2.2 in SCOs derived from WT and Ptch1 KO ESCs.Nuclei are labeled with Dapi.Scale bar 50 μm.I) SMO inhibition by chemicals prevents HH hyperactivation in Ptch1-depleted cells.Analysis by RT-qPCR of Gli1, the dorsal marker Pax7 and the ventral marker Olig2 in WT and Ptch1 KO SCOs treated for 48 hours with SHH or Vismodegib (1 μM) for 48 hours as indicated.Fold induction is relative to the WT sample (mean ± SD).Eif4a2 expression was used for normalization.Each dot represents a biological replicate.https://doi.org/10.1371/journal.pone.0301670.g002pathway in a relevant mammalian developmental system.Quantitation of ventral and dorsal markers recapitulates the spectrum of cell types during neural tube specification and offers a reliable and robust method to quantify the HH signaling.
[10].Compared to ESCs, SCOs show a strong upregulation of Sox1 mRNA (Fig 1F), indicating a successful differentiation into NPCs.We next evaluated the expression of the canonical HH target genes Gli1 and Ptch1.Upon SHH stimuli, we observed a significant boost in Gli1 and Ptch1 expression (Fig 1C), indicating heightened GLI transcriptional activity.Activation of canonical HH signaling is also confirmed by the translocation of SMO to the primary cilium (PC) upon SHH stimulation (Fig1D).The third group of genes we analyzed includes Pax7 and Pax6 and characterizes the dorsal fraction of the neural tube where Pax7 is only expressed in the most dorsal regions[5].Compared to ESCs, the SCOs display dorsal characteristics marked by high expression of both Pax7 and Pax6 (Fig1F).As expected, upon HH stimuli, we observed a decrease of these markers concurring with the first steps of the HH pathway activation.Lastly, we measured the expression of the ventral markers Olig2, Nkx2.2, and Foxa2, where Foxa2 defines the floorplate, the most ventral part of the neural tube[5].These three markers are exclusively expressed in SHH-treated SCOs and thus confirm the ability of SCOs to recapitulate the full spectrum of progenitor fates during the dorsal-ventral patterning of the neural tube (Fig1F).In the next set of experiments, we quantified the HH response by analyzing the fraction of neural progenitors with either dorsal or ventral identity by staining through IF SCO sections for SOX1, PAX6, OLIG2, and NKX2.2 markers.As shown in Fig 2A, SCOs show a percentage of SOX1+ cells greater than 90%, essentially yielding a nearly homogenous population of neural progenitors.These NPCs show a mainly dorsal identity, indicated by the high proportion of PAX6+ cells.After SHH treatment, NPCs with the ventral markers OLIG2 and NKX2.2 are enriched (Fig 2B and 2C).The observation of a larger fraction of NPCs expressing OLIG2 than NKX2.2 recapitulates the in vivo situation where activation of NKX2.2 depends on higher SHH levels compared to OLIG2.Notably, the fraction of OLIG2+/NKX2.2+NPCs is extremely low (Fig 2D), as the expression of NKX2.2 inhibits Olig2 transcription.By treating SCOs with different concentrations of SHH, we were able to analyze varying grades of HH signaling intensity (Fig 2E-2G).The maximum fraction of NKX2.2 positive cells was achieved with the highest SHH concentration, while the top OLIG2+ NPC frequency was obtained at lower SHH concentrations.
Fig 1 .
Fig 1. Derivation of SCOs and analysis of their transcriptional profile.A) Schematic overview of SCO differentiation process and expected neuronal marker expression during the differentiation.B) Brightfield microscopy of ESCs, nEBs and SCOs during D1, D3 and D6, respectively of differentiation.D1 ESCs grow as colonies on gelatine in 2i, Lif conditions, scale bar 100 μm; D1 and D3 nEBs grow in AggreWell Plate, first as single cells and then as nEBs, scale bar 200 μm; D6 SCOs grow in suspension, right frame shows zoom, scale bar in both frames 200 μm C) SHH treatment induces expression of HH targets in SCOs.Analysis by RT-qPCR of Gli1 and Ptch1 mRNA expression in
Fig 2 .
Fig 2. Characterization of SCOs by immunostaining.A) Expression of PAX6 (left) and SOX1 (right) markers in unstimulated SCO at day 6 of differentiation.The same SCO is shown for both staining.In the bottom panels, the overlay between PAX6 and SOX1 with nuclei (labeled with Dapi) is shown.Scale bar 50μm.B) SCOs express ventral markers after SHH stimulation.Expression of the ventral markers OLIG2 and NKX2.2 in SCOs treated with and without SHH.Nuclei are labeled with Dapi.Scale bar 50 μm.C) Quantification of NPCs expressing SOX1, OLIG2 and NKX2.2 markers in SCOs treated with and without SHH.Each point represents the percentage of positive cells in an independent SCO, relative to the total number of cells (mean +/- | 4,309.2 | 2024-06-25T00:00:00.000 | [
"Biology"
] |
Interplay of vacuolar transporters for coupling primary and secondary active transport
Secondary active transporters are driven by the proton motif force which is generated by primary active transporters such as the vacuolar proton pumps V-ATPase and V-PPase. The vacuole occupies up to 90 % of the mature cell and acidification of the vacuolar lumen is a challenging and energy-consuming task for the plant cell. Therefore, a direct coupling of primary and secondary active transporters is expected to enhance transport efficiency and to reduce energy consumption by transport processes across the tonoplast. This has been addressed by analyzing physical and functional interactions between the V-ATPase and a selection of vacuolar transporters including the primary active proton pump AVP1, the calcium ion/proton exchanger CAX1, the potassium ion/proton symporter KUP5, the sodium ion/proton exchanger NHX1, and the anion/proton exchanger CLC-c. Physical interaction was demonstrated in vivo for the V-ATPase and the secondary active transporters CAX1 and CLC-c, which are responsible for calciumand anionaccumulation in the vacuole, respectively. Measurements of V-ATPase activity and vacuolar pH revealed a functional interaction of V-ATPase and CAX1, CLC-c that is likely caused by the observed physical interaction. The complex of the V-ATPase further interacts with the nitrate reductase 2, and as a result, nitrate assimilation is directly linked to the energization of vacuolar nitrate accumulation by secondary active anion/proton exchangers.
Introduction
With up to 90% of the cellular volume the plant vacuolar system represents the largest compartment group within plant cells and serves as storage compartment, disposal for waste and toxins, lytic compartment, Ca 2+ -store, and represents an important factor for plant growth and apoptosis [1,2].These multiple functions require a versatile transport system at the tonoplast.In particular, ion influx and efflux have to be coordinated to ensure proper functionality of the cell.For instance, the cytosolic concentration of calcium ions requires a tight regulation to allow for signalinduced calcium ion release and to maintain a low level of calcium ions in resting non-stimulated cells [3].Primary active and outward directed Ca 2+ -pumps are crucial for the resting state, secondary active transporter for the termination of Ca 2+ -signaling [4].Another example is the transient nitrate storage to prevent excessive production of cytotoxic nitrite as a result of the nitrate assimilation.Due to the nitrate accumulation in the vacuole, the transport is directed against the concentration gradient so that active transport is required to enable the nitrate deposition in the vacuole [5].Secondary active transporters such as the calcium ion/proton exchanger CAX1 and anion/proton exchangers of the CLC-family mediate calcium and nitrate transport out of the cytosol into the vacuole [4,6].Therefore, the formation of a proton motif force (pmf) is essential to energize the secondary active transporters.This pmf is created by the vacuolar-type proton translocating ATPase (V-ATPase) and the vacuolar proton-translocating pyrophospatase (V-PPase) [7].While the dimeric V-PPase uses a byproduct of the cellular metabolism as energy source, the multimeric enzyme complex V-ATPase consumes comparatively valuable ATP [8].As a consequence, the V-ATPase activity is linked to ATP-and glucose availability [9].Besides the co-existence of V-PPase and V-ATPase in mature vacuoles, it is widely accepted that their relationship is altered during plant development.The V-PPase pumps protons into vacuoles during embryo and seedling development, the V-ATPase takes over with increasing age of the plant cell and becomes the dominant proton pump for vegetative growth [10,11].This arrangement pays attention to the availability of ATP and PP i during development and might be also a consequence of the complex structure of the V-ATPase: The V-ATPase is a multimeric complex of more than 800 kDa, which can be divided into a membrane integral sector V O (subunits VHA-a, VHA-c, VHA-c", VHA-d, VHA-e) and a membrane associated sector V 1 (subunits VHA-A-VHA-H) [2].The V O -sector is dominated by a rotating ring of approximately six copies of the proteolipids VHA-c/VHA-c".The C-terminal half of VHA-a resides next to the proteolipid ring and provides half channels so that cytosolic protons have access to their binding sites at VHA-c and can be released into the vacuolar lumen [12].The rotation of the proteolipids is driven by sequential ATP-hydrolysis within the three copies of VHA-A, which form the hexameric head together with three copies of VHA-B.VHA-D and VHA-F form the central stalk and transduce the sequential conformational alterations of VHA-A into rotation [12].The peripheral subunits VHA-C, VHA-E, VHA-G and VHA-H form a complex stator structure which prevents corotation of the head and anchors the V 1 -sector to the membrane via the N-terminal domain of VHA-a [2].Recent work revealed a growing number of additional proteins interacting with the V-ATPase and likely regulating its activity.For instance, a glycolytic aldolase interacts with the V-ATPase in a gibberellin-dependent manner in rice [13,14], following blue light perception a regulatory 14-3-3 protein binds to the phosphorylated VHA-A subunits and activates the V-ATPase, and thus, links the V-ATPase activity to day light [15].Kinases such as the WNK8 in Arabidopsis thaliana and CDPK1 in barley phosphorylate individual VHA-subunits such as VHA-a, VHA-C, and VHA-A [15][16][17][18].Additional evidence for an extensive V-ATPase protein interaction network derives from yeast, where the V-ATPase interacts with the cytoskeleton, the RAVE-complex, proteins of the VTC-family, and shows cooperated activity with the cation transporter Vnx1p and Ca 2+ -releasing channels [19][20][21][22][23][24][25].Functional coupling of the secondary active transporter CAX1, NHX1 and the V-ATPase has been observed in plants, too, but a physical interaction was only proven for the canonical vacuolar proton pumps V-PPase and V-ATPase in Kalanchoë blossfeldiana [7].
The V-ATPase subunits VHA-c"1 (23), VHA-d1 (2), VHA-d2 (1), VHA-e1 (54), VHA-B3 (1), VHA-C (7), VHA-E1 (3), VHA-G2 (1), VHA-G3 (3), VHA-H (1) were further included in a large scale screening for interaction partners and regulators of membrane integral proteins using the mating-based split-Ubiquitin system (mbSUS), a yeast-2-hybrid system that has been optimized for the analysis of membrane proteins.The numbers in brackets provide the number of identified interaction partners [26].A previous attempt by the Arabidopsis Interactome Mapping Consortium applied the conventional yeast-2-hybrid system and included the V-ATPase subunits VHA-A (3), VHA-B1 (4), VHA-B2 (1), VHA-B3 (1), VHA-C (5), VHA-D (2), VHA-E1 (1), and VHA-G1 (1), the obtained data on membrane integral proteins of the V O -sector is omitted here due to the nature of the yeast-2-hybrid system [27]: The common yeast-2-hybrid system relies on the nuclear import of the interaction partners to initiate the expression of reporter genes.The published data suggests that the identity of many proteins involved in the active state of the V-ATPase and its regulation is still elusive.However, the large-scale analyses were performed in yeast and obtained interactions might not be valid in plants, either due to a differential expression or due to the distinct cellular environment in yeast.Therefore, we aimed at the investigation of functional and physical interactions in this work and focused mainly on data obtained in plants.
Plant material and growth conditions
A. thaliana (Columbia) was grown on soil in a growth chamber with 12 h light (240 µmol quanta m -2 s -1 , 19 °C) and 12 h dark (18 °C) with 60% relative humidity.The hypocotyl-derived cell suspension culture At7 [28] was applied for localization studies and physiological analysis.The cells were cultivated at 26 °C upon shaking in the dark.3 g of cells were transferred weekly to 45 ml MSmedia: 0.43% (w/v) Murashige and Skoog basal salt mixture (Sigma-Aldrich M5524), 0.1% Gamborg B5 Vitamin stock (Sigma-Aldrich), 3% sucrose, 0.1 ‰ 2,4-dichlorophenoxyacetatic acid, adjusted to pH 5.7.The MS-media contained 18.8 mM KNO 3 and 20.61 mM under control conditions.For nitrate-dependent experiments the media was prepared with the indicated concentrations of KNO 3 and without NH 4 NO 3 as nitrogen source.
Crosslinking and co-immunoprecipitation
Formaldehyde-crosslinking was performed in At7 protoplasts.Cells were incubated for 20 min in 1% formaldehyde with repeated inverting at room temperature in the dark.Approximately 300 µl of cell suspension were mixed with 300 µl of 2% formaldehyde in PBS.Next, the cells were centrifuged for 10 min at 4 °C and 10,000 × g.The Pierce ® co-immunoprecipitation kit (Thermo Scientific, Kit-No.26149) was applied for co-immunoprecipitation according to the manufacturer's protocol: Briefly, an antibody directed against VHA-A and VHA-E was immobilized by a 2.5 h incubation with the resin to ensure proper coupling.The pelleted cells were washed in PBS, resuspended in lyse/wash-buffer and incubated for 5 min under mixing at RT.The sample was centrifuged again, the supernatant transferred to a column with the immobilized antibody/resin mix and incubated overnight while mixing at 4 °C.The proteins were finally eluted and subjected to SDS-PAGE in preparation for mass spectroscopy.
Mass spectroscopy
Protein bands were excised from SDS-gels and transferred to emollient-free reaction tubes and were washed twice with 200 μl 30% acetonitrile in 0.1 M ammonium hydrogen carbonate for 10 min.The gel-slices were dried in a vacuum-concentrator for 30 min.The tryptic digestion was done with 0.1 µg/µl trypsin in 10 mM ammonium bicarbonate for 30 to 45 min at RT. 20 µl 10 mM ammonium bicarbonate were added to each gel slice to prevent drying and the samples were incubated overnight at 37 °C.After complete evaporation, the samples were rehydrated in 15 µl 50% acetonitrile, 0.1% TFA. 1 µl of the peptide sample was finally plotted on the MTP Anchorchip™ 384 TF (Bruker).Briefly, before the peptide sample was dried, 1 µl of the HCCA matrix was added (0.2 mg・ml -1 α-Cyano-4-hydroxycinnamic acid in 90% acetonitrile, 0.1% TFA and 0.1 M NH 4 H 2 PO 4 ).After complete evaporation the mass spectra of the peptide samples were obtained with a MALDI-TOF MS/MS (Ultraflextreme, Bruker) in MS-mode and analysed by the Mascot software (Matrix science, UK).The settings were trypsin as the enzyme, no more than one missed cleavage and a peptide tolerance of 100 ppm.The Mascot-score is defined as -10×Log(P) with P as probability of a false positive match to a database.The threshold further depends on the background noise.For the settings used in this analysis, the probability of a false positive match is smaller than 0.05 for proteins showing a Mascot-score of more than 58.
Isolation of tonoplasts and V-ATPase activity measurements
Tonoplasts were isolated as described before [30].The tonoplast suspension was supplemented with Brij58 (50 mM Tricine, pH 8.0, 20% (v/v) Glycerol, 2% (w/v) Brij58) resulting in a V-ATPase/detergent ratio of 1:10 and incubated for 30 min at 4 °C.The samples were centrifuged at 98,000 × g and 4 °C.The supernatant contained the solubilised V-ATPase.The activity was measured by recording the released phosphate according to Bencini with modifications as described by Dietz et al. (1998) [30,31].Bafilomycin-sensitive ATP hydrolysis corresponds to V-ATPase activity, to this end 2.8 µM Bafilomycin was added to the reaction and related to a parallel reaction without Bafilomycin.
Molecular biology
For genotyping, one leaf was ground with a micro-pistil in a 1.5 ml reaction tube for 10 s, 500 µl Edwards-buffer (200 mM Tris-HCl pH 7.5, 250 mM NaCl, 25 mM EDTA, 0.5% SDS) was given to the samples and grinding was continued.Subsequent vortexing, the samples were centrifuged for 2 min at 10,000 × g and 300 µl of the supernatant were given to 310 µl isopropanol to precipitate the DNA.Precipitation was performed by incubation at RT for 10 minutes and centrifugation at 10,000 × g for 7 min.The pellets were resuspended in 100 µl sterile water.The actual genotyping was performed by polymerase chain reaction (PCR) using the DNA-polymerase from Thermus aquaticus (Taq-Polymerase).A set of three oligo nucleotides was applied for each reaction.The oligo "LBb1" binds to the DNA-insertion, the "LP"-and "RP"-oligo bind the genomic DNA flanking the insertion (for oligo-information see table 1).Homozygous insertion lines are characterized by a PCR-product of either LBb1 and LP or LBb1 and RP, depending on the orientation of the insertion.The wildtype shows a PCR-product of LP and RP, heterozygous lines show both types of PCR-products.Oligo sequences were obtained with the primer design tool from the Salk Institute Genomic Analysis Laboratory (SIGnAL) which suggests sequences for genotyping of SALK-lines (signal.salk.edu/tdnaprimers.2.html).The plasmids 35S-EYFP-C and 35S-ECFP-C [32] were applied for localization studies and Förster resonance energy transfer (FRET)-analyses [32,33,34].Briefly, FRET describes a dipoledipole coupling which results in radiation-less energy transfer between the so-called donor molecule and the acceptor molecule.Prerequisites are a proper orientation of the dipoles, a distance of typically less than 10 nm as well as a high spectral overlap of the donor's emission spectrum and the acceptor's absorption spectrum.A donor-acceptor pair is called a "FRET-pair" and is characterized by its Förster radius R 0 .R 0 is the distance of half maximal energy transfer between both molecules [34].Here, cyan fluorescent proteins served as donor and yellow fluorescent proteins as acceptors.The coding sequences (cds) of the analyzed proteins were obtained from The Arabidopsis Information Resource (TAIR, https://arabidopsis.org) using the Arabidopsis locus identifiers according to the Arabidopsis genome initiative (AGI): AT2G38170 (Cation Exchanger 1, CAX1), AT1G15690 (Arabidopsis Vacuolar Pyrophosphatase 1, AVP1), AT5G49890 (Chloride Channel C, CLC-c), AT4G33530 (K + Uptake Permease 5, KUP5), AT5G27150 (Na + /H + Exchanger 1, NHX1) and AT1G37130 (Nitrate Reductase 2, NR2).The coding sequences were amplified by PCR using oligo nucleotides with flanking NotI/EcoRI (CAX1, AVP1, NR2) or NotI/StuI (CLC-c, KUP5, NHX1) restriction sites (for oligo-information see table 2).Subsequent a reaction with restriction enzymes, the fragments were used for ligation with the plasmid 35S-EYFP-C.Within 35S-VHA-c3-EYFP [31], the EYFP has been replaced by mTurquoise2 for FRET-analyses of the protein pairs CAX1/VHA-c3 and VHA-c3/VHA-c3.
For mbSUS analysis, the cloning was started with Gateway ® entry clones made from attBflanked PCR-products without stop codon (for oligo-information see table 2) and the plasmid pDONR221 following the manufacturer's instructions (Invitrogen).The LR clonase (Invitrogen) was employed to transfer the VHA-A and VHA-E coding sequences to the pX-NubWTgate-vector and the coding sequences of CLC-c and NR2 to the pMetYCgate-vector.
Protoplast transfection
Leaves were harvested from soil grown Arabidopsis thaliana plants at the age of about four weeks.Protoplasts were isolated as described before [38].Protoplast isolation of At7-cells was performed according to Appelhagen et al. (2010) [39].Transfections were performed in uncoated hydrophobic 8-well slides (Ibidi, Martinsried Germany): 20 µl of protoplast suspension were mixed with 5 µl of plasmid DNA and 25 µl of PEG-solution (40% PEG4000, 2.5 mM CaCl 2 , 0.2 M mannitol) and incubated for 10 min in the dark.The assay was diluted stepwise with 50 µl, 100 µl and two times 200 µl of W5-solution (154 mM NaCl, 2.5 mM CaCl 2 , 5 mM glucose, 5 mM MES pH 5.7).In between, the cells were incubated for 10 min.The transfected cells were kept in the dark at 25 °C prior to microscopic analysis.
Quantitative microscopy
The vacuolar pH was measured with a Leica SP2 confocal laser scanning microscope.Cells were loaded with 5 µM 6-carboxy fluorescein diacetate and the fluorescence was detected with photomultiplier 2 in the range of 500 to 530 nm.The dichroic mirror RSP500 and a water-dipping objective with 40-fold magnification were used.The probe was excited sequentially with the 458 nm and 488 nm line of an argon ion laser, the laser power was adjusted to equal emission intensity at pH 7. Calibration and data evaluation were performed as described before [9,33].Confocal localization analyses were performed using a Leica SP2 or Zeiss LSM780 microscope as described before [32,34].ECFP (mTurquoise2), EYFP and the chlorophyll autofluorescence were detected in the range of 470-510 nm, 530-600 nm, 650-700 nm, respectively.Cyan fluorescent proteins (ECFP, mTurquoise2) and EYFP were excited using the 458 nm laser line and the 514 nm lines of an argon ion laser, respectively.Chlorophyll was excited with the 458 nm line, too, and transmitted light images were obtained with the 458 nm line.Main beam splitters were applied that allow for sequential excitation with the 458 nm and 514 nm laser line so that tracks were alternating line by line.Images were recorded with one-to four-fold line-averaging and the intensity resolution was 12 bits / pixel.The settings were identical for FRET-measurements, but the signal amplification was strictly defined and kept constant.
Statistical analysis
Arithmetic mean and standard deviation were calculated for all datasets.For some datasets the standard error was calculated, too, and given in the figures (see legends).Variances of the data were compared by F-test.The non-parametric Mann-Whitney-U-test was applied for the data on vacuolar pH and the effects of KNO 3 and bafilomycin on plant cell suspension culture.The student's t-test was applied for the statistical analysis of the V-ATPase activity.A significant difference was accepted for p 0.05.
In silico-analysis
Predictions of transmembrane domains were obtained from Aramemnon [40].Co-Expression data were collected from the ATTED II-database (http://atted.jp).The ATTED II-database comprises data from Affymetrix microarray-analysis and thus, contains information on transcript abundance [41].To quantify the degree of co-expression for genes that are co-expressed with the query gene, the correlation coefficients R are calculated and the genes are listed with decreasing R so that the coexpressed genes get rank numbers.The same procedure was performed with the thereby identified co-expressed genes as query to obtain the rank number of the initial query gene.The MR-value of a gene pair is given as the square root of the product of both rank numbers.Data on subcellular localization and organ specificity was obtained from MASCP Gator (http://gator.mascproteomics.org/).This database contains data from GFP-localization experiments and mass spectroscopy of samples from A. thaliana.These samples were either tissue-or organelle-specific.[42].Predictions of subcellular localization were collected from the BAR Arabidopsis eFP-browser [43].
Selection of initial candidates
The secondary active transporters CAX1 (AT2G38170) and NHX1 (AT5G27150) as well as the proton pump AVP1 (AT1G15690) were reported to interfere with the V-ATPase activity and therefore suggested to interact with the V-ATPase [6,44,45,46].An ATTED II co-expression analysis [41] for the vacuolar V-ATPase subunit VHA-a3 identified the secondary active transporters KUP5 (AT4G33530), a potassium transporter, and CLC-c (AT5G49890), an anion/proton antiporter (Figure 1).These transporters showed low MR-values for a high number of V-ATPase subunits.MRvalues were classified based on the MR-values between the genes encoding for V-ATPase subunits.Finally, the available published data on CAX1, NHX1, AVP1 and the co-expression analysis resulted in the selection of the transporters CAX1, NHX1, AVP1, KUP5 and CLC-c.Crosslinking followed by immunoprecipitation with an antibody directed against VHA-A was performed to identify transporters that are in a proximity close enough for the formaldehyde linkage.The antibody directed against VHA-A was chosen, because VHA-A is part of the catalytic core complex and was successfully used before by the Lüttge group [7].However, transporters were not identified, but nitrate reductase 2 (AT1G37130) was found with a Mascot-score of 59 that is higher than the background signal.The crosslinking product had an apparent molecular weight (mw) of approximately 170-180 kDa.Subtracting the molecular weight of NR2 (102.8 kDa) leaves 67.2-77.2kDa what matches the mw of VHA-A which served as bait.Next, in silico analyses were performed to estimate the possibility of a physical interaction between nitrate reductase 2 and the V-ATPase.Aramemnon predicted a weak probability (0.12) for two transmembrane domains within NR2.Proteomic data obtained from the MASCP gator-database revealed localizations in cytosolic, vacuolar and plasma membrane fractions and revealed its presence in all tissues, so that an interaction between V-ATPase and NR2 is possible based on this data.
Protein-protein interactions of transporters
Before testing the possibility of a direct interaction between V-ATPase and other transporters the proteins were analyzed pairwise for co-localization in Arabidopsis mesophyll protoplasts.This analysis was performed to check if the analyzed proteins show an overlapping subcellular localization which allows for a physical interactions.Since the FRET-analysis was performed to investigate physical interactions between V-ATPase and other membrane-integral proteins, the colocalization and FRET-studies were performed with the membrane integral V O -subunits VHA-a3 and VHA-c3.The V-ATPase subunit VHA-c3 co-localized with CAX1, NHX1, KUP5 while VHA-a3 co-localized with AVP1 and CLC-c (Figure 2).This data indicates the possibility of an interaction.To test the predicted interaction between the V-ATPase and the selected transporters, FRETmeasurements were performed in Arabidopsis mesophyll protoplasts.FRET offers a higher distance range than formaldehyde-crosslinking or bimolecular fluorescence complementation and thereby allows monitoring close proximity of proteins and thus, proof of protein interaction without direct physical contact of the labeled termini.
The FRET-pairs ECFP/EYFP and mTurquoise2/EYFP were applied for the analysis.The pairs differ in their Förster-radii so that the distances were calculated to allow for the comparison of the data sets.It should be mentioned that these distances do not correspond to real distances since the fluorophores' orientation κ 2 is unknown.However, under the assumption that κ 2 is 0.66 on average, distances of less than 1.5-times R 0 were considered as proof of interaction according to Gadella and co-authors (1999) [47].CAX1/VHA-c3 (5.64 ± 0.36), AVP1/VHA-a3 (6.78 ± 0.51), CLC-c/VHA-a3 (7.31 ± 0.495) showed distances below 1.5-times R 0 while KUP5/VHA-c3 (8.05 ± 1.97) and NHX1/VHA-c3 (7.90 ± 1.37) did not meet that criteria (Figure 3A).The combination VHA-c3/VHA-c3 and the direct fusion of ECFP-EYFP served as positive controls showing average distances of 6.06 ± 0.06 and 4.99 ± 0.35 nm, respectively (Figure 3A).The MR-value bases on the ranking of correlation coefficients R of the analyzed transcripts.Co-expressed genes are listed with decreasing R so that rank numbers can be assigned to the co-expressed genes.Then the co-expressed genes serve as query to obtain the rank number of the initial query gene.The MR-value is given as the square root of the product of both rank numbers.
In addition, the mating-based split-Ubiquitin system (mbSUS) was applied to investigate the interaction between the nitrate transporter CLC-c and the V-ATPase subunits VHA-A and VHA-E1 as well as the interaction between the nitrate reductase 2 and VHA-A/VHA-E1.The mbSUS depends on the bimolecular complementation of Ubiquitin in the cytosol.Once Ubiquitin is folded, Ubiquitindependent proteases cleave off a transcription factor which activates reporter genes [35].The C- termini of VHA-a and VHA-c are exposed to the vacuolar lumen, VHA-c" and VHA-e do not reside at the tonoplast, and the role of VHA-d is still elusive, so that V O -subunits did not qualify for mbSUS.Therefore, VHA-A and VHA-E1 were chosen as representatives of the hexameric head and the peripheral stalk, respectively.This approach was selected for the genes related to nitrate metabolism due to an undetectable expression of fluorescent protein-tagged nitrate reductase 2.
For mbSUS-analysis, the V-ATPase subunits VHA-A and VHA-E1 were fused to the Nterminal fragment of Ubiquitin (Nub), CLC-c and NR2 were fused to the C-terminal half of Ubiquitin (Cub).The yeast was selected for diploid cells carrying both Nub-and Cub-constructs.The selection relied on auxotrophy of the yeast strains.Finally, an x-gal assay of the yeast cells proved protein-protein interactions using the β-galactosidase as reporter gene.To this end, cells were lysed, the substrate X-gal was added and the turnover of X-gal by the β-galactosidase resulted in a blue staining.The assay revealed protein-protein interactions between VHA-E1/CLC-c and VHA-A/NR2, whereas it resulted in a lack of blue staining for VHA-A/CLC-c and VHA-E1/NR2 (Figure 3B). Figure 3. Protein-protein interactions between V-ATPase subunits and vacuolar transporters/nitrate reductase 2. A) FRET-derived distances of fluorescent proteins which were fused to proteins as indicated at the x-axis.Light grey columns show protein pairs with distances below 1.5-times R 0 , dark grey columns those that exceeded 1.5-times R 0 .The mean average ± standard deviation is given, n > 10.B) mbSUS X-gal assay.The interactions between NR2 and VHA-A, NR2 and VHA-E1, CLC-c and VHA-A and CLC and VHA-E1 were analyzed and a blue staining showed the turnover of X-gal by the reporter β-galactosidase.The expression level of the β-galactosidase depends on the strength of the analyzed protein-protein interaction.Representative images out of three independent experiments are shown for each combination.
Activity in insertion lines
The analysis of the insertion lines confirmed that the lines SALK_105121C (nhx1), SALK_120707C (kup5), SALK_115644C (clc-c), and SALK_106153C (avp1) were homozygous while the transgenic line SALK_021486C (cax1) was heterozygous.Unfortunately, the primer combination for genotyping the insertion line clc-c resulted in two products, but both PCR-products obtained with the insertion line clc-c were distinct from the product obtained with the wildtype so that this transgenic line is homozygous.avp1 and cax1 showed a delayed germination in comparison to the wildtype Col-0: After 5 days 86.6% of the wildtype plants germinated, but only 76.1% of the cax1-line and 52.9% of the avp1-line germinated within five days.However, four weeks old plants of nhx1, kup5, clc-c and avp1 did not show a significant difference to the wildtype, but the growth of cax1, though heterozygous, was significantly reduced (Figure 4).Next, the proton pumping activity was obtained by measuring the vacuolar pH in the plants.Vacuoles were loaded with 6carboxyfluorescein, ratiometric images were taken with the two excitation wavelengths 458 nm and 488 nm using a confocal laser scanning microscope.The vacuolar pH was determined based on the obtained images.Significant effects of the t-DNA insertion were observed for nhx1, which showed an increase of pH by 1.38 units, and for cax1, which showed a decrease of vacuolar pH by 0.52 units (Figure 5).Since at least two proton pumps are active at the tonoplast, the V-ATPase activity was measured by the Bafilomycin-sensitive release of phosphate.The phosphate release was normalized to the protein content of the tonoplast samples.All three antiporter-lines showed a reduced V-ATPase activity, the symporter KUP5 did not affect the activity and lack of the primary active proton pump AVP1 was compensated by an increase of V-ATPase activity (Figure 5).
Interplay of nitrate and V-ATPase activity
For simplicity, the heterotrophic Arabidopsis cell culture AT7 was applied to investigate the impact of nitrate on the vacuolar pH and the biomass yield.The strategy had been to analyze combinatory effects of nitrate as nitrogen source and inhibition of V-ATPase by bafilomycin to gain new insights into the interplay of nitrate assimilation and V-ATPase regulation.To this end increasing concentrations of either potassium nitrate as solely nitrogen source or bafilomycin as V-ATPase inhibitor were tested to identify concentrations that resulted in a minor but significant effect on biomass production.Such slight responses were seen for 60 mM potassium nitrate and 10 nM bafilomycin, respectively (table 3): five days after inoculation with 3 g of cell culture, the biomass was 12.3 ± 0.6 g for the control grown in MS-media, 10.2 ± 0.5 g for the supply of 60 mM KNO 3 and 11.1 ± 0.1 g for 10 nM bafilomycin.40 mM KNO 3 resulted in an insignificant reduction of biomass (11.7 ± 0.3 g), 100 mM KNO 3 resulted in a strong reduction of biomass production (4.3 ± 0.1 g), 200 mM KNO 3 in cell death (1.8 ± 0.3 g).Addition of 5 nM bafilomycin gave 11.8 ± 0.5 g which was not significantly different from the control.Concentrations of 20 to 80 nM bafilomycin showed a significant reduction of biomass production to 8.5-3.4 g.
Next, the vacuolar pH was measured in response to these conditions as indicator of V-ATPase activity.The vacuolar pH was 6.73 ± 0.16 under control conditions (18.8 mM KNO 3 and 20.6 mM NH 4 NO 3 ), 7.17 ± 0.17 in presence of 60 mM KNO 3 and 6.42 ± 0.03 with 10 nM bafilomycin.The response to 10 nM bafilomycin indicates rather a disturbance of pH-homeostasis than an inhibition of proton pumping at the tonoplast.The combination of 60 mM KNO 3 and 10 nM bafilomycin gave a vacuolar pH of 6.96 ± 0.21.To determine if this response to both 60 mM KNO 3 and 10 nM bafilomycin is an additive effect, the sum of the pH-values of solely 60 mM KNO 3 and solely 10 nM bafilomycin was calculated in relation to the control and finally offset against the vacuolar pH at control conditions.This gave a vacuolar pH of 6.86 ± 0.2 what is close to the response obtained by -c, d), CAX1 (cax1, e) and AVP1 (avp1, f), respectively.More than 20 plants were analyzed and delayed growth was observed for cax1.Genomic DNA was isolated and the insertion was analyzed by PCR.To this end, three primers were applied: one binding to the insertion, two binding to the genomic sequences flanking the insertion.Wildtype samples resulted in a PCR-product of both flanking primers, samples from the insertion lines resulted in PCR-products of the primer binding to the insertion and one of the two flanking primers.clc-c showed two PCRproducts which were both distinct in size from the wildtype PCR-product.It was concluded that one of the two PCR-products obtained with the clc-c was unspecific.Finally, genotyping revealed that all transgenic lines were homozygous except of cax1 which was heterozygous (g).The heterozygous genotype of cax1 was confirmed by sequencing of the PCR-products.the experimental combination of 60 mM KNO 3 and 10 mM bafilomycin and indicates a simple additive effect of KNO 3 and bafilomycin.However, the data points to a slight overcompensation of the bafilomycin-induced V-ATPase inhibition, probably by the V-PPase, and an effective inhibition of vacuolar acidification by 60 mM nitrate, since the vacuolar pH is close to the cytosolic pH under these conditions.It should be noticed that in this cell type the pH-gradient between cytosol and vacuolar lumen was generally low.
Conclusion
The secondary active transporters NHX1, KUP5, CLC-c and CAX1 and the proton pump AVP1 were tested for their interaction with the V-ATPase.The data demonstrated a physical interaction of CAX1, CLC-c and AVP1 with the V-ATPase, while proof of interaction failed for NHX1 and KUP5.The insertion line affecting AVP1 showed no effect on the vacuolar pH, but an increase of V-ATPase activity.It is concluded, that the V-ATPase compensates successfully for the lack of AVP1.While AVP1 is the dominant pump in younger cells and developing tissues, the V-ATPase is dominant in older tissues [8,10].This distinct activity during plant development might explain that Li and coworkers (2005) [48] observed an increase of vacuolar pH in AVP1 knockout lines.The observed interaction of V-ATPase and AVP supports the idea of a functional interplay of V-ATPase and AVP1 that might be established at the stage of prevacuolar compartments [49] and which is mediated by a direct protein-protein interaction.
In general, NHX-activity reduces the vacuolar pH by exporting protons from the vacuole [50].Interestingly, the knockout of NHX in yeast reduced the vacuolar pH, too [51], pointing to a more complex situation.Indeed, it has been reported before, that NHX1 is involved in the regulation of vacuolar pH and even a direct functional interaction of V-ATPase and NHX1 was described [52,53,54].As FRET depends strongly on dipole-orientation of donor and acceptor, the low FRET-efficiencies measured here do not rule out a physical interaction between V-ATPase and NHX1.The same is valid for the vacuolar potassium ion/proton-symporter KUP5.However, an insertion in the gene encoding KUP5 had no effect on either the vacuolar pH or the V-ATPase activity, so that the coupling of primary and secondary active transport is evident for NHX1 but not for KUP5.It has to be considered that the tonoplast comprises a multitude of transport proteins, which modulate the membrane potential and pH-gradient, and thus, consume or contribute to the energy stored in the pmf.This complexity might have caused the apparent uncoupling of V-ATPase activity and vacuolar acidification in transgenic lines.
The interplay of V-ATPase and CAX1 has been intensively investigated since the nineties.The proton motif force generated by vacuolar proton pumps drives the calcium transport by CAX1 [55,56].The relationship between both transporters appears mutual: While the calciumtransport depends on the pmf, the knockout of CAX1 resulted in reduced V-ATPase activity but increased AVP1-activity and overexpression of CAX1 led to a higher activity of the V-ATPase [57,58].In Catharanthus roseus, the vacuolar pH was 5.6, if CAX-proteins were inactivated, and 6.2, if the transporters were active [59].Based on this data a direct coupling of proton pumps and CAX1 was concluded but a physical interaction was not proven [60].Due to the role of Ca 2+ as important signaling molecule, the double-knockout of CAX1/CAX3 had an impact on plant growth, cell expansion, stomata opening and transpiration [45,61].Within the present work, heterozygous lines showed a reduced V-ATPase activity and a reduced vacuolar pH, so that the reduced V-ATPase-activity does not simply result in an increase of AVP1-activity, it appears to be overcompensated by AVP1.The FRET-data indicates that the mode of bidirectional regulation is a physical interaction between the V 0 -sector and CAX1.The FRET-efficiency, the strong impact of CAX1's knockdown on vacuolar acidification and the negligible degree of transcript correlation might further indicate that the interaction is not established during biosynthesis at the ER, at least not in a stable ratio or constitutive manner.
The connection between nitrate and V-ATPase appears contradictory: on the one hand, nitrate accumulation is a physiological consequence of higher nitrate levels [62].On the other hand, nitrate is known to function as ATP-competitor and thus, inhibits V-ATPases [63] and the vacuolar nitrate accumulation by secondary active anion/proton exchangers depends on the activity of the V-ATPase.This contradiction of nitrate inhibition and required maintenance/stimulation of V-ATPase activity points to a nitrate concentration-dependent regulation of the V-ATPase, turning the V-ATPase on in response to a moderate increase of cellular nitrate concentration, while high nitrate concentrations inhibit the V-ATPase by blocking the ATP-binding sites.Nitrate and chloride are transported by members of the CLC-family, of which CLC-a, CLC-b and CLC-c are located at the tonoplast [6,64,65].All three are known to function as antiporters [6,66].The role of CLC-c is under discussion, initially identified as nitrate transporter, a more recent work stated that CLC-c functions as chloride transporter [6,67,68,69].Within the present work, it could be shown that CLC-c interacts with the V-ATPase and an insertion within the gene encoding CLC-c resulted in a reduced V-ATPase activity, but only insignificantly increased vacuolar pH.This clearly demonstrates the tight relationship between both transporters, regardless of the substrate of CLC-c.The fact that endosomal V-ATPases co-localize with CLC-d [70] point to a general co-occurence of CLC-proteins and V-ATPases.The coupling of nitrate transport and V-ATPase becomes more evident by the observed interaction of V-ATPase and nitrate reductase 2. NR2 is the dominant nitrate reductase responsible for reduction of 90% of the cellular nitrate, only 10% is reduced by NR1 [71].The activity of NR2 is highly induced and the vacuolar nitrate accumulation is reduced in leaves of the vha2vha3-mutant which lacks functional V-ATPases at the tonoplast and shows a vacuolar pH that is less acidic by only 0.5 units in root epidermal cells [10].The higher cytosolic nitrate concentration is known to enhance NR-activity [72] and is related to the reduced vacuolar accumulation of nitrate.The data on the vacuolar acidification differs from the data obtained here.The main difference between both data sets is the presence of an inhibited V-ATPase here and the lack of functional V-ATPases at the tonoplast in the previous work.One possible explanation is that compensation by the V-PPase requires the presence of the V-ATPase so that for instance the V-PPase can sense the activity state of the V-ATPase.On the other hand, high level of nitrate efficiently prevents vacuolar acidification without inducing any compensation, though nitrate is thought to interfere with ATP-binding and should not affect the V-PPase.Obviously, the coordination of vacuolar proton pumps is more complex and depends on multiple factors.Last but not least both V-ATPase and nitrate reductase are regulated in response to blue light via phosphorylation followed by binding of the regulatory 14-3-3 proteins [15,73].This might contribute to the coordination of nitrate assimilation and nitrate transport, too.
Figure 1 .
Figure 1.Co-expression of V-ATPase subunits and vacuolar transporters.Data was taken from ATTED II, the heat map bases on the MR-values as indicated in the legend bar.Co-expression of NR2 and V-ATPase subunits is given for a selection of subunits.The MR-value bases on the ranking of correlation coefficients R of the analyzed transcripts.Co-expressed genes are listed with decreasing R so that rank numbers can be assigned to the co-expressed genes.Then the co-expressed genes serve as query to obtain the rank number of the initial query gene.The MR-value is given as the square root of the product of both rank numbers.
Figure 2 .
Figure 2. Co-localization of vacuolar transporters (NHX1, KUP5, CAX1, CLC-c, AVP1) and the membrane integral V-ATPase subunits VHA-c3/VHA-a3.All cells show the large central vacuole that occupies most of the cellular volume and results in the rim-like localization-pattern of the fluorescent proteins.ECFP-constructs are given in green, EYFP in magenta.Co-localized areas appear white in the merged image.Scale bar: 20 µm.
Figure 4 .
Figure 4. Phenotype of t-DNA insertion lines and wildtype Col-0 plants.Representative images are shown for the wildtype (a) and lines with insertions in the genes encoding for NHX1 (nhx1, b), KUP5 (kup5, c), CLC-c (clc-c, d), CAX1 (cax1, e) and AVP1 (avp1, f), respectively.More than 20 plants were analyzed and delayed growth was observed for cax1.Genomic DNA was isolated and the insertion was analyzed by PCR.To this end, three primers were applied: one binding to the insertion, two binding to the genomic sequences flanking the insertion.Wildtype samples resulted in a PCR-product of both flanking primers, samples from the insertion lines resulted in PCR-products of the primer binding to the insertion and one of the two flanking primers.clc-c showed two PCRproducts which were both distinct in size from the wildtype PCR-product.It was concluded that one of the two PCR-products obtained with the clc-c was unspecific.Finally, genotyping revealed that all transgenic lines were homozygous except of cax1 which was heterozygous (g).The heterozygous genotype of cax1 was confirmed by sequencing of the PCR-products.
Figure 5 .
Figure 5. Vacuolar pH (A) and V-ATPase activity (B) in the transgenic lines and the wildtype Col-0.A) Vacuoles were loaded with the pH-responsive dye 6-carboxy fluorescein and pH was measured by imaging sequentially with the excitation wavelengths 458 nm and 488 nm using a confocal laser scanning microscope (mean ± SE is given, n = 30).B) V-ATPase activity was measured in leaf tonoplasts by bafilomycinsensitive phosphate release.The data was normalized to the average activity of the wildtype plants (mean ± SE, n = 3).Asterisks mark significant differences between the transgenic lines and the wildtype (p < 0.05).
Table 3 .
Impact of KNO 3 and bafilomycin on biomass production and vacuolar pH. | 8,713.6 | 2016-10-13T00:00:00.000 | [
"Physics",
"Environmental Science",
"Chemistry"
] |
A Review of the Safety and Efficacy of Vaccines as Prophylaxis for Clostridium difficile Infections
This review aims to evaluate the literature on the safety and efficacy of novel toxoid vaccines for the prophylaxis of Clostridium difficile infections (CDI) in healthy adults. Literature searches for clinical trials were performed through MEDLINE, ClinicalTrials.gov, and Web of Science using the keywords bacterial vaccines, Clostridium difficile, and vaccine. English-language clinical trials evaluating the efficacy and/or safety of Clostridium difficile toxoid vaccines that were completed and had results posted on ClinicalTrials.gov or in a published journal article were included. Six clinical trials were included. The vaccines were associated with mild self-reported adverse reactions, most commonly injection site reactions and flu-like symptoms, and minimal serious adverse events. Five clinical trials found marked increases in antibody production in vaccinated participants following each dose of the vaccine. Clinical trials evaluating C. difficile toxoid vaccines have shown them to be well tolerated and relatively safe. Surrogate markers of efficacy (seroconversion and geometric mean antibody levels) have shown significant immune responses to a vaccination series in healthy adults, indicating that they have the potential to be used as prophylaxis for CDI. However, more research is needed to determine the clinical benefits of the vaccines.
Introduction
Clostridium difficile (C. difficile) is a gram-positive, spore-forming bacterium known for causing severe diarrhea [1]. C. difficile infections (CDI) are often contracted and transmitted in healthcare settings, such as hospitals and long-term care facilities [1,2]. Patients with prolonged hospitalizations are among those at the highest risk for developing CDI [1]. For these reasons, they are traditionally considered healthcare-related infections [3]. However, while most cases of CDI are contracted in healthcare settings, community-acquired CDI rates are increasing [1]. Other risk factors include antibiotic use, advanced age, cancer chemotherapy, and proton pump inhibitor use [1,4]. Antibiotic use increases the risk of CDI because of alterations in the normal bowel flora, providing a suitable environment for C. difficile to flourish [1]. Colonization of C. difficile results in an infection that can vary in degree of severity, ranging from mild inflammatory diarrhea to pseudomembranous colitis, toxic megacolon, sepsis, and death [2].
C. difficile exerts its effects on the gastrointestinal (GI) tract by releasing two toxins that can bind to and damage intestinal epithelium. Toxins A (an enterotoxin) and B (a cytotoxin) contribute differently to the pathophysiology of CDI. Toxin A is associated with the secretion of fluid and generalized inflammation in the GI tract. Toxin B is considered the main determinant of virulence in recurrent CDI, and is associated with more severe damage to the colon [2,3].
Since these two toxins are implicated in the harmful effects associated with CDI, they have become prominent targets in the search for preventative measures for the infection [5]. One major area of research involves the use of altered toxin structures as targets for a vaccine, termed toxoid vaccines. The toxoid is altered such that it will not damage the GI tract, but will prime the immune system to recognize and remove the real toxins from the bacteria if they are encountered in the future, thereby preventing disease [6][7][8]. There are a few toxoid vaccines for CDI currently in early stages of development, but there is not much evidence relating to their efficacy. Past research has shown a negative correlation between patients' antitoxin antibody levels and their risk of recurrent CDI. For this reason, many researchers believe that a toxoid vaccine that can promote antibody production is a promising research pursuit [9].
While C. difficile toxoid vaccines can neutralize exotoxins and help prevent toxin-mediated symptoms, they also have several limitations. Previous research has indicated that toxoid vaccines do not have the ability to prevent colonization of C. difficile in the GI tract or prevent cytotoxicity [10]. Furthermore, they cannot prevent C. difficile sporulation or shedding of these spores into the environment, thus potentially increasing the number of asymptomatic carriers of the infection [11,12].
Several other types of C. difficile vaccines are also in development. For example, the approach of passive immunization from administration of monoclonal antibodies to both toxins A and B seems promising because of the long in vivo half-life of monoclonal antibodies [13,14]. A carbohydrate-based polysaccharide-II (PS-II)-conjugated vaccine directed toward C. difficile cell wall components has also demonstrated immunogenicity in animal models [12]. Vaccine candidates that inhibit colonization and adhesion of bacteria to gut epithelia are also being evaluated. In C. difficile, two proteins encoded by a single gene, SIpA, are found on the bacterial cell surface. An active vaccination regimen using an extract of two SIP proteins with different adjuvants has been shown to elicit an antibody response in animal models [12].
At present, methods for primary and secondary prevention of CDI are controversial. Many forms of prophylaxis have been proposed, including probiotics and antibiotics, though none are recommended by the Infectious Disease Society of America (IDSA). The only preventative measures currently included in the IDSA guidelines are antimicrobial stewardship and the maintenance of clean, disinfected surfaces to promote sanitary conditions [15].
The potential severity of and damage caused by CDI in combination with their rising incidence renders the subject of prophylaxis a pressing public health concern. Previous research into various toxoid vaccines suggests their promise as preventative measures and this review aims to evaluate clinical trials that test both the safety and efficacy of toxoid vaccines for CDI prophylaxis.
Materials and Methods
A literature search was devised to find research evaluating toxoid vaccines for the prevention of CDI. Outcomes of interest were safety measures (adverse reactions (ARs) and adverse events (AEs) experienced after vaccine administration) and efficacy measures. The trials found in this literature search assessed efficacy using several surrogate markers, including seroconversion as well as geometric mean fold rises (GMFRs) and concentrations (GMCs) of antibody levels. All methods for assessing efficacy were included.
A MEDLINE search (2000-2017) was performed using the key words bacterial vaccines and Clostridium difficile. Only clinical trials were included; trials not specifically performing research on the safety and/or efficacy of a C. difficile vaccine were excluded.
A ClinicalTrials.gov search (2000-2017) was performed using the key words "Clostridium difficile vaccine". The search was limited to clinical trials that were closed and completed at the time of the search and had results posted. One trial that did not have results posted was included because the results were published in a corresponding journal article.
A third literature search was performed for articles through Web of Science (2000-2017) using the keywords Clostridium difficile toxoid vaccine. Only clinical trials were included.
Six clinical trials were included in this review. Out of 85 primary results from the MEDLINE search, two trials met the inclusion criteria once duplicates were excluded. From 17 found in the initial ClinicialTrials.gov search, 12 were completed at the time of the search. Four of these met all the inclusion criteria. One clinical trial included in the review had results posted on ClinicalTrials.gov that have not been published [16]. Out of the eight primary results found through Web of Science, none met the inclusion criteria for this review.
Results
Every trial evaluated a C. difficile vaccination series (3-4 doses of vaccine) administered to healthy participants at pre-specified time points over varying intervals (ranging from 21-180 days). Five trials [17][18][19][20][21] used two variations of a toxoid vaccine: an aluminum-based adjuvant (Alum) vaccine and a non-Alum vaccine, the sixth trial [16] did not specify the formulation. In every trial, the researchers followed up with participants at one or more time points to collect blood samples and self-reported safety data. All trials utilized a dose-escalating treatment regimen, in which participants were assigned to receive different doses of the vaccine.
To assess safety, the trials looked at local and systemic ARs and AEs experienced after vaccination. Every trial measured safety endpoints for at least six days after each dose, the most common period of measurement being seven days after each dose. All trials reported mild local ARs. Overall, there were few moderate/severe ARs or AEs reported. The most commonly reported ARs/AEs were related to the injection site (e.g., pain, erythema) and flu-like symptoms (e.g., malaise, fatigue, headache). Detailed safety information from each trial is outlined in Table 1.
In addition to safety endpoints, five trials evaluated efficacy endpoints [17][18][19][20][21]. The main efficacy parameter was immune response to vaccination in the form of anti-toxin antibody production. The specific methodology used to measure efficacy outcomes in each trial is summarized in Table 2. Seroconversion was defined as an increase in antibody levels of at least four times the baseline value (≥4-fold increase from baseline). GMFR uses the average fold-increase in log transformed antibody levels from baseline. GMCs use the absolute average of the log transformed antibody levels. Seroconversion rates and GMFRs are summarized in Table 2.
Summary of Trials
Kotloff et al. [17] assessed the safety, immunogenicity, and dose response of a C. difficile toxoid vaccine as Alum or non-Alum formulations, in 30 healthy participants (median age: 23 years). Individuals were excluded from the trial if they had a history of antibiotic-associated diarrhea or antibiotic use in the past month. Participants were sequentially assigned to receive one of three study doses: 6.25, 25, or 100 mcg. Vaccines were administered on day 1, 8, 30, and 60. The researchers assessed IgA and IgG antibody production by collecting peripheral blood mononuclear cells (PBMC) from participants one week before and after immunization and performing an adapted Enzyme-Linked Immunospot (ELISPOT) assay on the collection. In the 6.25 mcg dose group, two participants (20% of the group) did not exhibit ≥4-fold rises in antibodies. One of the participants that did not respond received an Alum adjuvant vaccine and the other received a non-Alum vaccine. The highest antibody responses were seen in the 25 mcg with Alum and 100 mcg non-Alum dose groups for Toxin A. In contrast, antibody responses increased with increasing dose for Toxin B. Vaccine dose and formulation were not found to be significantly related to the magnitude of antibody response. Levels of serum antibodies were found to correlate with serum anti-Toxin A IgG (r = 0.83, p < 0.001). The researchers reported that a three dose series (on day 1, 8, and 30) appeared to be adequate because a fourth dose did not substantially boost serum IgG or antibodies.
Bezay et al. [18] evaluated VLA84 (experimental vaccine), a single recombinant fusion protein comprised of portions of C. difficile Toxins A and B. The primary outcomes were safety and tolerability of VLA84 in healthy adults using different doses and formulations. The secondary outcomes were immunogenicity and dose-response. In part A of the study, 60 healthy adults (18-64 years) received one of five vaccine doses: VLA84 20 mcg Alum, 75 mcg Alum or non-Alum, or 200 mcg Alum or non-Alum. Three doses of the vaccine were given on day 0, 7, and 21. In part B of the study, 80 elderly (≥65 years) participants were randomized to receive VLA82 75 mcg Alum or non-Alum or 200 mcg Alum or non-Alum formulations on day 0, 7, 28, and 56. The researchers assessed antibody production by collecting serum samples from participants on day 0 and 28 (adults) or 56 (elderly) and performing Enzyme-Linked Immunosorbent assays (ELISA) and Toxin Neutralization Assays (TNA) on the collections. The researchers reported high antibody responses to both toxins in both the adult and elderly groups after vaccination. However, the 20 mcg Alum dose group showed significantly lower responses than the 75 and 200 mcg non-Alum dose groups at several time points. The highest antibody responses were found one week after the last vaccination in the adult sample and four weeks after the last vaccination in the elderly sample. Based on preliminary analysis, the researchers chose to analyze antibody responses for the 75 mcg non-Alum dose group. The results of this analysis are summarized in Table 2. In this dose group, antibody responses to both C. difficile toxins were found to decrease to 20-25% of the peak response after six months, similar to past findings [19].
Greenberg et al. [19] assessed an adjuvant C. difficile toxoid vaccine in adult (18-55 years) or elderly (≥65 years) participants over a 70-day treatment period. Participants with a history of antibiotic-associated diarrhea or recent antibiotic use were excluded. Participants were randomly assigned to one of four groups: 2, 10, or 50 mcg or placebo. Three doses of the vaccine were administered on day 0, 28, and 56. The researchers assessed IgG production to both toxins A and B by collecting serum samples from participants and obtaining ELISA titers on the collections, using previously collected pooled human plasma as a control measure. The fastest antibody level increase rates were reported for the 50 mcg dose group. In both adult and elderly participants, antibody responses for Toxin B were found to be lower than for Toxin A. Anti-Toxin A and B antibody levels were found to increase until day 70 and were in decline by day 236, which is consistent with responses to other types of toxoid vaccines [22].
de Bruyn et al. [20] assessed the optimal formulation and dosing schedule for an investigational C. difficile toxoid vaccine. To be included in the trial, participants had to be considered at increased risk for infection, which was defined as impending hospitalization and/or prolonged residence in a long-term care facility or nursing home. In stage 1 of the study, 455 participants were randomized to one of five groups: 50 or 100 mcg Alum or non-Alum formulations or placebo. Participants received three doses of the vaccine on day 0, 7, and 30. In stage 2 of the study, 206 participants were randomized to receive a selected vaccine formulation, based on immunogenicity seen in stage 1, according to two vaccination schedules (day 0, 7, and 30 or day 0, 30, and 180). The researchers assessed anti-toxin IgG to toxins A and B as well as anti-toxin neutralizing capacity by collecting blood samples from participants and obtaining, respectively, ELISA titers and TNA titers on the collections. GMCs for both toxins were found to peak on day 60 and decline by day 180. A comparison of the immune responses based on the formulation and schedule resulted in the 100 mcg plus Alum formulation and day 0, 7, and 30 schedule being selected for further development. From the trial results, the researchers reasoned that a vaccine formulation that included an adjuvant would be critical to enhance immune responses in high-risk patients.
Sheldon et al. [21] evaluated a three-dose series of a C. difficile toxoid vaccine. The primary outcome was the safety and tolerability of the C. difficile vaccine. The secondary outcome was immunogenicity of the vaccine, measured by GMFRs of C. difficile Toxin A-specific and Toxin B-specific antibodies. Anyone with a history of CDI or antibiotic use in the past month was excluded from the trial. Participants were randomly assigned to one of seven groups: 50, 100, or 200 mcg Alum or non-Alum formulations or placebo. Vaccines were administered on day 1, 30, and 168. Two cohorts of participants were studied. Cohort 1 consisted of adults 50-64 years of age (n = 97) and cohort 2 consisted of adults 65-85 years of age (n = 95). The researchers assessed neutralizing antibody production by collecting serum samples from participants and performing TNAs on the collections. Substantial increases in antibodies from baseline to both toxins A and B were found in vaccinated participants one month after the second dose and one month after the third dose. The researchers reported that the increases in antibody levels after seven months persisted at least 12 months. Antibody levels for both toxins tended to be higher for the non-Alum compared to the Alum formulations at nearly all time points. No dose-response relationship was observed for vaccine doses and formulations, which the researchers noted may be due to the trial's lack of power.
Finally, a national clinical trial evaluated local and systemic vaccine-related ARs to a C. difficile toxoid vaccine, but did not evaluate efficacy endpoints [16]. As previously mentioned, this clinical trial has safety results posted on ClinicalTrials.gov, but these results have not been published.
Discussion
As research into toxoid vaccines for the prevention of CDI builds, it is essential to compile the findings of individual trials to understand the progression of the field of research as a whole. This review assessed the methodology and outcomes of six clinical trials that tested the safety and efficacy of toxoid vaccines to prevent CDI.
Every trial included in this review found mild ARs consistent with those found to accompany vaccine administration in general [23]. From these findings, toxoid vaccines seem promising from a tolerability and safety standpoint. However, two trials specified that participants were only able to report ARs that were pre-specified by the researchers [16,17]. These ARs tended to be those well-documented to be associated with vaccines (such as injection site pain, fever, and headache). This methodology may have biased participant reporting and resulted in researchers overlooking less common ARs or AEs from the vaccine. Future research should maintain focus on the safety of the vaccine, particularly as more participants and patients become involved in later phases of research.
An interesting finding from multiple studies was the absence of a dose-response relationship to the vaccine for both safety and efficacy. There was no clearly demonstrated or consistent relationship between increasing doses of the vaccine and ARs or AEs, or between increasing doses of the vaccine and the body's immune response. It is not practical to directly compare the trials' vaccine regimens to determine optimal dosing because the formulations and vaccines themselves differed. However, the best dosing of the vaccines included in this review are as yet unclear and more research is required to determine the lowest doses that provide the optimal immune response and minimized side effects.
Three trials found that antibody responses were notably diminished by 145-160 days after the last vaccine [18][19][20]. de Bruyn et al. [20] tested different dosing schedules and their findings suggest that the timing of vaccine administration may play an important role in the body's response to vaccination. The results found in the trials in this review question the vaccines' long-term effects, and bring up the potential need for further doses or boosters of the vaccine to allow patients to experience long-term clinical benefits.
The trials that evaluated the use of Alum adjuvants with their vaccines did not find clear relationships between the use of an adjuvant and any outcome measures; while two trials [18,21] found the non-Alum formulations resulted in higher immune responses, one trial [20] found that the Alum formulations resulted in higher immune responses. Adjuvants are generally used with vaccines to increase the body's immune response, but are often associated with a greater degree of ARs and AEs [24]. The trials in this review did not reliably find an increasing immune response or rate/severity of adverse reactions for adjuvant vaccines compared to non-adjuvant vaccines.
Five trials differentiated between participants' immune response to Toxin A versus Toxin B [17][18][19][20][21]. All of these trials found increases in antibodies to both toxins at various time points after vaccine administration. However, the findings did not demonstrate whether there were significant differences in magnitude between antibodies produced to Toxin A or Toxin B, and if so, which had a stronger response. As previously mentioned, while both toxins contribute to the pathophysiology associated with CDI, Toxin B is generally associated with worse outcomes. Therefore, an optimal C. difficile vaccine may need to account for this, targeting antibody production against Toxin B over Toxin A if possible. From the results of the trials included in this review, there was no reliably directed response toward one toxin over the other. These are important points for further research, especially considering the different ways the toxins contribute to the clinical manifestations of the disease.
Finally, primary and recurrent/secondary CDI are both serious public health concerns and need to be considered to fully understand the scope of the infection. Though vaccines for recurrent CDI have not been evaluated in clinical trials, Sougioultzis et al. [25] found that six months after the last dose of a four-dose vaccine series, three patients with recurrent infection showed no further recurrence. However, antibody responses to vaccination in these three patients were variable; only two showed substantial increases compared to baseline. These results highlight the point that surrogate endpoints used as efficacy measurements are not sufficient to claim efficacy or effectiveness of the toxoid vaccines.
Bezlotoxumab is a monoclonal antibody that has been studied more extensively in patients who have experienced CDI. When given in conjunction with antibiotic treatment for a primary CDI, bezlotoxumab significantly reduced the risk of developing recurrent/secondary CDI [26]. Since this drug has been shown to be clinically effective in the target population, it calls into question what the appropriate timing to administer toxoid vaccines would be and their place in therapy.
Conclusions
Though a substantial amount of research has been done on C. difficile toxoid vaccines in recent years, there are still many areas of inconsistency in the literature. While the vaccines have been shown to be generally well tolerated, their efficacy is questionable. All trials in this review that evaluated efficacy found substantial immune responses after vaccination. However, there were no clear relationships between dose and response or between adjuvant formulation and response. Furthermore, there is evidence that antibody levels from the vaccine may diminish in the long term. These points of contention show the need for further research on the optimal dose, dosing schedule, and formulation of the toxoid vaccines. Finally, the efficacy of the vaccines seems promising from surrogate endpoints measured in these clinical trials, but more research is needed to determine their clinical benefits. | 5,042.8 | 2017-09-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Using chaotic advection for facile high-throughput fabrication of ordered multilayer micro- and nanostructures: continuous chaotic printing
This paper introduces the concept of continuous chaotic printing, i.e. the use of chaotic flows for deterministic and continuous extrusion of fibers with internal multilayered micro- or nanostructures. Two free-flowing materials are coextruded through a printhead containing a miniaturized Kenics static mixer (KSM) composed of multiple helicoidal elements. This produces a fiber with a well-defined internal multilayer microarchitecture at high-throughput (>1.0 m min−1). The number of mixing elements and the printhead diameter determine the number and thickness of the internal lamellae, which are generated according to successive bifurcations that yield a vast amount of inter-material surface area (∼102 cm2 cm−3) at high resolution (∼10 µm). This creates structures with extremely high surface area to volume ratio (SAV). Comparison of experimental and computational results demonstrates that continuous chaotic 3D printing is a robust process with predictable output. In an exciting new development, we demonstrate a method for scaling down these microstructures by 3 orders of magnitude, to the nanoscale level (∼150 nm), by feeding the output of a continuous chaotic 3D printhead into an electrospinner. The simplicity and high resolution of continuous chaotic printing strongly supports its potential use in novel applications, including—but not limited to—bioprinting of multi-scale layered biological structures such as bacterial communities, living tissues composed of organized multiple mammalian cell types, and fabrication of smart multi-material and multilayered constructs for biomedical applications.
Introduction
Multi-material and multi-layered architectures achieve functionality and/or performance that are not achievable with monolithic materials. Moreover, the functionality and performance of multilayered composites is frequently determined by the proximity, indeed the density, of the constituent layers. Multilayered materials with a high amount of internal surface area can yield higher capacitances in supercapacitors [1], elevated mechanical strength [2,3] and fatigue resistance [4], better sensing capabilities [5], or improved energy-harvesting potential [6]. A multi-lamellar architecture that features highly accurate control of surface geometry and surface area is also desirable in applications related to the controlled release of pharmaceuticals [7].
Multilayered structures are particularly relevant in nature and in biological applications. Indeed, one of the most pressing challenges in biofabrication is the development of strategies for the facile and highthroughput creation of multilayered and multimaterial tissue-like constructs. Real tissues are composed of multiple micrometer-thickness layers of distinct cell types. Although appealing and enabling, the cost-effective fabrication of multi-material, and perhaps multi-cell type, lamellar microarchitectures has proven to be challenging, especially when adjacent thin, perhaps single cell layers, of multiple cell types are desired. The current bioprinting and bioassembly technologies are capable of fabricating relatively complex lamellar architectures, but have difficulty placing large surfaces of different types of cells next to each other in a cost-effective manner. For instance, current strategies for multi-material bioprinting or bioassembly of multiple inks in the same printing operation face severe limitations in resolution and speed [8,9]. A combination of multiple channels, each one dispensing one material, has been demonstrated to fabricate multi-material constructs with resolutions in the range of 50-100 µm [10][11][12][13][14][15]. We recently demonstrated multi-material 3D printing of perfusable multi-layered cannulas by co-extruding multiple streams of inks through a set of concentric capillary tubes contained in a single nozzle [13,16]. Similarly, Kang and coworkers presented an extrusion printing technique that produced multi-material tissue-like microstructures by co-extrusion of different materials through a head with a pre-set internal architecture [14]. These stateof-the-art approaches to 3D bioprinting produce structures at resolutions dictated (in the best scenario) by the smallest relevant length scale of the nozzle [8] (i.e. 300-500 µm) and exhibit only moderate speeds (i.e. BioX from Cellink prints at a maximum linear speed of the printhead of 40 mm s −1 ). To date, no study has demonstrated the robust, fast, and cost-effective fabrication of reproducible micro-and nanostructures in a multi-material construct through a single-nozzle printhead.
Here we introduce the concept of continuous chaotic printing: the use of a simple laminar chaotic flow induced by a static mixer for the continuous creation of fine and complex structures at the micrometer and submicrometer levels within polymer fibers. Chaotic flows are used to mix in the laminar regime, where the conditions of low speed and high viscosity preclude the use of turbulence to achieve homogeneity [17,18]. In the context of 3D printing, they have been suggested as a tool to provide better homogenization of different materials [19,20]. However, a much less exploited characteristic of chaotic flows is their potential to create defined multi-material and multi-lamellar structures [21][22][23][24]. A recent contribution from our group demonstrated, for the first time, the use of simple chaotic flows (i.e. Journal Bearing flow) to imprint fine microstructures within constructs in a controlled and predictable manner at an exponentially fast rate in a batch-wise fashion [25]. In the present study, we explore the utility of a chaotic printer, equipped with a Kenics static mixer (KSM) [26] as a key component of the printhead, for the printing of alginate-based fibers with massive amounts of lamellar microstructures in a continuous fashion (figure 1).
Continuous chaotic printing: a simple and effective microfabrication strategy
Our chaotic printer is composed of a flow distributor, a pipe, a static mixing section, and an outlet or nozzle tip (figures 1(A), (B)). The number of inks one can use is unrestricted, and different distributor geometries can be employed to accommodate the injection of multiple inks. However, in this communication we adhered to some of the simplest Figure 1. Experimental setup. Continuous chaotic printing is based on the ability of a static mixer to create structure within a fluid. The Kenics static mixer (KSM) induces a chaotic flow by a repeated process of reorientation and splitting of fluid as it passes through the mixing elements. (A) Schematic representation of a KSM with two inlets on the lid. The inks are fed at a constant rate through the inlets using syringe pumps. The inks flow across the static mixer to produce a lamellar structure at the outlet. The inks are crosslinked at the exit of the KSM to stabilize the structure. Our KSM design includes a cap with 2 inlet ports, a straight non-mixing section that keeps the ink injections independent, a mixing section containing one or more mixing elements, and a nozzle tip. The lid can be adapted to inject several inks simultaneously. (B) Two rotated views at 0 • and 90 • , of a single KSM element. (C) 3D design of a KSM with 6 elements and schematic representation of the flow splitting action, the increase in the number of striations, and the reduction in length scales, in a KSM-printhead. The resolution, namely the number of lamellae and the distance between them (δ), can be tuned using different numbers of KSM elements. (D) Actual continuous chaotic printing in operation. The inset (E) shows the inner lamellar structure formed at the cross-section of the printed fiber (the use of 4 KSM elements originates 16 striations). Scale bar: 250 µm. (F) Longitudinal or (G) cross-sectional microstructure of fiber obtained using different tip nozzle geometries. Images show CFD results of particle tracking experiments where two different inks containing red or green particles are coextruded through a printhead containing 4 KSM elements. The lamellar structure is preserved when the outlet diameter is reduced, from 4 mm (inner diameter of the pipe section) to 2 mm (inner diameter of the tip), through tips differing in their reduction slope. printing scenarios. To this end, we adopted a distributor configuration (figures 1(A), (C)) for dispensing two inks in a symmetrical fashion. The mixing section contains a KSM, a static mixer configuration widely adopted in the chemical industry, that consists of a serial arrangement of n number of helical elements contained in a tubular pipe, with each element rotated 90 • with respect to the previous one (figures 1(B), (C)). In the laminar regime, the KSM (and other static mixers [27][28][29]) produces chaos by repeatedly splitting and reorienting materials as they flow through each element. With this simple mechanism, lamellar interfaces are effectively produced between fluids (i.e. printheads containing 1, 2, 3, 4, 5, or 6 KSM elements will produce 2, 4, 8, 16, 32, or 64 defined striations; figure 1 (C)). Our results show that multi-material lamellar structures with different degrees of inter-material surface can be printed using a single nozzle by simply coextruding two different materials (i.e. inks) through a KSM.
In the experiments presented here, we used sodium alginate to formulate different inks consisting of pristine alginate or suspensions of particles (polymer microparticles, graphite microparticles, mammalian cells or bacteria). For instance, we conducted experiments in which one or two types of fluorescent microparticles (i.e. red and green bacteria or red and green polymer beads) were injected into the inlets of the mixer distributor (figures 1(D), (E)). The result is continuous composite fibers with complex lamellar microstructures (figure 1(E) and figure S1) that can be stabilized simply by crosslinking in a bath of calcium chloride solution. This preserved the internal microstructure of the fibers with high fidelity (figure S1). Fine and well-aligned microstructures with defined features can be robustly fabricated along the printed fibers at remarkably high extrusion speeds (1-5 m of fiber/min). As we will show later, a vast amount of contact area is developed within each linear meter of these fibers. This printing strategy is also robust across a wide range of operation settings. We conducted a series of printing experiments at different inlet flow rates to assess the stability of the printing process. As long as the flow regime is laminar and the fluid behaves in a Newtonian manner (figure S1), the quality of the printing process is not affected by the flow rate used in a wide range of flow conditions. For example, using a cone-shaped nozzle-tip with an outlet diameter of 1 mm, stable fibers were obtained in a window of flow rates from 0.003 to 5.0 ml min −1 (figure 1(D)). Having printheads with different geometries (different degrees of slope) did not disturb the lamellar structure generated by chaotic printing. Computational fluid dynamics (CFD) simulation results suggested that the angle of inclination of the conical tip of the printhead (nozzle tip) did not affect the microstructure within the fiber in the range of the tested flow rates and reduction slopes. Figures 1(F), (G), and S2 show a computational analysis of the effect of the shape of the printhead tip (angle) on the conservation of the microstructure of printed fibers produced from a mixture of alginate inks containing red and green particles.
Multilayered and well-aligned microstructures
The fabrication of fibers with fine lamellar microstructures will enable the design of materials for relevant biological applications such as the development of high surface biosensors, or composite materials with tunable mechanical properties for cell culture or bioactuation.
In figures 2 and 3, we present the results of an experiment in which a suspension of 0.5% graphite microparticles in pristine alginate ink (2%) was coextruded with pristine alginate ink (2%). For this illustrative experiment, the printhead outlet had a diameter of 1 mm (figure 2(A)). Note that the features in the extruded structure were remarkably similar at different lengths of the fiber (figures 2(B)-(F)). For instance, we calculated the area (shadowed in yellow) and the perimeter (indicated with a green line) of each of the graphite striations in five crosssectional cuts along a fiber segment (figure 2(C)). Figure 2(D) shows an overlap of the microstructure for three of these cross-sections. The standard deviation of the area (figure 2(E)) and perimeter (figure 2(F)) for each of the striations is relatively small (the variance coefficient smaller than 10%). This illustrates the robustness of this printing strategy with small nozzle diameters, as well as the reproducibility of the microstructure obtained at different lengths of the fiber.
Notably, the resolution of this technique is controlled by both the diameter of the nozzle and the number of mixing elements. As the number of elements used to print increased, the number of lamellae observed in any given cross-sectional plane of the fiber also increased, while the thickness of each lamella decreased (figure 3(A)). Therefore, users of continuous chaotic printing will have more degrees of freedom to determine the multi-scale resolution of a construct, as this is no longer mainly restricted by the diameter of the nozzle (or the smallest length-scale of the nozzle at cross-section). For instance, for our twostream system (figure 1(C)), the number of lamellae increases exponentially according to the simple model s = 2 n , where s is the number of lamellae or striations within the construct and n is the number of KSM elements within the extrusion tube.
Two streams of inks co-injected into the printhead will generate 4, 8, 16, 32, and 64 distinctive streams of fluid when passing through a series of 2, 3, 4, 5, and 6 KSM elements, respectively (figure 3(A)). The average resolution of the structure will then be governed by the average striation thickness of the construct (δ), given by δ = D/s, were D is the nozzle inner diameter (figure 1(C)). Since stretching is exponential in chaotic flows [21,23,25], the reduction in the length scale is also exponential, as is the increase in resolution (i.e. more closely packed lines). In the experiment portrayed in figure 3, the cross-sectional diameter of the fibers was 2 mm. We observed defined average striations with resolutions of~500, 250, 125, 62.5, and 31.75 µm by continuously printing using 2, 3, 4, 5, and 6 KSM elements, respectively. Even when 6 KSM elements were used, distinctive lamellae could be discriminated in the array of 64 aligned striations ( figure 3(A)). The resolution values obtained through 6 elements already exceeded those achievable by state-of-the-art commercial 3D extrusion printers (~100-75 µm) [30,31] that use hydrogel-based inks (i.e. commercial bioprinters) [30,32].
Another remarkable characteristic of continuous chaotic printing is that the structure obtained is fully predictable, since chaotic flows are deterministic systems (as any chaotic system) [23,33]. Simulation results, obtained by solving the Navier-Stoke equations of fluid motion using CFD [34,35], closely reproduced the cross-sectional lamellar microarchitecture within the fibers (figures 1(F), (G); figure 3(B)).
Moreover, we used optical microscopy and image analysis techniques to characterize the fine array of lamellae experimentally produced by continuous chaotic printing. We calculated the striation thickness distribution (STD) on the cross-sections of the graphite/alginate fibers. We did this by drawing several center lines of representative cross-sections and then calculating the distance between striations along those lines ( Figure S3). The frequency distribution and the cumulative STD were then measured. Figures 3(C) and (D), respectively, show the STD and the cumulative STD for constructs printed using 4, 5, 6, and 7 KSM elements. Remarkably, this family of distributions exhibits self-similarity, one of the distinctive features of chaotic processes [22,23,25]. As discussed, for any of these particular cases, the average striation thickness could be calculated as the fiber diameter/number of striations (D/s). However, due to the highly skewed shape of the distribution toward smaller striation thicknesses (figure 3(D)), the median striation thickness is lower than the average striation value. For example, for the case where 4 KSM elements were used, the average striation thickness can be calculated as 2 mm/16 = 125 µm. Indeed, 50% of the striations measured less than 125 µm (figures 3(C), (D)), but the corresponding STD showed that most of the striations had a median value of about 75 µm. This has profound implications for crucial processes such as cell attachment, cell signaling, local reaction kinetics, and mass and heat transfer. For instance, the diffusional distances (δ) in these constructs decreased rapidly with an increase in the number of elements used to print (i.e. following the model δ/(2 n )). The diffusional length scales are then reduced by half each time that a KSM element is added to a chaotic printhead. Since diffusion time increases with the square of the diffusion distance (and is only inversely proportional to the diffusion coefficient), the diffusion time decreases 4fold per element added. This implies that the time relevant to cell signaling decreases 8-fold (almost an order of magnitude) if one KSM element is added to the printhead. As we will demonstrate later, the intermaterial area per unit of volume, which is key for surface-catalyzed reactions and cell attachment, increases exponentially as the number of elements is increased (figure 4).
Fiber and particle alignment is key in many applications in materials technology [36]. We next show that the fabrication of well-aligned microstructures achievable through chaotic printing can influence relevant characteristics of composites, such as the robustness of their mechanical performance. We characterized the mechanical properties of alginatebased fibers 2.5 cm in length produced by chaotic printing (and therefore having different internal structures) or hand-mixing and extrusion through an empty pipe (figure 3; figure S4). Specifically, we conducted tensile testing using a universal testing machine on fibers produced from a mixture of 0.5% graphite microparticles in alginate, generated either by hand-mixing (a control without lamellar structures) or by continuous chaotic printing using 2, 4, or 6 KSM elements. Figure 3(E) shows the stress-strain curves associated with the resulting fibers. We did not find significant differences in the Young's modulus, ultimate stress, or maximum elongation at break in these sets of fibers (figure S4). However, the fibers exhibited less variability when produced by continuous chaotic printing than those by extrusion of handmixed inks. Among the fibers produced by chaotic printing, the fibers were more homogeneous when co-extruded through printheads containing 4 or 6 elements than only two elements. Figure 3(F) shows an analysis of the standard deviation of relevant mechanical performance indicators associated with different microstructures. These results suggest that effective alignment of the microstructures within the fibers resulted in a more reproducible mechanical performance in structurally complex materials such as alginate hydrogels.
Bioprinting applications
Bioprinting (i.e. the printing of living cells and biomaterials in a predefined fashion) is presently even more limited in resolution and speed than additive manufacturing techniques in general. We further illustrate a biological application of continuous chaotic printing by fabricating constructs with specific microarchitectures containing living cells.
Tightly controlling the degree of intimacy (i.e. the density of interfaces between bacterial populations) may enable the fabrication of 3D multi-material constructs with novel functionalities [37] and is of paramount importance in modern microbiology [38], for example on the design of physiologically relevant gutmicrobiota models [39]. The spatial arrangement and distribution of bacteria, recently described as 'microbiogeography' [38], is an important determinant of bacterial community dynamics. Different species of bacteria interact with other micro-organisms through chemical signals [38,40,41]. For example, quorum sensing, a well-studied phenomenon, depends on the vicinity and the amount of surface area shared among bacterial communities [42]. In general, the dynamics of competition or mutualism in mixed microbial communities is strongly influenced by spatial distribution [43][44][45][46]. However, relatively few studies have addressed the relationship between spatial distribution, distance, and cell density in bacterial systems [43][44][45][46][47][48][49]. This is partially due to the fact that conventional microbiology techniques offer only a limited degree of control over the spatial organization of mixed cultures [50].
Continuous chaotic printing enables precise control of the spatial distribution of bacterial communities, aligned in a lamellar microstructure, and allows meticulous and unprecedented design and regulation of the amount of interface between bands of bacteria.
In figures 4(A)-(E), we used two recombinant E. coli strains, one producing red fluorescent protein (RFP) and the other producing green fluorescent protein (GFP) to fabricate cell-laden fibers. As anticipated, well-defined bacterial striations could be printed using our technique ( figure 4(A)).
Remarkably, the bacteria could be cultured in these fibers for extended time periods. We followed the kinetic behavior of both bacterial populations (i.e. GFP-and RFP-bacteria) in the fibers initially seeded at low concentrations. During the first 24 h of culture (from t = 0 to 24 h), the intensity of the fluorescence produced by the bacterial colonies increased, while the bacteria continued to respect the original patterns in which they had been printed ( figure 4(B)). We corroborated the increase in the number of live bacteria by conventional colony-forming units (CFU) microbiological assays (figures 4(C), (D). To do this, we sampled multiple sections of fibers. We consistently observed that the bacterial populations grew in both areas for the first 24 h, exhibited a short plateau, and later decayed. Interestingly, we observed a statistically significant difference in the number of viable green and red bacteria (P < 0.05) at 24 and 48 h after printing. These differences suggest that these two populations of bacteria, although practically identical in their genetic makeup, establish a competition for resources along shared interfaces (see also [51,52]). These results demonstrated that chaotic printing can be used for the fabrication of dynamic living systems that are capable of evolving in time from very well-defined initial conditions. In addition, massive amounts of interface between green and red bacterial regions could be developed if more KSM elements were used during printing. For example, the boundary between the green and red bacterial regions can be effectively tuned from~1 mm down to 15 µm by varying the number of KSM elements used to print (from 2 to 7, figure 4(E)). Since the maximum length of these bacteria is~2 µm, we were able to imprint lamellae of bacteria in the resolution range of tens of micrometers. The diameter of these fibers was 1 mm. Therefore, printing using 7 KSM elements yielded lamellae with average striation thicknesses of less than 10 µm (median lower than 7 µm). This means that each lamella might accommodate a few bacterial cells across its width. While a characteristic standard deviation occurs with chaotically printed constructs (figure 3(E)), the structures obtained by chaotic printing are repeatable. This will translate into the fact that the 'overall' functionality of the construct (i.e. the bacterial community or tissue construct to be fabricated) will be dictated by the architecture. Please note that, in figure 3(C), the STDs of constructs printed using four and six (or five and seven) elements are distinguishable (i.e. their overlapping is minimal).
In chaotic printing, the amount of inter-material area fabricated increases exponentially as a function of the number of elements used. We used image analysis techniques to quantify, at high magnification, the shared perimeters between red and black lamellae in figure 4(E) (cross-sectional cuts). Indeed, the amount of black-red perimeter at cross-sections grew exponentially with the increasing number of elements (figures 4(E), (G), (H)).
In addition, using computational strategies, we simulated the amount of surface area generated by the printing process in constructs printed using different KSM elements (figure 4(F); figure S5), and confirmed the data obtained experimentally (figures 4(E), (G)). For instance, when six elements were used to print, approximately 5 cm of shared linear interface were developed between the two materials (inks) at each cross-sectional plane (D = 0.1 cm); the ratio between the total amount of developed interface and the fiber perimeter was 16.91. This created a remarkably high density of shared interface (6.76 cm mm −2 ). Since the fiber exhibits the very same microstructure along its entire length ( figure 3(B)), the inter-material surface density generated inside the fiber could be determined as~0.067 m 2 cm −3 .
In tissue engineering scenarios, multi-material and multilayer structures are required to mimic the architecture and functionality of real tissues [15]. We also conducted chaotic bioprinting experiments in which we fabricated bands of C2C12 murine skeletal myoblasts within alginate fibers added with gelatin methacryloyl (GelMA) [53]. We present the crosssectional (figure 5(A)) and longitudinal view ( figure 5(B)) of a cell-laden alginate fiber lightly enriched with protein (GelMA) to favor cell attachment and eventual proliferation. myotubule development, started to be evident ( figure 5(F)). This illustrative experiment suggests the potential of chaotic bioprinting to produce living fibers that closely resemble the multilayered structure observed in mammalian tissues and display massive amount of material interface. As demonstrated before (previous subsection), using a chaotic printhead containing 6 KSM elements, 0.067 m 2 of material interface could be accommodated per 1 cm 3 of cell-laden fiber. Therefore,~67 m 2 l −1 of well-aligned inter-material surfaces could be fabricated within these living constructs. For comparison, human kidneys have an approximate volume of 150 cm 3 , and the total area of the capillaries of all the glomeruli within them is 0.6 m 2 (4 m 2 l −1 ) [54]. Printing at the flow rate of 1 ml min −1 , which is a typical printing flow rate used in our system, could generate this amount of area per unit of volume every minute. This massive amount of interface cannot be fabricated at this speed, precision, or resolution by any of the currently available micro-fabrication or printing platforms. It should be noted that flow rates of up to 3 ml min −1 could be conveniently achieved using our printing method.
Coupling of continuous chaotic printing with other fabrication techniques
The combination of continuous chaotic printing with other fabrication technologies (e.g. molding, electrospinning, or robotic assembly) will lead to the development of complex multi-scale architectures with high degrees of predictable external shapes and internal microstructure. Indeed, during printing, these fibers can be rearranged either into macrostructures or the individual fibers can be further reduced in diameter while preserving their lamellar architecture (figures 1(F), (G): figures 6(A)-(C)). We illustrate this by printing a long fiber of alginate containing multiple lamellae and then rearranging it into a block of several layers of fiber segments (figures 6(A)-(C)). The integration of this multi-material printhead into a 3D printer may thus enable rapid fabrication of multi-material (and/or multi-cellular) constructs that exhibit a great amount of material interface with a complex and tunable hierarchical architecture.
Also, chaotic printing may be coupled with other techniques for the production of nanofibers that contain finely controlled structures at the submicron scale (figures 6(D)-(G)). This may enable, for example, the fabrication of microsensors or microactuators with enormous surface area, for biological applications. As an example, we coupled a 2-element KSM printhead with an electrospinning device (figure 6(D)) to produce a mesh of nanofibers containing well-defined nanostructures composed of a pristine alginate ink (4% sodium alginate in water) and polyethylene oxide (7% PEO in water). Fibers produced by 3D chaotic printing were continuously solidified as they were generated by direct feeding into an electrospinning apparatus, further reducing the fiber mean diameter to <300 nm (figure 6(E)). In this hybrid fabrication strategy, fiber solidification occurs by rapid evaporation during electrospinning, instead of crosslinking by immersion in calcium chloride.
Remarkably, the structure produced by chaotic printing is preserved during electrospinning. Based on estimates of the shape of the Taylor cone (~0.18 µl) and the rate of injection of the materials (2-5 µl min −1 ), the average residence time in the Taylor cone is~2-5 s. The diffusion time may be roughly calculated as δ 2 /Dc, where δ is the average striation thickness (δ = 0.0128 when 3 KSM elements are used), and Dc is the diffusion coefficient. The diffusion coefficient of relatively small organic molecules in PEO has been reported as~10 8 [55]. Therefore, the actual diffusion coefficient of PEO in alginate should be much lower, at~10 9 . The diffusion time for this process should then be in the range of~1000 s, or much higher than the residence time. Since the residence time at the Taylor cone is about 3 orders of magnitude shorter than the diffusion time, electrospinning is expected to have a negligible effect on the structure obtained by 3D chaotic printing.
A close inspection using photo-induced force microscopy (PiFM) [56] revealed multilayered nanostructures with average striation thicknesses in the range of 75-100 nm (figures 6(F), (G); figure S6). These results demonstrated that the microstructure created by 3D chaotic printing can be further scaled down by 3 orders of magnitude using electrospinning.
Conclusion
In this study, we have presented continuous chaotic printing as a strategy that enables delicate control of the spatial microstructures (i.e. number of layers and average spacing between them) within a single 3D printed fiber. The key element of this technological platform is the use of an on-line static mixer in the printhead to provide a partial mixing of different materials as they are coextruded through the nozzle tip. In particular, we have adopted the KSM as the first model and have used it to fabricate, in a simple fashion, highly convoluted 3D structures within polymer composites in a continuous stream at high speeds (>1.0 meters of fiber/min). The diameter of the printing head and the number of mixing elements determine the number and thickness of internal lamellae produced according to a process of successive bifurcations that yields an exponential generation of intermaterial area. Illustratively, by using 6 internal elements, 64 lamellae of average widths of 15 µm can be generated in a 1 mm cross-section fiber, and an inter-material area of~67 m 2 l −1 can be achieved. These values for microstructure resolution, internal surface area density, and fabrication speed all exceed the capabilities of any of the currently available commercial microfabrication techniques (i.e. commercial 3D printers) for the creation of microstructure.
Our results demonstrate the unrivaled ability of chaotic printing to deploy cells within high SAV fibers. As available bioprinting and bioassembly technologies approach the resolution and SAV of chaotic printing, they also tend to require long fabrication times and mechatronically coordinated control systems [57,58]. In addition to multicellular, high SAV constructs, chaotic printing offers other breakthroughs in regards to currently available multimaterial printing technologies that, typically, require optimized inks that must be deployed under a specific and narrow range of conditions.
The fundamentals underlying chaotic printing are solid, as this type of printing relies on the use of chaotic flows to develop microstructure at an exponential rate in a deterministic manner. Indeed, we have shown that the microstructure resulting from the use of different numbers of KSM elements is amenable to rigorous modeling using CFD simulations, and the resemblance between our experimental and simulation results is remarkable. This precise predictability of the microstructures within a printed construct will greatly expand the application of 3D printing and the complexity of printed composites. A wide spectrum of microstructures can be designed and obtained using this technique. The adoption of different types of static mixer elements (i.e. SMX-Sultzer, and novel ad hoc designs), the use of more than two inks (or materials), the manipulation of the injection location, and the dynamic changes in the speed of each one of the injections, can open up possibilities for obtaining structures with various degrees of complexity in a wide range of scales. For instance, we showed that chaotic printing is a simple and versatile micro-and nano-fabrication platform and that, when coupled with other fabrication resources, can generate macro-structures with an enormous amount of interface between their constituent materials.
The fabrication method introduced here produces non-uniform, but reproducible and welldefined, lamellar patterns. We strongly believe that chaotic printing adds to the existing arsenal of tools currently available to the biofabrication community. Reproducible non-uniformity is precisely one of the strengths of this printing method (and a signature of Nature). We were bioinspired by some of the highly complex (and ordered) patterns that are produced in Nature, which are mostly non-uniform (i.e. vasculature, marble and other multilayered geological formations, multi-layered tissues, and real bacterial communities, among many others).
Remarkably, our variability is statistically robust. As we have shown (figures 3(E), (F)), the striation thickness distribution (STD) of the microstructure generated through chaotic printing is known, reproducible, can be calculated by simulations, and is even self-similar (meaning that the overall shape of the STDs obtained by printing with different numbers of elements closely resemble each other). All these properties are rooted in the fundamental physics of chaotic advection. Chaotic flows generate microstructure with a reproducible distribution of length scales [22,23].
In an exciting further development, we demonstrated that the output of a continuous 3D chaotic printhead can be fed into an electrospinning nozzle to create fiber meshes with lamellar nanostructures. By doing so, we have shown that the microstructure created by 3D chaotic printing can be further scaled down by three orders of magnitude. We envision numerous applications of continuous chaotic printing in biomedicine (i.e. bacterial and mammalian cell bioprinting), electronics (i.e. fabrication of highsensitivity multi-branch electrodes and supercapacitors), and materials science in general.
Experimental set-up
Our continuous chaotic printer consisted of a syringe pump loaded with two 10 ml disposable syringes, a cylindrical printhead containing from 2 to 7 KSM elements, and a flask containing 550 ml of 2% calcium chloride (Fermont, Productos Químicos Monterrey, Monterrey, NL, Mexico) ( figure 1(A)). Syringes were loaded with different inks (i.e. particle suspensions in pristine 2% alginate) and connected to one of the two inlet ports located in the lid of the printhead. Details of the geometry of the printer head and the internal KSM elements are shown in figures 1(B), (C). The fabrication of printer heads is described in a following subsection (KSM printheads). The syringe pump was set to operate at a flow rate of 0.8 to 1.5 ml min −1 . We conducted experiments using printheads with different internal diameters, in the range from 5.8 to 2 mm. The tube containing the KSM could be connected to a tip to further reduce the diameter of the final fiber. Tip reducers with an outlet diameter of 4, 2, and 1 mm were used in the experiments presented here (figures 1 (D), (F), (G)). The outlet of the tip was submerged in 2% calcium chloride to crosslink the extruded fibers at the outlet of the tube ( figure 1(D)).
KSM printheads
We fabricated our KSM printheads in house. KSM elements were designed using SolidWorks based on the optimum proportions reported in literature [59]. The sets of KSM elements were printed on a P3 Mini Multi Lens 3D printer (EnvisionTEC, Detroit, MI, USA) from the ABS Flex White material. We used a length-to-radius ratio of L:3R ( figure 1(B)). For example, for printheads with an internal diameter of 5.8 mm, the length and diameter of each separate KSM element were 8.7 mm and 5.8 mm, respectively. Sets of 2, 3, 4, 5, 6, and 7 KSM elements, attached to a tube cap, were fabricated to ensure a correct orientation of the ink inlet ports on the cap with respect to the first KSM ( figure 1(C)). The cap was designed so that each ink inlet was positioned on a different side of the first KSM element to maintain similar initial conditions in all experiments (figures 1(A), (C)).
Printing experiments and ink formulations
We used several different ink formulations for the experiments presented here. Inks consisted of particles suspended in 1% alginate or pristine alginate (CAS 9005-38-3, Sigma-Aldrich, St. Louis, MO, USA) solutions.
In a first set of experiments, we fabricated fibers loaded with either red or green fluorescent particles. Red and green fluorescent inks were prepared by suspending 1 part of commercial fluorescent particles (Fluor Green 5404 or Fluor Hot Pink 5407; Createx Colors; East Granby, CT, USA) in 9 parts of a 2% aqueous solution of sodium alginate (Sigma-Aldrich, St. Louis, MO, USA). The fluorescent particles were previously subjected to three cycles of washing, centrifugation, and decantation to remove surfactants present in the commercial preparation.
We also used chaotic printing to fabricate fibers containing an overall concentration of 0.5% graphite by co-extruding a suspension of 1.0% graphite in alginate solution (2%) and pristine alginate solution (2%) through printheads containing KSM elements. In addition, we produced control fibers by extruding pristine alginate (without graphite microparticles) through an empty tube, or by co-extruding two streams of ink containing 0.5% graphite microparticles hand-mixed in alginate.
In a third set of experiments, we used fluorescent inks based on suspensions of fluorescent E. coli bacteria. These fluorescent bacteria were engineered to produce either GFP or RFP. Bacterial inks were prepared by mixing either GFP-or RFP-expressing E. coli in 2% alginate solution supplemented with 2% Luria-Bertani (LB) broth (Sigma-Aldrich, St. Louis, MO, USA). For ink preparation, bacterial strains were cultivated for 48 h at 37 • C in LB media. Bacterial pellets, recovered by centrifugation, were washed and re-suspended twice in alginate-LB medium. The optical density of the re-suspended pellets was adjusted to 0.1 absorbance units before printing (approximately 5 × 10 8 CFU ml −1 ). Fibers were printed at a flow rate of 1.5 ml min −1 and cultured by immersion in LB media for 72 h. The number of viable cells present in the fibers at different times was determined by conventional platecounting methods. Briefly, fiber samples of 0.1 g were cultured in tubes containing LB media. The number of viable cells was determined by washing the 0.1 g samples in 1X phosphate-buffered saline (PBS) at pH 7.4 (Gibco, Carlsbad, CA, USA) to remove the bacteria accumulated in the LB media. Each sample was disaggregated and homogenized in 0.9 ml of PBS. The resultant bacterial suspensions were decimally diluted, seeded onto 1.5% LB-Agar (Sigma-Aldrich, St. Louis, MO, USA) plates, and incubated at 37 • C for 36 h.
We also bioprinted muscular murine cells (C2C12 cell line, ATCC CRL 1772) in 1% alginate inks supplemented with 3% GelMA added with a photoinitiator (0.067% LAP). To this purpose, a first ink contained only alginate and GelMA, while the second was cellladen with C2C12 cells at a concentration of 3 × 10 6 cell ml −1 . Cell laden fibers were obtained by immersion in alginate and then further crosslinked by exposure to UV light at λ = 400 nm for 30 s. The bioprinted and cell-laden fibers were immersed in DMEM culture medium (Gibco, Carlsbad, CA, USA) and incubated for 20 d at 37 • C in an 5% CO 2 atmosphere. Culture medium was renewed every 4th day during the culture period.
In a fifth set of experiments, we produced electrospun nanofiber mats by combining 3D chaotic printing in-line with electrospinning. First, we chaotically printed fibers by coextrusion of a pristine alginate ink (4% sodium alginate in water) and PEO (7% PEO in water) at a rate of 2-5 µl min −1 . The resulting PEOalginate fibers were then electrospun (in-line) to produce nanofiber mats.
Microscopy characterizations
The microstructure of the fibers produced by chaotic printing was analyzed by optical microscopy using an Axio Imager M2 microscope (Zeiss, Oberkochen, Germany) equipped with Colibri.2 led illumination and an Apotome.2 system (Zeiss, Oberkochen, Germany). Bright-field and fluorescence micrographs were used to document the lamellar structures within the longitudinal segments and cross-sections of the fibers. Wide-field images (up to 20 cm 2 ) were created using a stitching algorithm included as part of the microscope software (Axio Imager Software, Zeiss, Oberkochen, Germany). Fibers were frozen by sudden immersion in liquid nitrogen to facilitate sectioning while preserving the microstructure. The microstructure of the nanofibers produced by chaotic printing coupled with electrospinning was analyzed by atomic force microscopy (AFM) and PiFM, a nano-IR technique (figure S6).
Mechanical testing of graphite-alginate fibers
We used a universal test bench machine (Tinius Olsen h10kn, Horsham, PA, USA), with a load cell of 50 N at a rate of 35 mm min −1 , to evaluate the mechanical properties of alginate fibers containing 0.5% graphite particles and produced by different printing strategies. Specifically, we conducted tensile testing on fibers produced from a mixture of 0.5% graphite microparticles in alginate, generated either by handmixing and extrusion through an empty pipe (a control without lamellar structures) or by continuous chaotic printing using 2, 4, or 6 KSM elements. In these experiments, the gauge length between clamps was set to 25 mm. Stress-strain curves were obtained for each of the five different formulations. We determined the maximum tensile strength, strain at break, and Young modulus of the fibers from stress-strain data.
Computational simulations
The system was simulated using a finite element model (FEM) strategy in COMSOL Multiphysics 5. First, a 3D model was designed and solved, using laminar flow equations and a stationary solver, to determine the velocity field in the system for the various experimental scenarios explored. A fluid viscosity value of 1P and a density of 1000 kg m −3 were used. A time dependent solver was then used to track up to 10 5 massless particles using particle tracking for fluid flow physics in the previously solved stationary velocity field. The simulation was discretized with a reasonable fine mesh composed of free triangular elements. Mesh sensitivity studies were conducted to ensure the consistency of results. No-slip boundary conditions were imposed in the fluid flow simulation, while a freeze boundary condition was employed for the particle tracing module. The interface length was determined by importing the output results from the cross-section of the fibers (a set of points describing the interface position) into CorelDraw software X5 (Corel Corporation, Ottawa, Canada), drawing Bezier curves over the striations, and establishing the length of the curves using the software (figure S5). | 9,513 | 2020-03-30T00:00:00.000 | [
"Materials Science"
] |
Using the Monte-Carlo method to analyze experimental data and produce uncertainties and covariances.
. The production of useful and high-quality nuclear data requires measurements with high precision and extensive information on uncertainties and possible correlations. Analytical treatment of uncertainty propagation can become very tedious when dealing with a high number of parameters. Even worse, the production of a covariance matrix, usually needed in the evaluation process, will require lenghty and error-prone formulas. To work around these issues, we propose using random sampling techniques in the data analysis to obtain final values, uncertainties and covariances and for analyzing the sensitivity of the results to key parameters. We demonstrate this by one full analysis, one partial analysis and an analysis of the sensitivity to branching ratios in the case of (n,n’ γ ) cross section measurements.
Context
We present this paper in the context of the improvement of the evaluated nuclear data for application. This is done in order to perform better numerical simulations to optimize and predict performances, reactor control parameters and radiation safety related quantities. These databases still present large uncertainties, preventing calculations from reaching the required precision. Their improvement requires new measurements and better theoretical descriptions of involved reactions. Our primary work is focused on the measurement of (n, xnγ) cross-section [1][2][3] However, the general idea presented in this paper can be applied in many other contexts.
Motivation
When performing data analysis of experiments, many external parameters (detector efficiencies, distance of flight, ...) are involved, in order to process the raw data. Furthermore, all the steps of the data treatment (event selection, calibration, ...) may introduce uncertainties and correlations.
The usual method for combining and computing uncertainties is to use analytical developments based on the perturbation theory (e.g.
. This method * e-mail<EMAIL_ADDRESS>* * Currently, Univ. of Helsinki works well for simple cases, but with multiple parameters and sources of uncertainty, deriving the final total combined uncertainty can be long and complex. Furthermore, it strictly applies only to small deviations from the central values, which is not always the case. Implementing the formula into the analysis code becomes a tedious process where mistakes can appear, and the final uncertainty value will be wrong.
Finally, this method makes it difficult to calculate covariances, and the inclusion of some unusual form of uncertainty (asymmetric, non-Gaussian) is not directly possible.
The Monte Carlo method
To workaround all the issues presented above, we offer one possible solution: using random sampling (i.e. Monte Carlo) methods to obtain final values with their uncertainties and covariances.
The general principle is the following : Each parameter is randomly sampled according to its probability density function, all the parameters are used to compute one realization of the value of interest. This is done many times, as each computed value is stored in a stack. When all the iterations have been performed, the average value, standard deviation and (when applicable) covariance matrix are calculated from the stacked values. We note that this method allows us to turn off and on specific sources of uncertainty, by including or not the related parameters in the calculations. This makes the study of sensitivity easy, and a good way to check consistency.
The treatment of all sources of uncertainty is the same, without differentiation between the ones that come from systematic or statistical sources, as long as their probability distribution is well-defined. In fact, in this treatment, one should prefer the Type A vs. Type B split (rather than statistical, systematic) recommended by the Guide to the expression of uncertainty in measurement [4]. Particular attention should be given to choosing the probability density functions of the parameters variables. This being done, the reprocessing of data at each iteration with new parameter realizations will ensure that uncertainty from the analyzed data will be taken into account. The ability to examine the final distribution of results (and not just the central value and standard deviation)is a great benefit to check the consistency of all results given the chosen inputs.
Special attention needs to be applied to the convergence of the series. In order to produce accurate and stable final values, enough iterations are needed. The tricky part being that it is not evident a priori how many iterations is enough. One has to check the convergence of the result by inspection.
Thankfully, with modern computing infrastructures, large storage spaces and many computing units are easily available. This allows, within a reasonable time and use of computing resources, to perform a very large number of iterations.
Full Monte Carlo Analysis
The first example of applying such a technique we are presenting is the full Monte Carlo analysis. In this example, for each iteration, we will start the whole analysis pipeline (event selection, data projection, histogram fitting, ...) from scratch, with all external parameters (detector efficiency, target mass, ...) randomly sampled. The analysis is then conducted (automatically) to the end and produces one set of values (in our example, (n, n'γ) crosssections for several neutron energies) which is added to the stack. As the spectra are re-extracted and fitted at each iteration, the procedure uncertainty, as well as uncertainties coming from the data analysis, are automatically taken into account.
When the sufficient number of iterations for convergence has been achieved (in our case, 30 is enough to reach the limit uncertainty), we use the numpy package [5] for Python [6] to compute, from all the values, the central values, standard deviations and covariance matrices. Figure 1 shows an example : it is the 184 W(n, n'γ 111 keV ) cross-section obtained with a full Monte Carlo analysis [7]. Each gray line in the plot represents the result of one iteration, the black points, line and error bars are the final values.
Random sampling applied on intermediate results
In the cases of data that have already been analyzed (using a deterministic method), one can pick up the intermediate result files and, by applying random sampling methods, replay the last steps of the analysis many times. This is a great way to access covariance when the initial analysis did not. A similar method has been applied before in reference [8].
We applied this to 238 U(n, n'γ) data [9] and the Monte-Carlo method reproduces the central value from the analytical method with a very good agreement-figure 2. The obtained uncertainties are slightly different (but still well compatible with each other). In particular, the uncertainties obtained with the Monte Carlo method present more structure than those with the analytical method. This type of structure, which is related to the input data, is not accounted for by the analytical method.
Monte Carlo method for sensitivity analysis
In addition to analyzing data using the random sampling method, the Monte Carlo method can be used to study the sensitivity to parameters. By varying a parameter through random sampling, the variation in the final results provides a sensitivity coefficient from the mean values and standard deviations of the parameter and final result. We used this method to study the sensitivity of calculated (n, n'γ) cross-sections to transitions branching ratios. For this purpose, we performed, for each transition in the level scheme, one hundred calculations using TALYS-1.8 [10], where the intensity of the transition is varied around its reference value [11].
After all calculations are finished, the results are loaded into a Python [6] numpy array [5] and the module is used to calculate the relative standard deviation expected on a calculated cross-section per relative standard deviation (i.e. uncertainty) on a specific γ transition branching ratio in the level scheme.
The resulting sensitivity matrix shows that some transitions have a sensitivity of as high as 40 % to other transitions' branching ratios, as seen in figure 3. See another contribution in this conference for more details on the interpretation of the matrix [13].
Conclusion
We gave three examples of how random sampling can be used for data analysis or to study the sensitivity of calculations to parameters. It is a convenient solution to produce accurate uncertainty and covariance matrix without lengthy and error-prone uncertainty propagation. It is also a good tool to test, using calculations, the sensitivity of a value to external parameters. We believe it's a powerful technique that can be adapted in many situations and provide useful results. | 1,925 | 2023-01-01T00:00:00.000 | [
"Physics"
] |
Reasoning in Description Logic Ontologies for Privacy Management
This work is initially motivated by a privacy scenario in which the confidential information about persons or its properties formulated in description logic (DL) ontologies should be kept hidden. We investigate procedures to detect whether this confidential information can be disclosed in a certain situation by using DL formalisms. If it is the case that this information can be deduced from the ontologies, which implies certain privacy policies are not fulfilled, then one needs to consider methods to repair these ontologies in a minimal way such that the modified ontologies complies with the policies. However, privacy compliance itself is not enough if a possible attacker can also obtain relevant information from other sources, which together with the modified ontologies might violate the privacy policy. This article provides a summary of studies and results from Adrian Nuradiansyah’s Ph.D. dissertation that are corresponding to the addressed problem above with a special emphasis on the investigations on the worst-case complexities of those problems as well as the complexity of the procedures and algorithms solving the problems.
Introduction
Ontologies have been well-known for sharing common understanding of the structure of information in various application domains, e.g., Semantic Web [7] or medicine [3]. Some real examples of ontologies that have been published are [3,14]. Ontologies are also used to provide more semantics to formally describe a meaning of the data. In contrast to relational databases, information stored in ontologies are mainly assumed to be incomplete, which means that we can deduce additional facts from the ontologies, which are not explicitly stated there. Another difference with the database paradigm is that ontologies normally employ open-world assumption stating that knowledge that is not explicitly stored in the ontologies and cannot be inferred, is neither assumed to be true nor to be false.
In addition to semantic features, ontologies are formulated using languages that are much more expressive than database schema languages. One of the most common languages formalizing ontology is the web ontology language (OWL) 1 that has been standardized by W3C and frequently used in many application domains. This standardization results in a connection with a family of knowledge representation languages, called description logics (DLs) (see [1]), that are known as a fragment of first-order logic.
However, all these attracting features of ontologies are still prone to privacy violations. Assume that there is a DL ontology O containing the following information: The first axiom of the form of general concept inclusion (GCI) states that if someone is seen by an oncologist, then he/she suffers from cancer, while the second axiom says that the individual x, whose name is anonymous, is seen by an oncologist. Now, suppose there is a privacy policy P expressing that the public is not allowed to know any disease of any individual of the ontology O . It is also emphasized that O should satisfy P before O is published. However, the derivable fact "x suffers from cancer" from O means that O does not obey P.
In the doctoral dissertation [12], we covered three anticipation steps that need to be considered before publishing an ontology. First, we ask if the confidential information of individuals, such as their identity, is kept hidden or not w.r.t. an ontology. If it is the case that a privacy breach is detected, then we deal with the second step where one needs to repair the ontology such that the sensitive properties of individuals are not possible to be disclosed unauthorizedly, but at the same time the utility value of the ontology remains preserved. The last step is to guarantee if the solution for ontology repairs can avoid a linkage attack from possible attackers that have extra knowledge from different sources. In particular, this sort of attacks may occur when the combination of the repaired ontology with the background knowledge of the attackers still violates a privacy policy. Indeed, considering such malicious attacks for ontologies in the last step above has also been recently investigated in different contexts, such as in the area of linked data [6,9] or in the area of ontology-based data integration (OBDI) [4]. In the context of privacy in DL ontologies, to the best of our knowledge, the studies of preserving identity or reckoning such linkage attacks were still unexplored, whereas the studies of ontology repairs have been carried out by e.g., [10,16] with different settings and motivations. A summary of these new studies as well as their results in [12] is presented in this article.
Detecting Privacy Breaches
The work on detecting if there is a privacy leak occurring in an ontology has been investigated in many literatures, such as [5,15]. Most of the previous work designed their privacypreserving ontology system by conceiving the privacy policy as a property of individuals. A privacy policy is represented as a query and an ontology is said to be compliant with a privacy policy if none of the sensitive answers to the query representing the policy hold in the ontology. In other words, we cannot deduce that an individual (or a tuple of individuals) is a member of the sensitive answers to certain queries representing a privacy policy. In the medical example we had in the previous section, we may say that "suffering from cancer" is a property of the anonymous x that needs to be protected.
As argued by [8], one of the property of individuals that should be importantly protected is their identity, which has not been formally considered, at least, in the context of DL ontologies. Suppose that the ontology O is now extended to an ontology O 1 by adding the following axioms: Note that the first axiom, including a DL concept called (one-of) nominals, tells that people suffering from cancer are only John and Linda. Now, if we do reasoning over O 1 , then a consequence, such as be inferred, stating that the anonymous x is either John or Linda. In fact, the only male in that set of individuals is John, and thus we can infer that x is actually John w.r.t. O 1 , which means that the identity of x is now revealed.
To this end, we introduced the identity problem asking whether two individuals a, b are equal w.r.t. an ontology in general. We show that this problem is trivial for all DLs that are fragments of first-order logic without equality since we can always deduce that no equality between two individuals w.r.t. ontologies formulated in those DLs. Then, we introduced DLs with equality power, which DLs with nominals, number restrictions, functional roles, or functional dependencies belong to, and in which the identity problem is non-trivial. We showed that for these DLs the identity problem has the same complexity as the instance problem, which is the problem asking if an individual a is an instance of a DL concept C w.r.t. an ontology O.
Theorem 1 For all DLs with equality power, the identity problem can be reduced to the instance problem.
We extended the identity problem to a role-based access control setting and to a setting where an attacker does not want to know the exact identity of an anonymous individual x and it is sufficient for him to deduce that the identity of x belongs to a set of known individuals of cardinality smaller than k. We showed that problems in both settings can be reduced to the instance problem for DLs with equality power.
Learning the problems above, we see that the identity problem and its extensions can eventually be reduced to classical reasoning problems in DLs. This means that in the following sections, we do not need to specifically consider the privacy policy to be written as queries asking for identity, but also can be standard queries, such as subsumption queries, instance queries, or even conjunctive queries.
Repairing Ontologies
If one can derive sensitive information about individuals from an ontology O , then it makes sense to repair O such that the secret consequences are no longer entailed by the ontology repair O ′ . We additionally require that O ′ should be implied by O . Such a repair is optimal if there is no repair O ′′ that strictly implies O ′ . However, we show that optimal repairs need not always exists.
In DL communities, initial motivations for repairing ontologies came from a question on why a consequence computed by a DL reasoner actually follows from an ontology. This initiated the work on computing so-called justifications in [2,13], i.e., minimal subsets of the ontology that have the consequence in question. Considering all justifications, one may construct a hitting set of the justifications, i.e., a set of axioms containing at least one axiom from each justification. Removing minimal hitting sets yields maximum subsets of the ontology that do not entail the consequence.
However, the main problem with this approach is that removing complete axioms may also eliminate consequences that are actually wanted. Instead, we proposed to replace axioms directly by weaker ones, i.e., axioms that have less consequences . This motivates us to introduce the notion of gentle repair [12]. In this approach, we generally weaken one axiom from each justification such that the modified justifications no longer have the consequence.
As an illustration, we define an ontology O 2 consisting of the following axioms: Using the same policy P , we can see that O 2 does not comply with P . If we are only allowed to modify the second axiom and this modification is based on the axiom removal, then the modified ontology is compliant with P . This implies that consequences, such as "every worker of a nuclear company is seen by a doctor" or "every worker of a nuclear company is seen by someone working in an oncology department" are gone. Suppose that such consequences are useful. Thus, to retain them, while achieving compliance property at the same time, we weaken the second axiom to so that the modified ontology is now being compliant with P without losing the wanted consequences.
The next theorem states two important results we have in our gentle repair framework.
Theorem 2
The following results hold in our gentle repair framework: 1. The gentle repair approach needs to be iterated, 2. At most exponentially many iterations are needed to reach a gentle repair.
What it means by the first result is that applying this approach once does not necessarily remove the unwanted consequence.
Instead of allowing arbitrary ways to weaken axioms, we introduce the notion of weakening relation ≻ , which is formally a binary relation such that for each ( , ) ∈≻ , the axiom is weaker than . Intuitively, the larger the weakening relation is, the smaller are weakening steps needed to reach a gentle repair, which means that more iterations are performed. Moreover, several conditions of weakening relations are introduced to equip the gentle repair approach with better algorithmic properties that can, for instance, guarantee linear or polynomial number of iterations.
In a situation where we have a justification J, an (unwanted) consequence , and an axiom ∈ J , if all conditions of such weakening relations ≻ are satisfied, then by a search along the one-step relation one can find a maximally strong weakening of , which is an axiom such that ≻ and (J ⧵ { }) ∪ { } ̸ ⊧ , and there is no stronger axiom , where ≻ ≻ , with this property. Intuitively, using a maximally strong weakening, the ontology is changed in a minimal way.
For applying the above repair framework, we focused to the DL EL and introduced specific weakening relations for it, defined based on generalizing the right-hand side of GCIs semantically ( ≻ sub ) and syntactically ( ≻ syn ). Considering O 2 as an EL ontology, the way we weaken the axiom in O 2 illustrated above is based on the use of the weakening relation ≻ sub . If we apply ≻ syn to the second axiom in O 2 , then we cannot split existential restrictions as what ≻ sub can do, but we may have weaker axioms such as We showed that ≻ syn ⊆≻ sub , which means that it takes larger weakening steps to reach a gentle repair using ≻ syn . Complexity wise, ≻ syn behaves better than ≻ sub . For instance, the length of one-step weakening chain for each , i.e., ≻ syn 1 1 ≻ syn 1 2 ≻ syn 1 … , is linearly bounded in the size of . In contrast, one can only provide a non-elementary bound for ≻ sub 1 . Furthermore, we showed that maximally strongest weakenings can be effectively computed using both weakening relations in EL . In particular, one (all) maximally strongest weakening(s) can be computed in polynomial (exponential) time w.r.t. ≻ syn .
Avoiding Linkage Attacks
The framework for ontology repair we described above results in a modified ontology that is policy-compliant. However, this property is still not enough if there is an attacker's knowledge that is different with our policy compliant ontology and it turns out that the combination of these two knowledge is again not compliant with the policy.
To address the issue above, we considered the policysafety property adopted from [9], which requires that the combination of the published ontology with any other compliant ontology is again compliant w.r.t. the policy. Now, consider that this setting is applied to a specific type of EL ontologies, called EL instance stores, that has no individual relationships [11], no GCIs, and only contains instance axioms C(a). It means that all the information about an individual a is given by an EL concept C. Then, a policy is given by a set of instance queries, i.e., by EL concepts D 1 , … , D n .
Using another medical example, suppose that the policy only consists of a concept which says that one should not able to find out who are the patients that are seen by a doctor working in an oncology department. Moreover, it is published that John is an instance of the concept which is compliant with the policy since C ⋢ D . However, it is not safe if there is an attacker knowing that John is a patient since if this knowledge is combined with C, then C ⊓ ⊑ D . In contrast, the concept where C ⊑ C ′ , is a safe generalization of C, as shown in [12], in particular when the attacker's knowledge is encoded as an EL concept. The generalization process illustrated above leads us to the optimality property asking if the modified ontology or concept is compliant (safe) w.r.t. a policy and changes the original ontology in a minimal way. To this end, we developed algorithms for computing optimal compliant (safe) generalizations of EL concepts w.r.t. EL policies. When dealing with different expressiveness of attackers' knowledge, such algorithms may have different complexity results as stated in the following theorem. Likewise, if we view optimality as a decision problem, then a coNP upper bound is given for both compliance and
Theorem 3 Given an EL concept C and an
safety when the attacker's knowledge is written as an EL concept, but when it is modeled as an FLE concept, this optimality problem becomes PTime. We further investigated the case where both published ontology and attackers' knowledge may contain individual relationships. If the policy is an instance query, then the complexities of the corresponding compliance, safety, and optimality problems remain the same as in our previous setting for the case of instance stores. If we upgrade the policy form to be a conjunctive query, then most of the complexity results we have lie on the second or the third level of the polynomial hierarchy.
Conclusions
We have seen that for each anticipation step discussed in Sect. 1, we contributed in introducing relevant reasoning problems in DLs and presenting frameworks, inference procedures and algorithms that provide automated support for dealing with those problems. In particular, this work is coupled with investigations on the complexity of the procedures and algorithms as well as the worst-case complexities of the problems solved by them.
Obviously, this work has many directions to be explored. For instance, probabilistic assumptions are taken into account to annotate ontology axioms and equalities between individuals that only hold with a certain probability may be derived. Additionally, avoiding ontologies from attackers' knowledge that are complete (closed world) or featured with integrity constraints is also a realistic crucial challenge in privacy issues nowadays. adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creat iveco mmons .org/licen ses/by/4.0/. | 4,030.6 | 2020-07-04T00:00:00.000 | [
"Computer Science"
] |
Dark energy loopholes some time after GW170817
We revisit the constraints on scalar tensor theories of modified gravity following the purge of GW170817. We pay particular attention to dynamical loopholes where the anomalous speed of propagation of the gravitational wave can vanish on-shell, when we impose the dynamical field equations. By working in the effective field theory formalism we are able to improve on previous analyses, scanning a much wider class of theories, including Beyond Horndeski and DHOST. Furthermore, the formalism is well adapted to consider the effect of inhomogeneous perturbations, where we explicitly take into account the fact that the galactic overdensities are pressureless to leading order.
Introduction
Multi-messenger astronomy involving coordinated signals of electromagnetic radiation and gravitational waves provide a powerful tool for constraining fundamental theories of gravity. This was demonstrated emphatically by the merger of two neutrons stars at redshift z ∼ 0.01, detected through a gravitational wave GW170817 and a burst of gamma rays GRB170817A [1][2][3][4][5]. These effectively simultaneous twin observations constrained the speed of the gravitational wave through the cosmological medium to be identical to the speed of light to an accuracy of one part in a quadrillion! This immediately led to a slew of papers (see, for example, [6][7][8][9][10][11][12][13][14][15]) examining the implications for modified theories of gravity, especially those that are designed to reproduce the effects of dark energy through their long range modifications (for relevant reviews of modified gravity, see [16,17]). The multi-messenger probe has proven particularly adept at constraining scalar tensor theories, such as Horndeski [18,19] or Beyond Horndeski [20,21], where the gravitational wave will generically propagate through the cosmological background at a different speed to its electromagnetic counterpart. This happens even though the gravitational wave travels through regions of higher density where screening mechanisms are expected to operate [22][23][24].
Modified gravity took a hit after LIGO/Virgo announced this result. Large classes of scalartensor theory were declared incompatible with the data from the neutron star event [6][7][8][9], or else irrelevant to gravity at sufficiently large distances. It is important to realise that these papers did not declare all modified gravity models incompatible. They still left room for scalars conformally coupled to curvature and some models with derivative couplings, such as Kinetic Gravity Braiding [25]. Furthermore, subsequent investigations identified possible loopholes in the original analyses. For example, in [26], it was noted that the frequency of the gravitational wave was very close to the effective field theory cut-off of the relevant dark energy models, raising doubts as to whether or not we are entitled to constrain these theories without further knowledge of their ultra-violet corrections. In [15], a dynamical loophole was considered. It was shown that the anomalous speed of propagation did not need to vanish identically, rather it could vanish on-shell, on account of the scalar equation of motion. At the level of the homogeneous background, a candidate theory from the Horndeski class was shown to exploit exactly this behaviour. A preliminary analysis suggested that this loophole would not survive the consideration of inhomogeneous cosmological perturbations, although the analysis did not exploit all of the dynamical data.
In this paper, we revisit the idea of dynamical loopholes using the effective theory of dark energy [27,28]. This has the advantage that we can investigate the existence of loopholes in a much broader class of models including Beyond Horndeski 1 [20,21] and so-called DHOST models [30][31][32]. It also lends itself to a careful study of linearised perturbations, both homogeneous and inhomogeneous. Previous analyses did not consider the nature of the inhomogeneous source. Here we assume it is non-relativistic, consistent with galactic overdensities. This means we have a vanishing inhomogeneity in the pressure perturbation, and another dynamical zero in the metric equations of motion, in addition to the scalar equation of motion.
With all of these new ingredients, we perform a complete analysis of the full class of theories from Horndeski to Beyond Horndeski and DHOST. At the level of a homogeneous background, we recover the loophole theory found in [15], along with some generalisations. We then consider this extended class of loopholes in the presence of inhomogeneous perturbations, once again using dynamical knowledge of a vanishing scalar equation of motion (although now including inhomogeneities) and also the vanishing pressure perturbation. At the risk of ruining the punchline, we refer the reader to the conclusions for the final outcome of this analysis.
The rest of the paper is organised as follows. In section 2 we review the effective field theory (EFT) approach to dark energy, following [27,28]. We review the generic approach of [6][7][8][9] in section 3 before describing the loophole found in [15], along with generalisations, in section 4. In section 4.4 we study the impact of inhomogeneities, fully taking into account the pressureless nature of the inhomogeneous perturbations. Finally, in section 5, we conclude.
EFT of dark energy
The current expansion of the universe can be explained assuming the existence of a scalar field φ whose energy density fills the universe. The time-dependent homogeneous solutionφ(t) generates a preferred foliation that slices the spacetime into a Friedmann Robertson Walker (FRW) metric. To work out the predictions for large scale structure surveys, we study the perturbations of φ around a background that non-linearly realises time diffeomorphism invariance (time diffs). Instead of focussing on a particular theory of dark energy, we use an effective approach, writing down the most general Lagrangian for φ and then expanding it around its time dependent background. It is therefore more convenient to 1 Beyond Horndeski loopholes were briefly considered in [15] but were prematurely ruled out using constraints coming from the decay of the gravitational wave, drawn directly from [29]. However, these constraints only apply to the residual class of theories left after the analyses of [6][7][8][9] and not the dynamical loopholes.
develop a geometrical approach to expand and categorise perturbations rather than expand the scalar field φ =φ + δφ. In the following we outline this procedure, following references [27,28].
Dark energy action in unitary gauge
We set the gauge by choosing the time coordinate to be a function of φ, t = t(φ), in such a way that the dark energy field sits at its unperturbed value everywhere. This is the so called unitary gauge, in which the perturbations of the field are eaten by the metric. The constant-time slices generated by φ foliate the spacetime, making the action for perturbations no longer invariant under time diffs. It follows that, beside the genuinely 4-d covariant terms such as the Ricci scalar, one can also include objects which are constructed from the foliation. For instance, now we can separately consider the projection of tensors along the orthogonal and parallel directions to the surface. This is done by contracting tensors with the normal vector n µ or with h µν the 3-metric, respectively given by All geometrical objects built from the foliation can be defined using these two projectors. Examples are the extrinsic curvature tensor K µν = h ρ µ h σ ν ∇ ρ n σ or the intrinsic curvature of the 3-dimensional surfaces . The most general effective action is constructed by writing down all possible operators that are compatible with the remaining symmetries. The reduced symmetry pattern of the system allows many terms in the action. They can be categorised as follows: i. Terms which are invariant under all diffs: these are just polynomials of the 4-dimensional Riemann tensor R µνρσ and of its covariant derivatives ∇ µ , contracted to give a scalar.
ii. Terms contracted with n µ . Since in unitary gauge n µ ∝ δ 0 µ , in every tensor we can leave free upper 0 indexes. For instance, we can use the 00 component of the metric or of the Ricci tensor. It is easy to check that these are scalars under spatial diffs.
iii. Terms derived from the foliation. This includes terms like D µ , the derivatives of the induced 3metric or the Riemann tensor (3) R µνρσ that characterises the 3-d slices intrinsically; but one can use also objects that tell us how the hyper-surfaces are embedded in the 4-dimensional spacetime. These are the extrinsic curvature K µν and the acceleration vector A µ = n ρ ∇ ρ n µ . The action contains all the possible scalars made by contracting these quantities.
iv. Since time diffs are broken all the couplings in front of the operators can be functions of time.
The most general action constructed using these ingredients takes the form where the contractions are done with the metric g µν , using the 3-metric does not lead to new interactions. Since the above action contains an infinite number of operators, we organise them in a derivative expansion: at lowest order in derivatives acting on the metric there are only polynomials in g 00 , at first order we can use the trace of the extrinsic curvature K. The leading effective action, up to second order in derivatives is In the above action, δg 00 ≡ 1 + g 00 , δK µ ν ≡ K µ ν − Hh µ ν , with H ≡ȧ/a being the Hubble rate, and δK its trace. While M 2 * is a constant, the other parameters are slowly-varying time-dependent functions. The first line of eq. (2.3) consists of all the operators that start at the background level. The Friedmann equation and requirement that the dark energy stress-energy tensor is covariantly conserved force the parameters f , Λ and c to satisfy where ρ m is the matter density. Note that if f = constant and c = 0 the background dynamics is trivially equivalent the ΛCDM model and dark energy is driven by a cosmological constant, as opposed to some large distance modification of gravity. To properly explore modified gravity we really ought to deviate from this trivial case.
In the second line of eq. (2.3) there are those operators that start at quadratic order in perturbations, while the last line contains cubic order operators. We did not attempt to write all possible cubic operators -only those that affect the graviton propagation speed. The action (2.3) describes both Horndeski and beyond-Horndeski theories. In principle. the parameters m i and m i are totally unconstrained save for the fact that the action must be real (in particular, positive powers of m i and m i can have either sign). Later on we will consider corrections that correspond to so called DHOST theories [30][31][32].
Dark energy immediately after GW170817
The action (2.3) describes the perturbations of the dark energy field and also gravity. Therefore, it captures the motion of gravitational waves (GWs) travelling across the universe. These may be affected by the presence of the time-dependent foliation that breaks diffs (and hence Lorentz invariance on smaller scales) by changing the speed of propagation or even allowing them to decay into dark energy excitations [29]: the situation is analogous to light travelling in a medium. Recently, the twin observation of GWs (GW170817) and its electromagnetic counterpart (GRB170817A ) coming from a neutron star merger has put severe constraints on the speed of propagation of GWs relative to light c T /c light = 1 ± O(1) × 10 −15 . Note that in General Relativity, c T,GR /c light = 1 although generic modifications yield a deviation of this result. Therefore, this event has been used in [6][7][8][9] to rule out many of the operators of the dark energy action (2.3).
Let us explore the argument of [6] in detail. We start by expanding the action in scalar and tensor perturbations. To expand the metric we use the ADM decomposition, so the line element is parametrised as ds 2 = −N 2 dt 2 + h ij (dx i + N i dt)(dx j + N j dt), and we work in the Newtonian Gauge, defined by Time diffs are restored by defining the Goldstone boson π(x, t). Since we are interested in studying the dynamics of GWs, we focus on the part of the action that is at least quadratic in the graviton perturbations. We also keep cubic operators that contain two gravitons and one scalar perturbation. This is because scalar perturbations with very long wavelength are seen by the astrophysical GWs (whose wavelength is ∼ 10 3 km) as a local change of the FRW background, and the value of the couplings in the dark energy action depend on the particular background for the effective theory. Therefore, an astrophysical GW travelling in toward us experiences many different FRW histories. The full dark energy action, expanded at second order in γ ij and up to first order in scalar perturbations is given by eq.(A.16). Here we just report those terms that contribute to c T : To compare this result with the predictions of General Relativity, it is useful to define the parameter where the last equality holds up to first order in perturbations. We see immediately that at background level the requirement α T = 0 is achieved by setting m 4 = 0. However, to robustly set c T to coincide with the prediction of GR, we should also set to zero those couplings whose operators are turned on by a long scalar perturbation, i.e. we set m 2 4 = m 2 5 and m 6 = m 6 = 0. This greatly reduces the phase space of the available dark energy theories.
Similar results can be derived if we work in the covariant formalism [9]. After having restored the scalar field φ, the Horndeski and Beyond-Horndeski actions are given by where X ≡ ∂ µ φ∂ µ φ and In this formalism the quadratic action for the graviton on a homogeneous background is given by [33] S (2) Therefore the parameter α T can be expressed as (3.14) Requiring α T = 0 for any background means that it should vanish independently of the values of H, H,φ andφ. This implies G 5,X = 0, F 5 = 0 and 2G 4,X − XF 4 + G 5,φ = 0. Therefore, G 5 can only be a function of φ, the Beyond Horndeski term F 5 must be absent and F 4 is fixed in terms of the derivatives of G 4 and G 5 .
Looking for a dynamical loophole
Recently, Ref. [15] uncovered a loophole in the argument of the previous section that could potentially rescue an entire class of theories that had been previously discarded. The idea was that gravitational waves must travel at the speed of light only in physical systems that satisfy the classical equations of motion. The authors of [15] worked in the covariant formalism and used the homogeneous scalar equation of motion to expressφ in terms ofφ andḢ, so that they could substitute it in eq. (3.14). This opened up a new region in the parameters space of potentially viable theories. We begin by reviewing the loophole identified in [15], To this end, consider a theory described by the action (3.6), with potentials given by Here, κ G , µ, Λ and ν are constants, and the prime denotes a derivative w.r.t. φ. Using eq. (3.14) one can verify that where ε φ is the homogeneous scalar equation of motion, taken to vanish on-shell This means the gravitational wave constraints are satisfied dynamically, at least for homogeneous configurations, thereby evading some of the conclusions drawn in [6][7][8][9]. However, the preliminary analysis of [15] also suggested that this theory would not survive the necessary constraints in the presence of inhomogeneous perturbations. But the analysis did not make use of the available dynamical data. In particular it did not exploit the vanishing of the inhomogeneous pressure perturbation as a possible means of escape.
As we will show below, the class of homogeneous theories that evade the LIGO/Virgo constraints is also somewhat broader than this one particular example. Indeed, Beyond Horndeski loopholes were prematurely ruled out in [15] using decay constraints that did not directly apply (see footnote 1), while DHOST theories were not investigated in any capacity. All this considered, we expand our analysis and ask again whether there are any theories that dynamically evade all of the relevant constraints even in the presence of linear inhomogeneities.
In the EFT formalism the background evolution is already set by the condition (2.6) and the perturbations of the scalar field are eaten by the metric. Working in the same setup as section 2, the only propagating scalar field is the Goldstone boson of the broken time diffs, π. Our approach will be to use the free equation of motion for the π perturbation to relax the constraints on the various dark energy theories. To do that we need to expand the effective action (2.3) at quadratic order, focussing only on operators with at least one π. Doing that we find (see appendix A for further details) (4.5)
Homogeneous configurations
We begin by focusing on homogeneous configurations by neglecting any spatial gradients -we will switch them back on in section 4.4. We can easily obtain the homogeneous free equation of motion for π, although it is convenient to express this and other dynamical quantities in terms of gauge invariants. According to how the homogeneous perturbations Φ, Ψ and π transform under a time diff, we can define the following gauge invariant variables, The free equation of motion of π becomes 6H c + 2m 4 2 + 2ċ + 3Ḣ m 3 3 − M 2 * ḟ + 4(m 4 4 ) X + 2 c + 2m 4 2 Ẋ If we solve for Y and plug the solution into the expression of α T , eq. (3.5), we get , (4.9) , (4.10) . The above three equations identify a new class of dark energy theories that evade the LIGO/Virgo constraints on homogeneous backgrounds. To ensure that our calculations are correct let us check whether or not the rescued theory of [15] satisfies the conditions (4.12) and (4.13). Using the results of [28] it is easy to write the rescued theory in the EFT language, (4.14) First of all notice that the couplings m 2 4 and m 2 4 are not identically zero, but proportional to the φ equation of motion (see eq. (4.4)) and so vanishing on shell. We can also check that the parameters of eq. (4.14) satisfy eq. (4.12). Finally, eq. (4.13) is satisfied, again, using the φ equation of motion (4.14), completing the check. It is important to point out that eq (4.12) and eq (4.13) yield a broader class of dark energy theories to the one proposed in [15] . This is because it also retains a rescued class of Beyond Horndeski theories prematurely ruled out in [15].
What about DHOST?
DHOST theories were not considered in any capacity in [15]. However, the EFT formalism is well adapted to include them. To this end we supplement the EFT action (2.3) with the following operators, corresponding to DHOST corrections [30][31][32]: where V ≡ (Ṅ − N i ∂ i N )/N and a i ≡ ∂ i N/N . The action S ≡ S EF T + S DHOST , contains all the possible independent operators that start at quadratic level, with at most two derivatives acting on the metric, [30,34]. Since S DHOST contains operators with one more derivative acting on the fields, in general they propagate more than one scalar degree of freedom. Therefore, one has to impose the following degeneracy conditions, that ensure the theory describes only one scalar degree of freedom [30][31][32], Expanding the action (4.15) in perturbations, we note that these new operators do not lead to changes in the expression for the speed of gravitational waves, c T . However, they do contribute to the free scalar equation of motion and can therefore change the expression for α T , indirectly, once we have evaluated it on-shell. To investigate this further, let us expand the action to second order. Since we are interested only in the equation of motion for π, we consider only those terms which are proportional to π or its derivatives. This gives, (4.17) Again, we compute the homogeneous equation of motion of π and we express it in terms of the gauge invariant quantities X and Y . After solving for Y , and plugging the solution into eq. (3.5), we obtain, The functions α ... X , αẌ , αẊ , α X , αẎ are given in app. B. Here we just use the fact that α ... X and αẌ both vanish if and only if m 6 β 1 = 0. If we want to have a non-trivial DHOST theory with β 1 = 0 we must therefore impose m 6 = 0. Inspecting equations (B.2) to (B.7), it is easy to see that this choice implies m 6 = 0 and m 2 5 = m 2 4 . In other words, we return to the results of sec. 3. We conclude that DHOST theories do not lead to any new results -we need to switch them off in order to escape the conclusions of [6].
Adding inhomogeneities -an intuitive argument
We now ask if the class of theories of sec. 4.1 continue to satisfy the LIGO/Virgo constraints, even in the presence of inhomogeneous perturbations. As a warm up to the main event we provide an intuitive argument to illustrate why this is unlikely. However, we emphasize that this argument will not make use of all of the dynamical information and we stress the importance of a more detailed analysis carried out in the next section.
To develop our intuition, note that an inhomogeneous perturbation introduces local curvature and so locally, the universe looks like a curved FRW universe. We can then perform the same analysis of sec. 4.1 expanding the action (2.3) around the following metric, (4.19) with k = 0. The extrinsic curvature K µν is not affected by k. However, on the background level, the 3-dimensional Ricci tensor goes as where we recall that h µν is the induced metric on the spatial slice. Glancing at the form of eq. (2.3), this implies that only the operators proportional to m 2 4 and m 6 are affected by the change of the background. Let us focus on the latter operator. Because of the curved background it now starts at quadratic order in perturbations, Again, comparing with eq. (2.3), we clearly see that the effect of m 6 , in the presence of curvature, is to shift the coefficient m 3 3 → m 3 3 − 2k/a 2 m 6 . As a consequence it changes the expression (3.5) for α T , and, in particular, the form of αẎ . Eq. (4.9) now becomes This must vanish for any value of k. This adds an additional requirement, forcing m 6 which in turns forces m 6 = 0 and m 2 5 = m 2 4 . Once again we return to the results of [6] and the loophole has been closed, at least intuitively.
Adding inhomogeneities -a detailed argument
We are now ready to perform a more detailed and careful analysis of the LIGO/Virgo constraints in the presence of linearised inhomogeneities. In other words, we restore the spatial gradients. Crucially, however, there is another consideration that we will take into account that could help in relaxing the very stringent bounds on the EFT coefficients: the fact that matter perturbations are pressureless. This is equivalent to saying that the diagonal part of Einstein's Equations is equal to the stress energy tensor of the dark energy field, or in other words, we demand that the variation of S EF T w.r.t. Ψ is zero.
The expression we obtain is quite complicated but can be simplified by exploiting the huge separation of scales in the problem at hand. We are interested in scalar perturbations with the typical size of a galaxy, at the the same time, GWs have a typical length of λ GW ∼ 10 3 km. Therefore we can exploit the fact that k GW ∼ λ −1 GW ≫ k s ∼ r −1 gal ≫ H 0 . The expression for α T still takes the form of eq. (3.5). As our interest is only in those theories that exploited a dynamical loophole at the homogeneous level, we parametrise them completely in terms of Λ, c, m 4 and m 6 by using eqs. (4.12) and (4.13). We will assume this is the case for the remainder of this section and evaluate α T accordingly.
For these loophole theories, we now compute the equation of motion for π, including spatial gradients (C.1), and the Ψ equation of motion, which is also assumed to vanish in the absence of an inhomogeneous pressure perturbation. We can simplify the latter by neglecting all the terms proportional to H and keeping only those with the lowest number of derivatives, consistent with the hierarchy of scales described above. This gives (4.23) Assumingḟ = 0, we solve the scalar equation of motion for Y and eq. (4.23) forΨ and substitute back into the appropriate expression for α T . Since we are restricting to loophole theories, we expect this to vanish up to spatial gradients, which is indeed the case. Specifically, we get, , (4.28) In deriving the above expressions we have also used the Friedmann equation (2.5) to express Λ in terms of c and ρ m , the matter density. We clearly see that to robustly set α T to zero we must demand that either m 4 or m 6 are vanishing. However, from eq. (4.29) m 4 = 0 impliesḟ = 0 which contradicts our earlier assumption, so we must have m 6 = 0. This now implies, via eqs. (4.12)-(4.13), that m 6 = 0 and m 4 = m 5 , thus going back to the results of [6] and eliminating the loophole.
The only possible way out is if f = const (we set it to unity for simplicity). In this situation the square bracket in the Ψ equation of motion eq. (4.23) is identically zero. Instead of solving forΨ as we did previously, we now solve for Ψ and plug that into α T , arriving at the following expression, where , (4.32) , (4.33) . (4.34) We can achieve α T = 0 in one of two ways. The first is to set m 6 = 0, leading us back to the results of [6] and closing the loophole. The other possibility is to set m 4 = 0 and c = 0. However, as stated in the text after eq. 2.6, this corresponds to dark energy driven by a cosmological constant, as in the standard cosmological model, and not some large distance modification of gravity. Although this set up would differ from the standard ΛCDM scenario at the level of perturbations, we don't think the scalar has any right to be called a genuine dark energy field.
Conclusions
After an exhaustive treatment taking into account all of the relevant dynamical information the conclusion is clear: there are no dark energy loopholes that survive the necessary constraints up to and including leading order cosmological perturbations. So, a decade and a half since one of us reviewed the state of play [35], what now for dark energy? Within the playground of scalar tensor theories, there is very little room for manoeuvre. Concerns about the low cut off notwithstanding [26], there are very few models of genuine cosmological interest that survive the purge carried out by LIGO and Virgo. One class of models that still survive are the so-called KGB set-ups [25]. Although there are some special cases, such as the cubic galileon, that are ruled out by other cosmological constraints [36], there is sufficient freedom in the KGB potentials to still extract genuine dark energy candidates [37,38]. Beyond that there is less room for optimism, Take, for example, the chameleon models [39,40]. Although these are compatible with the gravitational wave data, the chameleon by itself cannot be considered a bonafide dark energy field as other constraints on its mass render it irrelevant on Hubble scales [41]. In such scenarios, dark energy must be driven by a cosmological constant and the case for modified gravity with chameleons is less strong. Similarly, we could consider generic Horndeski theories where the higher order operators are suppressed by scales larger than Hubble, and we would have no concerns with the speed of propagation of gravitational waves. However, once again, in such a set up, dark energy must be driven by a cosmological constant and the case of modified gravity is weakened.
Perhaps this is pointing us towards a much less flamboyant origin of dark energy, corresponding to a cosmological constant, or a weakly coupled quintessence field in slow roll. However, even these scenarios are now being challenged by the so-called swampland criteria [42,43]. There should be some caution here. Contrary to the claim made in [44] a slowly rolling quintessence field will not fall victim to the distance conjecture after an order one excursion in units of the Planck mass, but after O(100). This is because the heavy moduli initially have their masses set by some high ultra-violet scale, M U V which is then modified as M U V e −O(1)∆φ/Mp as the dark energy field rolls a distance ∆φ. To bring their scale down to Hubble today, and contaminate quintessence, we need a very large exponential suppression and a large number of Planck excursions. In our opinion, modelling dark energy within string theory is now more important than ever, especially in light of the constraints coming from LIGO and Virgo. For example, some attempts to incorporate dark energy in supergravity can be found in [45][46][47] and more recently in [48,49].
Acknowledgments
We would like to thanks Paolo Creminelli, Giovani Tambalo, Vicharit Yingcharoenrat and Alex Vikman for useful discussions. AP was funded by a Leverhulme Research Project Grant. LB, EC and AP were funded by an STFC Consolidated Grant Number ST/P000703/1.
A Expansion of the effective action
As our aim is to obtain the equation of motion of π and derive the action to first order in scalar and second order in tensor perturbations, we ignore all the terms quadratic in Φ and Ψ. To restore the π field one needs to perform the time diff Under this transformation, any function of time f changes up to second order as while tensors transform as from which we can derive the transformations of the different ADM components of the metric. The metric g µν is decomposed as Making use of these relations, together with the derivative transformations, we can derive the transformations of the extrinsic curvature tensor and 3-dimensional Ricci scalar To compute the action we also need to expand the operators before performing the time diff. We work in Newtonian gauge, so we have It is now straightforward to expand the effective action up to second order in gravitons and first order in scalar perturbations, To recover the limit of General Relativity, we set all the m i couplings to zero and choose f = 1.
B Expression for α T in DHOST
Including also the DHOST operators, the coefficient α T is given by C Equations of motion for the π field including spatial gradients | 7,324.8 | 2020-06-11T00:00:00.000 | [
"Physics"
] |
M2-like tumor-associated macrophages transmit exosomal miR-27b-3p and maintain glioblastoma stem-like cell properties
There is growing evidence supporting the implications of exosomes-shuttled microRNAs (miRs) in the phenotypes of glioblastoma stem cells (GSCs), whilst the role of exosomal miR-27b-3p remains to be established. Herein, the aim of this study was to investigate the effect of M2 tumor-associated macrophage (TAM)-derived exosomal miR-27b-3p on the function of GSCs. Clinical glioblastoma (GBM) specimens were obtained and GSCs and M2-TAMs were isolated by fluorescence-activated cell sorting (FACS), and exosomes were separated from M2-TAMs. It was observed that M2-TAM-derived exosomes promoted the stem-like properties of GSCs. Gain- and loss- of function assays were then conducted to explore the effects of exosomal miR-27b-3p and the miR-27b-3p/MLL4/PRDM1 axis on GSC phenotypes. A xenograft tumor model of GBM was further established for in vivo substantiation. Inhibition of miR-27b-3p in M2-TAMs reduced exosomal miR-27b-3p transferred into GSCs and consequently diminished GSC viability in vitro and tumor-promoting effects of GSCs in vivo. The interaction among miR-27b-3p, mixed linked leukemia 4 (MLL4), positive regulatory domain I (PRDM1) was validated by dual-luciferase and ChIP assays. MLL4 positively regulated PRDM1 expression by inducing methylation in the PRDM1 enhancer region and ultimately reduced IL-33 expression. miR-27b-3p targeted MLL4/PRDM1 to activate IL-33 and maintain the stem-like function of GSCs. In conclusion, our study elucidated that M2-TAM-derived exosomal miR-27b-3p enhanced the tumorigenicity of GSCs through the MLL4/PRDM1/IL-33 axis.
INTRODUCTION
Glioblastoma (GBM) is the most deadly subtype of primary brain tumors associated with a poor prognosis with a median survival shorter than 2 years [1]. Cancer stem cells (CSCs) are a subpopulation of tumorigenic cells with high self-renewal potential at the apex of cellular hierarchies [2]. Notably, glioblastoma stem cells (GSCs) function as major contributors to the poor prognosis of GBM through supporting chemoresistance, radio-resistance, angiogenesis, and recurrence [3][4][5]. Tumor-associated macrophages (TAMs) produce factors that not only stimulate malignant behaviors of tumor cells but also enhance tumor vascularization [6]. Evidence exists revealing that TAMs may promote the invasiveness of GSCs [7]. Following previous documentation, this study aimed to further explore the function and mechanistic actions of TAMs and GSCs in GBM.
Extracellular vesicles (EVs) are membrane vesicles (including exosomes, microvesicles, and apoptotic bodies) capable to modulate the function of recipient cells by delivering RNAs, proteins and other molecular constituents [8]. For instance, enrichment of microRNA-21 (miR-21) in M2 bone marrowderived macrophage-derived exosomes has been unraveled to promote the immune escape of glioma cells [9]. Intriguingly, it has been suggested that exosomes secreted by M2-TAMs contain high levels of miR-27b-3p [10], which has key roles to play in the pathogenesis of glioma [11]. miR-27b may raise the invasiveness of glioma cells by targeting Sprouty homolog 2 [12]. Herein, the modulatory effect of M2-TAMs may be associated with miR-27b-3p delivered by exosomes.
Furthermore, results of our bioinformatics analysis predicted mixed linked leukemia 4 (MLL4) as a target of miR-27b. In GBM, highly expressed histone H3 lysine 4 methyltransferase MLL4 prolongs the overall survival of patients [13]. Moreover, our coexpression analysis showed that the expression of MLL4 is positively correlated with that of positive regulatory domain I (PRDM1) in GBM. PRDM1 is a key component in the orderly transition from pluripotent state to defined neural lineages [14]. Interestingly, diminished PRDM1 expression is linked to poor survival and aggravated pathological grade of human glioma [15] and IL-33 is found to express heterogeneously in cancerous tissues and shares an association with inferior survival in patients suffered from relapse GBM [16]. More recently, a study indicated that IL-33 accelerates the invasion of glioma cells [17].
Based on the aforementioned evidence, this study intended to examine the impact of M2-TAM-derived exosomes-containing miR-27b-3p on GSCs and the underlying mechanisms likely associated with the MLL4/PRDM1/IL-33 regulatory axis.
M2-TAMs promote the viability of GSCs
To further elucidate the molecular mechanism of M2-TAMs in tumor promotion, GEPIA was utilized to obtain the potential relationship between the survival rate of GBM patients and the expression of pan-TAM marker Iba1 (also named AIF1 in NCBI) and M2-TAM marker CD163 in TCGA database. Kaplan-Meier analysis results showed that GBM patients with lower expression of Iba1 or CD163 had a relatively better prognosis (Fig. 1A-D). Next, the tumor-supporting M2-TAMs (CD11b + /CD163 + ) and control TAMs (CD11b + /CD163 − ) were sorted by fluorescence-activated cell sorting (FACS) using the well-known TAM labeling CD11b and the classic M2-TAM labeling CD163. It was confirmed by reverse transcription quantitative polymerase chain reaction (RT-qPCR) that compared with control TAMs, the expression of CD163 was increased in M2-TAMs sorted by FACS (Fig. 1E). Accordingly, TAMs could be recruited by GSCs and maintained as M2-TAMs to promote GBM tumor growth.
Next, GSCs were isolated, the stemness of which was first assessed. Increasing evidence suggests that Oct4 and Sox2 are marker proteins of stem cells and tumor stem cells, which are essential for the maintenance of tumor stem cell stemness [5,18,19]. In addition, neural stem cell marker proteins CD133 and Nestin have also been found to be marker proteins of glioma stem cells, and their high expression is crucial for the self-renewal and proliferation of GSCs [18,20]. GFAP is an intermediate filament protein, and its expression is related to the maturity of tumor cells. In the GSCs, GFAP signal is difficult to detect. However, GFAP is frequently highly expressed in mature glioma cells. Therefore, it is often used as a marker of GSC differentiation [21,22]. GSCs were cultured in the nerve basal medium containing growth factor for suspension growth in a non-adherent culture system to form neurospheres (Fig. 1F). In contrast to the primary GBM cells, the levels of stem cell-related proteins CD133, Nestin, Oct4 and Sox2 were raised while that of astrocyte activation marker GFAP was diminished in GSCs (Fig. 1G). In a Transwell system, GSCs were co-cultured with M2-TAMs or control TAMs, and the results reported that after co-culture with M2 TAMs, the formation rate and diameter of spheres in GSCs were increased obviously (Fig. 1H). The protein levels of CD133, Nestin, Oct4 and Sox2 were enhanced, while those of GFAP protein were reduced in GSCs co-cultured with M2-TAMs (Fig. 1I).
The above results suggested that M2-TAMs could enhance the viability of GSCs.
M2-TAM-derived exosomes enhance the tumorigenic properties of GSCs M2-TAM-derived exosomes were isolated to investigate whether the exosomes secreted by M2-TAMs could promote the growth of GSCs as a paracrine pathway. The diameter of the M2-TAMderived exosomes was about 180 nm as cup-shape under a transmission electron microscope (TEM) ( Fig. 2A). Further flow cytometry and Western blot analyses suggested that M2 biomarkers (CD206 and CD163) and the exosome marker CD63 were enriched in M2-TAM-derived exosomes, while CD63 was enriched in the control TAM-derived exosomes (Fig. 2B, C).
Next, the role of M2-TAM-derived exosomes was assessed in the mice bearing xenograft tumors of GSCs. M2-TAM-derived exosomes and GSCs were implanted into the brains of mice. Hematoxylin and eosin (HE) staining results revealed that versus the control TAM-derived exosomes, the tumorigenic role of GSCs was enhanced by M2-TAM-derived exosomes (Fig. 2G).
In order to investigate whether M2-TAM-derived exosomes affected the stemness maintenance of GSCs, the expression of GSCs marker Sox2 in xenograft was detected by immunohistochemical staining. In comparison to the control TAM-derived exosomes, the percentage of Sox2-labeled GSCs in xenograft was increased by M2-TAM-derived exosomes (Fig. 2H). Consequently, the survival time of mice treated with M2-TAM-derived exosomes was distinctly shortened (Fig. 2I Hence, M2-TAM-derived exosomes may be a key paracrine factor contributing to the tumorigenic properties of GSCs. miR-27b-3p transferred by M2-TAM-derived exosomes potentiates the tumorigenic properties of GSCs Exosomal miRNAs have been illustrated as important regulators of cellular functions [23]. Recently, miR-27b-3p was highly expressed in exosomes secreted by M2 macrophages [10]. Of note, miR-27b-3p expression was markedly increased in M2-TAM-derived exosomes relative to control TAM-derived exosomes (Fig. 3A).
After the exosomes and GSCs were delivered into the brain of mice, the observation after HE staining showed that the tumorpromoting effect of GSCs was enhanced by M2-TAM-derived exosomes, but suppressed by the exosomes derived from the miR-27b-3p inhibitor-treated M2-TAMs (Fig. 3G). The immunohistochemical staining presented that the percentage of Sox2-labeled GSCs was increased by the M2-TAM-derived exosomes but it was reduced following miR-27b-3p inhibition (Fig. 3H). Additionally, the survival time of mice was shortened by treatment with M2-TAM-derived exosomes, but prolonged by the exosomes derived from the miR-27b-3p inhibitor-treated M2-TAMs (Fig. 3I).
Together, M2-TAM-derived exosomes can promote the properties of GSCs by delivering miR-27b-3p. Average diameter (μm) the control EXO group, the inhibitor-NC/TAM group, the NC/M2 EXO group, or the PBS group. # P < 0.05 vs. the NC/M2 EXO group. Measurement data were depicted as mean ± standard deviation. Comparison of data between two groups was conducted by unpaired t test, while that among multiple groups by one-way ANOVA followed by Tukey's post hoc test. Cell experiments were repeated three times independently.
Overexpression of MLL4 inhibits the properties of GSCs
We next elucidated the molecular mechanism of miR-27b-3p in the maintenance of GSCs. miRWalk, DIANA TOOLS, RNA22, and starBase were utilized to predict the targeted genes of miR-27b-3p, and 131, 4496, 10,104, and 946 genes were obtained, respectively. Interestingly, 5 important downstream mRNAs were found in the intersection (Fig. 4A). STRING database was adopted to predict genes related to the aforementioned downstream mRNAs and to construct a protein-protein interaction (PPI) network. MLL4 (also named KMT2D in NCBI) had the highest core degree among the genes related to the downstream mRNAs ( Fig. 4B), which was selected as key downstream gene for subsequent analyses. The binding site of miR-27b-3p and MLL4 was predicted by starBase (Fig. 4C). MLL4 was overexpressed by oe-MLL4 transfection and knocked down by short hairpin RNA (sh)-MLL4 (sh-MLL4#1, sh-MLL4#2) transfection in GSCs to examine its role in GSCs. It was noted that sh-MLL4#1 which presented with the optimal silencing efficiency was selected for subsequent experiments (Fig. 4D). Consequently, neurosphere formation in GSCs was suppressed following overexpression of MLL4, while it was promoted following silencing of MLL4 ( Fig. 4E and Supplementary Fig. 1B).
In addition, the protein levels of CD133, Nestin, Oct4, and Sox2 proteins were reduced while GFAP protein level was increased by MLL4 overexpression in GSCs, which were opposite to the changes caused by MLL4 knockdown (Fig. 4F). Meanwhile, the viability of GSCs was suppressed by MLL4 overexpression, but it was facilitated by MLL4 silencing (Fig. 4G).
Overall, overexpression of MLL4 attenuated the properties of GSCs, while its knockdown abolished this effect.
In addition, MLL4 expression was detected to be increased at the mRNA and protein levels when miR-27b-3p expression was inhibited in a dose-dependent manner (Fig. 5B, C). More importantly, the luciferase activity of MLL4-WT was diminished in HEK-293 cells transfected with miR-27b-3p mimic; on the contrary, MLL4-MUT had no changes after transfection (Fig. 5D). These results suggested that miR-27b-3p could directly target MLL4 3′UTR.
In summary, miR-27b-3p can bind to MLL4 and reduce its expression.
miR-27b-3p enhances the properties of GSCs by downregulating the MLL4/PRDM1/IL-33 axis PRDM1 has been reported to affect the progression of glioma [15], and IL-33 has been demonstrated to promote the stemness of tumor stem cells [24]. GEPIA analysis revealed a close correlation of PRDM1 expression with the survival of GBM patients (Fig. 6A), and a positive correlation was found between MLL4 expression and PRDM1 expression in GBM (Fig. 6B). MEM analysis presented a significant co-expression relationship between MLL4 expression and PRDM1 expression as well as PRDM1 expression and IL-33 expression (Fig. 6C, D).
GSCs were transfected with oe-MLL4 and the expression of IL-33 was determined to be reduced (Fig. 6E). Besides, miR-27b-3p inhibitor diminished the expression of IL-33 in GSCs (Fig. 6F). It Measurement data were depicted as mean ± standard deviation. Comparison of data among multiple groups was conducted by one-way ANOVA followed by Tukey's post hoc test. The data among multiple groups at different time points were compared using repeated-measures ANOVA with Bonferroni's post hoc test. Cell experiments were repeated three times independently.
was found that PRDM1 could recruit G9a to the IL-33 promoter, promoting H3K9 modification and inhibiting its transcription [25]. Thus, we speculated that MLL4 might downregulate IL-33 through PRDM1 and thus affected the functions of GSCs.
It was revealed that PRDM1 expression and H3K9me3 level were elevated in GSCs transfected with oe-MLL4 (Fig. 6E, G). Next, GSCs were transfected with oe-MLL4 or co-transfected with oe-MLL4 and sh-PRDM1. Consequently, PRDM1 expression and H3K9me3 level were diminished and IL-33 expression was elevated in GSCs co-transfected with oe-MLL4 and sh-PRDM1 as compared to oe-MLL4 treatment alone (Fig. 6E, G).
Finally, chromatin immunoprecipitation (ChIP) assay demonstrated that H3K9me3 recruited more IL-33 in the GSCs with oe-MLL4, while H3K9me3 recruited less IL-33 in the GSCs cotransfected with oe-MLL4 and sh-PRDM1 (Fig. 6H). The results of Western blot analysis that overexpression of MLL4 raised the level of H3K4me1 in GSCs (Fig. 6I). ChIP assay further demonstrated that upregulated MLL4 promoted the methylation of PRDM1 enhancer region (Fig. 6J). These results indicated that MLL4 downregulated IL-33 by increasing PRDM1.
The function of PRDM1 in GSCs was further verified using gain-of-function approaches. The formation rate and diameter of sphere of GSCs were attenuated by PRDM1 overexpression in GSCs ( Fig. 6K and Supplementary Fig. 1C). Consistently, the levels of CD133, Nestin, Oct4, and Sox2 were diminished while GFAP expression was enhanced by PRDM1 overexpression in GSCs (Fig. 6L).
The above results suggested that miR-27b-3p could promote the properties of GSCs by downregulating the MLL4/PRDM1/IL-33 axis. It was revealed by our study that M2-TAM-derived exosomes raised the formation rate and diameter of spheres in GSCs and potentiated their tumor-initiating effects. M2-TAMs have also been reported to correlate with oncogenesis and tumor progression. For instance, M2-TAMs are markedly related to the development of premalignant lesions to oral squamous cell carcinoma [26]. The EVs originating from M2-TAMs contain deaminase proteins or regulatory molecules of deaminase-specific transcription/translation which are implicated in cancer progression [27]. Another study suggested that tumor-derived exosomes from hypoxic conditions accelerate tumor growth and angiogenesis in GBM [28]. In this study, M2-TAM-derived exosomes facilitated tumorigenesis through maintenance of the GSCs, which was supported by the in vivo data from the murine model.
DISCUSSION
In addition, inhibition of miR-27b-3p in the M2-TAM-derived exosomes attenuated the stem-like properties and tumorpromoting effect of GSCs and consequently prolonged the survival time of mice with GBM. It has been suggested previously that upregulation of miR-27b-3p accelerates the proliferation and apoptosis resistance in myeloma fibroblasts [29]. miR-27b-3p functions as an oncogenic miR in colorectal cancer [29]. In addition, the expression of miR-27b-3p is heightened in Doxresistant anaplastic thyroid cancer cells and its ectopic expression enhances the Dox resistance [30].
In this study, we further found that miR-27b-3p directly targeted MLL4 and negatively regulated its expression in GSCs. It is wellknown that miRNAs promote the degradation of target mRNAs to inhibit the expression of the mRNAs [31]. Moreover, our study demonstrated that upregulating MLL4 reduced self-renewal and tumor-promoting properties of GSCs and inhibited their viability. It is Measurement data were depicted as mean ± standard deviation. Comparison of data between two groups was conducted by unpaired t test, while that among multiple groups by oneway ANOVA followed by Tukey's post hoc test. Cell experiments were repeated three times independently.
reported that restoration of MLL4 positively influences the outcome of GBM patients [13]. Our study revealed that MLL4 expression was positively related to PRDM1 expression, and that MLL4 could promote PRDM1 transcription through the increase of the H3K4me1 enrichment at the enhancer region of PRDM1. As an H3K4 methylation methyltransferase, MLL4 often catalyzes H3K4me1/2 modification and H3K4me1/2 is widely distributed in the upstream enhancer region of genes to provide gene transcription [32,33]. Further, previous literature has reported that PRDM1 can recruit G9a to the IL-33 promoter, thus promoting H3K9 modification, and inhibiting its transcription [25]. Meanwhile, G9a is a methyltransferase which catalyzes methylation of H3K9 [34]. In this study, we further explored it in GBM cells and found that it did exist, and had an important regulatory effect on GBM growth. The bioinformatics and biological approaches of Wang's study have demonstrated that the tumor-suppressive PRDM1 was a direct target gene of miR-30a-5p, and aberrant deficiency of PRDM1 was attributable to miR-30a-5p overexpression in GBM which contribute to phenotype maintenance and pathogenesis of gliomas [15]. Consistent with our findings, another study has indicated that PRDM1 could recruit G9a to the IL-33 promoter, promote H3K9 modification and inhibit its transcription [25]. It has been suggested that overexpression of PRDM1 inhibits cell proliferation, cell cycle arrest and enhances apoptosis of tumor cells [35]. Also, it is presented that upregulating PRDM1 in human colon cancer organoids can suppress the growth and formation of colon tumor organoids in vitro [36]. Our data substantiated that miR-27b-3p targeted MLL4/PRDM1 to activate IL-33 and thereby contributed to the maintenance of the stem-like function of GSCs. Nevertheless, the interactions among miR-27b-3p, MLL4, PRDM1, and IL-33 need further exploration.
In conclusion, our study provides evidence that M2-TAMs exosomal miR-27b-3p promotes the stem-like phenotype of GSCs via mediating the MLL4/PRDM1/IL-33 axis (Fig. 7). This study provides a new mechanistic insight for the development of GBM. However, a conclusion about the oncogenic effects of M2-TAMs exosomal miR-27b-3p may be limited by the lack of clinical data on this, which should be further probed in future studies.
Study subjects
Tumor specimens were collected from 6 GBM patients who underwent resection at Jilin Medical University from 2017 to 2019. According to the World Health Organization (WHO) classification, the enrolled patients were histopathologically diagnosed with GBM. Fresh GBM specimens were collected for FACS to isolate GSCs and TAMs.
Isolation of GSCs
The fresh GBM tumor was separated by Papain separation system (Worthington Biochemical Corporation, Freehold, NJ). Cells were cultured for 6 h in nerve basal medium (Invitrogen, Carlsbad, CA) and B27 supplements (20 ng/mL, Thermo Fisher Scientific, Waltham, MA), epidermal growth factor (20 ng/mL, Peprotech, Rocky Hill, NJ) and basic fibroblast growth factor (20 ng/mL, Peprotech) to re-express GSC surface markers. Then, cells were labeled with fluorescein isothiocyanateconjugated CD15 antibody (BD Biosciences, Franklin Lakes, NJ) and P-phycoerythrin (PE)-conjugated CD133 antibody (Miltenyi, 130-090-854) at 4°C for 40 min. Next, GSCs (CD15 + /CD133 + ) were isolated by FACS. The characteristics of GSC were verified using GSC markers Sox2, OLIG2, CD15, and CD133, and a series of functional tests, including tumor sphere formation, serum-induced differentiation, and in vivo limiting-dilution assays [37]. The enriched GSCs were continuously preserved as GBM xenograft, and the cells only cultured in vitro stem cell medium were used for functional experiments. The cells had been identified by karyotype and morphology. All the cells were tested by RT-qPCR for mycoplasma contamination and confirmed to be mycoplasma-free.
Cell co-culture
Using a Transwell co-culture system (Corning), M2-TAMs were co-cultured with GSCs, while TAMs without co-culture were used as the control. Cells were co-cultured for 24 h and then adopted for neurosphere forming assay and Western blot analysis.
Cell transfection
miR-27b-3p inhibitor and corresponding negative control (NC) were purchased from RiboBio Co., Ltd. (Guangdong, Guangzhou, China). Cells were cultured in a six-well plate before transfection in accordance with the specification of RiboBio using Lipofectamine 3000 reagent (Invitrogen).
Neurosphere forming assay and cell viability assessment
GSCs were co-cultured with M2-TAM-derived exosomes (40 μg/mL) for 3 cycles (2 d/cycle). GSCs co-cultured or not with M2-TAM-derived exosomes were dispersed into single cell suspension and seeded in a six-well plate (1 × 10 4 cells/well) with 0.5% agarose, and cultured in serum-free Dulbecco's modified Eagle's medium/Ham's F-12 medium (DMEM/F12) containing growth factors. After 12 d, the number of spheres in each well was counted. The formation rate of sphere (%) was equal to the number of spheres divided by the number of individual cells initially seeded. The diameter of sphere was measured by Image-Pro Plus 6.0 software. In the light of the manufacturer's instructions, the determination of cell viability was performed by Cell Titer-Glo Luminescent Cell Viability Assay kit (Promega).
Isolation and purification of exosomes
TAMs (2.0 × 10 6 cells/well) were seeded in serum-free high glucose DMEM (Gibco, Carlsbad, California) for 48 h. The medium was centrifuged in a 50 mL centrifuge tube at 300 × g for 15 min to collect the supernatant. Next, the supernatant was subjected to a series of low-speed centrifugation steps (2000 × g, 10 min) to discard cell debris. Subsequently, the supernatant was centrifuged at 10,000 × g for 30 min, ultra-centrifuged at 100,000 × g for 120 min (Optima L-100XP, Beckman Coulter) and finally centrifuged for 120 min at 100,000 × g. The obtained pellet was the exosomes. The sucrose density gradient fractionation was used for exosome purification [38]. The concentrated exosomes were stored at −80°C.
Characterization of exosomes
The morphology of TAM-derived exosomes was observed by a TEM (Tecnai Spirit, FEI). The content of total protein in the exosomes derived from 2.0 × 10 6 cells within 48 h was measured by bicinchoninic acid (BCA) protein detection kit (23235, Thermo Fisher Scientific, Rochester, NY). TAM-derived exosomes were labeled by PE-bound anti-human CD63 antibody (12-
Tumor xenograft in NOD/SCID mice
An in-situ orthotopic xenograft model was developed in mice by intracranial injection of GBM cells [39]. Male non-obese diabetic/severe combined immunodeficiency (NOD/SCID) mice (4-6 weeks old) were purchased from Guangzhou Saiye Biotechnology Co., Ltd. (Guangzhou, China). In order to discuss the tumor-promoting effects of TAM-derived exosomes on GSC-driven tumor growth, TAM-derived exosomes and GSCs (5 × 10 3 cells per mouse) were transplanted into the brain of mice.
Histological analyses
After 40 days, the brain of mice bearing GBM was fixed by 10% neutral formalin solution and sliced into sections. The tissue sections were stained with HE and sealed by neutral gum. The histopathological changes of tissues were observed under the BX51 microscope (Olympus Optical Co. Ltd, Tokyo, Japan). For immunohistochemical staining, the primary antibody anti-Sox2 (#3579, 1: 200, Cell Signaling Technology, Beverly, MA) was employed. The percentage of Sox2 positive cells was quantified in five randomly selected regions of each tumor specimen.
ChIP assay
ChIP kit (Millipore) was adopted to test the binding of PRDM1 enhancer to IL-33 promoter. Upon reaching about 70-80% confluence, cells were crosslinked with 1% formaldehyde for 10 min and then randomly broken into fragments by ultrasonication. The supernatant was collected after centrifugation at 13,000 rpm and 4°C. The supernatant was severally incubated with positive control antibody RNA polymerase II, anti-human IgG and rabbit anti-H3K4me1 (1:100, ab176877, Abcam) and H3K9me3 (1:100, ab8898, Abcam) at 4°C overnight. Endogenous DNA-protein complex was precipitated by Protein Agarose/Sepharose and centrifuged to remove the supernatant, and nonspecific complex was rinsed. The cross-linking was reversed at 65°C overnight. DNA fragments were purified and recovered by phenol/chloroform extraction. The expression of PRDM1 enhancer and IL-33 promoter was determined by RT-qPCR.
Statistical analysis
All data were analyzed by SPSS 21.0 software (IBM Corp. Armonk, NY). Measurement data were indicated as mean ± standard deviation. Comparison between two groups was conducted by unpaired t test, while that among multiple groups by one-way analysis of variance (ANOVA) followed by Tukey's post hoc test. The data at different time points were compared using repeated measures ANOVA with Bonferroni's post hoc test. p value less than 0.05 was indicative of statistically significant difference.
DATA AVAILABILITY
The datasets generated/analyzed during the current study are available. | 5,406.8 | 2022-08-04T00:00:00.000 | [
"Biology"
] |
High-Repetition-Rate Femtosecond Laser Processing of Acrylic Intra-Ocular Lenses
The study of laser processing of acrylic intra-ocular lenses (IOL) by using femtosecond laser pulses delivered at high-repetition rate is presented in this work. An ultra-compact air-cooled femtosecond diode laser (HighQ2-SHG, Spectra-Physics) delivering 250 fs laser pulses at the fixed wavelength of 520 nm with a repetition rate of 63 MHz was used to process the samples. Laser inscription of linear periodic patterns on the surface and inside the acrylic substrates was studied as a function of the processing parameters as well as the optical absorption characteristics of the sample. Scanning Electron Microscopy (SEM) Energy Dispersive X-ray Spectroscopy (EDX), and micro-Raman Spectroscopy were used to evaluate the compositional and microstructural changes induced by the laser radiation in the processed areas. Diffractive characterization was used to assess 1st-order efficiency and the refractive index change.
Introduction
Polymeric materials have become well-known in recent decades and have experienced an outstanding development achieving unique properties that have allowed them to enter rapidly in almost all industrial, technological and biotechnological applications in semiconductor manufacturing and coatings, household appliances, automotive, electronics, aerospace, as well as in biomedicine, bioengineering, pharmaceutical and ophthalmology. It is worth noting their application like lab-on-chip devices, storage devices, optoelectronic and photovoltaic devices, micro-fluidic channels, orthopedic, dental, hard and soft tissue replacements, cardiovascular devices, drug delivery, and as contact and intraocular lenses [1][2][3][4][5][6][7][8][9][10][11][12]. In fact, polymers represent the largest class of materials used for biomedical applications. The reasons why they are the preferred material are resource efficiency and energy saving, easy and reliable processing capabilities for diverse styling and design. In addition, polymers usually have excellent bulk physical and chemical properties such as low surface energy, hydrophobicity, and high electrical resistance [1,2].
Ultrafast Laser Inscription (ULI) has become a powerful and versatile technique for selective surface and bulk processing. In this technique, ultra-short laser pulses are tightly focused inside transparent materials inducing nonlinear absorption processes in the focal volume and leading to permanent weak local refractive index variations, formation of nano-voids, crystallization processes or even chemical transformations. This technique has been widely used to modify crystalline and glassy matrices as well as polymers to produce passive and active photonic devices, to create 2D/3D micro/nanostructures or to activate and functionalize the surface [15][16][17][18][19][20][22][23][24][25][26][27][28][29][30][31][32][33][34][35]. Nevertheless, despite laser ablation of polymers is a well stablished process for industrial applications the contribution of the main mechanisms which may take part in the laser-polymer interaction process, i.e., photo-chemical and photo-thermal decomposition processes, are not clearly solved and the discussion is still controversial. Photo-thermal ablation induces electronic excitation and thermalization whereas in photo-chemical ablation, covalent bonds in the polymer chains are directly broken by high-energy UV photons. Ablation mechanisms depend on laser characteristics such as wavelength, pulse duration, repetition rate, fluence and intensity, and on material properties such as absorption, reflectance and thermal conductivity. At the extreme intensities reached by ultrashort laser pulses absorption becomes nonlinear. Since the first report on laser ablation of polymers by Srinivasan and Kawamura in 1982 [36,37] a wide variety of polymers have been processed by using laser radiation with pulsewidth from the ns-to the fs-range and wavelengths in the UV, VIS and IR spectral regions aiming at explaining the nature of these mechanisms as well as incubation phenomena, studying the correlation of the laser processing parameters on the modification of the morphology, optical properties and chemical composition [38][39][40][41].
It is well known that the inscription of diffractive optical elements such as diffraction gratings can be used to modify the refractive index and hence the refractive power of an optical device by using short and ultrashort laser pulses with pulse energy below damage threshold [32][33][34][35]42,43]. In this work, we report on the fabrication of diffractive gratings in acrylic intra-ocular lenses by using high-repetition-rate ultrashort laser pulses. For this purpose, periodic linear patterns were inscribed on the surface and inside the polymer sample modifying the pulse energy, the scanning speed, and the inter-line spacing. Processed samples were analyzed by optical and contrast phase microscopy, SEM-EDX, confocal micro-Raman spectroscopy and diffractive techniques under illumination of a cw He-Ne laser.
Laser Processing
As the laser source, a HighQ2-SHG femtosecond system (Spectra Physics, Santa Clara, CA, USA) delivering 250 fs laser pulses at a fixed wavelength of 520 nm and a repetition rate of 63 MHz was used to process the samples. The laser beam was focused on the surface and 150 µm beneath the surface by using a 100× long working distance infinity corrected microscope objective (NA = 0.6). According to the equation d 0 = 2 × 1.22 × λ/NA [22], the theoretical diameter at the focal plane d 0 was 2.1 µm. Pulse energy was set at 1 and 2 nJ with help of a calibrated neutral density filter. The sample, placed in a 3D motorized stage, was scanned to produce parallel tracks with lateral separation of 10 µm, 20 µm and 40 µm by using speeds of 0.25 mm/s, 0.50 mm/s and 1 mm/s. According to preliminary experiments, these values of pulse energy and scanning speed were considered the optimal to inscribe low damage diffraction gratings. As the substrate, a 320 µm thick hydrophobic UV-photo-reactive polybenzylmethacrylate polymer employed as ophthalmic intra-ocular lens was used (Contateq, Eindhoven, The Netherlands).
Characterization Techniques
Photographs were taken with a contrast phase microscope (B-800PH, Optika, Ponteranica, Italy) and a stereoscope lupe (SZM-LED2, Optika, Ponteranica, Italy). Optical transmission spectra were obtained by means of a spectrophotometer (U-3400, Hitachi, Abingdon, UK). Raman dispersion measurements were performed using a confocal Raman Spectrometer (Alpha 300 M+, WITec, Ulm, Germany) equipped with a thermoelectric-cooled CCD detector. A continuous wave 532 nm laser was used as the excitation source. The backscattered light was collected through a 20× microscope objective lens. The output power of the laser was kept below 20 mW in order to avoid significant local heating of the sample. A continuous 3 mW He-Ne laser at 632.8 nm was used to illuminate the diffraction gratings to characterize the diffractive modes and the refractive index variation.
Ultrafast Laser Inscription of Diffraction Gratings
Polymers used for ophthalmic applications commonly incorporate UV filters to mimic the natural characteristics of the optical tissue to be substituted. In particular, for the case of polymers used to replace the crystalline lens, these UV blocking chromophores prevent most UV radiation between 300 nm and 400 nm which is not useful for vision and may damage the retina [44]. Figure 1 shows the optical transmission spectrum of the acrylic IOL recorded at room temperature between 250 nm and 800 nm. It can be observed that the absorption cut-off wavelength is placed at 375 nm. In addition, optical transmittance at the laser wavelength used to process the samples, 520 nm is 84.3%. For comparison purposes the transmission spectrum of the crystalline lens is also included.
Polymers 2020, 12, x FOR PEER REVIEW 3 of 9 Germany) equipped with a thermoelectric-cooled CCD detector. A continuous wave 532 nm laser was used as the excitation source. The backscattered light was collected through a 20× microscope objective lens. The output power of the laser was kept below 20 mW in order to avoid significant local heating of the sample. A continuous 3 mW He-Ne laser at 632.8 nm was used to illuminate the diffraction gratings to characterize the diffractive modes and the refractive index variation.
Ultrafast Laser Inscription of Diffraction Gratings
Polymers used for ophthalmic applications commonly incorporate UV filters to mimic the natural characteristics of the optical tissue to be substituted. In particular, for the case of polymers used to replace the crystalline lens, these UV blocking chromophores prevent most UV radiation between 300 nm and 400 nm which is not useful for vision and may damage the retina [44]. Figure 1 shows the optical transmission spectrum of the acrylic IOL recorded at room temperature between 250 nm and 800 nm. It can be observed that the absorption cut-off wavelength is placed at 375 nm. In addition, optical transmittance at the laser wavelength used to process the samples, 520 nm is 84.3%. For comparison purposes the transmission spectrum of the crystalline lens is also included. Optical transmission spectrum of acrylic intra-ocular lens. The arrow points out the laser wavelength used to process the sample, 520 nm, for which optical transmittance is 84.3%. Crystalline lens optical transmission spectrum is also included for comparison purposes.
Since the optical absorption of the sample at the laser processing wavelength is very low only weak modification should be induced in the material at low processing rates, in the range of few μm/s. Nevertheless, the characteristics of this laser oscillator with 63 MHz repetition-rate and 250 fs pulse duration, properly combined with a high numerical aperture focusing objective, 0.6 NA, allow inducing non-linear absorption processes so that it was possible to produce linear diffraction gratings as on the surface as inside the material at high processing rates. In particular, diffraction gratings were processed on the surface and 150 μm underneath the surface at 0.25 mm/s, 0.5 mm/s and 1 mm/s with 10 μm, 20 μm and 40 μm inter-line spacing. It is worth highlighting that for ophthalmic applications diffraction gratings inscribed inside the sample in which the surface remains unaltered are preferred. Figure 2a shows a horizontal linear diffraction grating inscribed on the surface of the sample placed above a vertical linear diffraction grating inscribed 150 μm inside the material, Figure 2b. Both samples were processed at 1 mm/s with 10 μm inter-line spacing and 2 nJ pulse energy. Optical transmission spectrum of acrylic intra-ocular lens. The arrow points out the laser wavelength used to process the sample, 520 nm, for which optical transmittance is 84.3%. Crystalline lens optical transmission spectrum is also included for comparison purposes.
Since the optical absorption of the sample at the laser processing wavelength is very low only weak modification should be induced in the material at low processing rates, in the range of few µm/s. Nevertheless, the characteristics of this laser oscillator with 63 MHz repetition-rate and 250 fs pulse duration, properly combined with a high numerical aperture focusing objective, 0.6 NA, allow inducing non-linear absorption processes so that it was possible to produce linear diffraction gratings as on the surface as inside the material at high processing rates. In particular, diffraction gratings were processed on the surface and 150 µm underneath the surface at 0.25 mm/s, 0.5 mm/s and 1 mm/s with 10 µm, 20 µm and 40 µm inter-line spacing. It is worth highlighting that for ophthalmic applications diffraction gratings inscribed inside the sample in which the surface remains unaltered are preferred. Figure 2a shows a horizontal linear diffraction grating inscribed on the surface of the sample placed above a vertical linear diffraction grating inscribed 150 µm inside the material, Figure 2b. Both samples were processed at 1 mm/s with 10 µm inter-line spacing and 2 nJ pulse energy.
Microstructural and Compositional Characterization
Morphology and semi-quantitative chemical composition analyses of the processed IOL samples were carried out by SEM-EDX microanalysis to analyze the effects appeared in the polymer as a consequence of laser irradiation. Figure 3a shows a top-view micrograph of the processed area for a periodic pattern inscribed on the surface of the sample at 0.25 mm/s and 1 nJ pulse energy. As can be observed, linear tracks induced by the ultrashort laser radiation were quite inhomogeneous combining areas of low and high damage. It is worth mentioning that due to the high-repetition-rate of the laser source, the interaction with the polymer is produced in thermal regime. The repetition rate, or frequency, is a critical parameter in materials laser processing. Depending on the thermal properties of the material, accumulation of multiple laser pulses over the same point may result in an increase of the local temperature. The critical frequency, fcr, provides the cross-over between thermal and non-thermal regime and can be calculated according to [22]: , where Dth is the thermal diffusivity and dlaser the laser beam diameter. Accounting that the thermal diffusivity of acrylic polymers ranges 10 −3 cm 2 /s [45] and that the average beam diameter measured on the processed areas is 3 μm, the critical frequency results around 10 kHz. The frequency of the laser used to process these samples, 60 MHz, is well above this critical value so that the fabrication of diffraction gratings by using this laser source is in thermal regime. In these conditions it is expected
Microstructural and Compositional Characterization
Morphology and semi-quantitative chemical composition analyses of the processed IOL samples were carried out by SEM-EDX microanalysis to analyze the effects appeared in the polymer as a consequence of laser irradiation. Figure 3a shows a top-view micrograph of the processed area for a periodic pattern inscribed on the surface of the sample at 0.25 mm/s and 1 nJ pulse energy. As can be observed, linear tracks induced by the ultrashort laser radiation were quite inhomogeneous combining areas of low and high damage.
Microstructural and Compositional Characterization
Morphology and semi-quantitative chemical composition analyses of the processed IOL samples were carried out by SEM-EDX microanalysis to analyze the effects appeared in the polymer as a consequence of laser irradiation. Figure 3a shows a top-view micrograph of the processed area for a periodic pattern inscribed on the surface of the sample at 0.25 mm/s and 1 nJ pulse energy. As can be observed, linear tracks induced by the ultrashort laser radiation were quite inhomogeneous combining areas of low and high damage. It is worth mentioning that due to the high-repetition-rate of the laser source, the interaction with the polymer is produced in thermal regime. The repetition rate, or frequency, is a critical parameter in materials laser processing. Depending on the thermal properties of the material, accumulation of multiple laser pulses over the same point may result in an increase of the local temperature. The critical frequency, fcr, provides the cross-over between thermal and non-thermal regime and can be calculated according to [22]: where Dth is the thermal diffusivity and dlaser the laser beam diameter. Accounting that the thermal diffusivity of acrylic polymers ranges 10 −3 cm 2 /s [45] and that the average beam diameter measured on the processed areas is 3 μm, the critical frequency results around 10 kHz. The frequency of the laser used to process these samples, 60 MHz, is well above this critical value so that the fabrication of diffraction gratings by using this laser source is in thermal regime. In these conditions it is expected It is worth mentioning that due to the high-repetition-rate of the laser source, the interaction with the polymer is produced in thermal regime. The repetition rate, or frequency, is a critical parameter in materials laser processing. Depending on the thermal properties of the material, accumulation of multiple laser pulses over the same point may result in an increase of the local temperature. The critical frequency, f cr , provides the cross-over between thermal and non-thermal regime and can be calculated according to [22]: where D th is the thermal diffusivity and d laser the laser beam diameter. Accounting that the thermal diffusivity of acrylic polymers ranges 10 −3 cm 2 /s [45] and that the average beam diameter measured on the processed areas is 3 µm, the critical frequency results around 10 kHz. The frequency of the laser used to process these samples, 60 MHz, is well above this critical value so that the fabrication of diffraction gratings by using this laser source is in thermal regime. In these conditions it is expected the laser to induce thermal damage and decomposition. This assumption was confirmed by EDX analysis and micro-Raman spectroscopy. Figure 3b shows the profile of semi-quantitative compositional variation of both carbon and oxygen content in the periodic pattern inscribed on the surface of the acrylic IOL shown in Figure 3a. For clarification purposes, oxygen content has been doubled in the figure. Both carbon and oxygen content decreased along the irradiated area resulting in maximal variation at the center of the laser track. This diminution was approximately 30% and 40% for carbon and oxygen respectively. Figure 4 shows micro-Raman spectra of the non-processed IOL as well as the spectra acquired in periodic patterns processed as on the surface as 150 µm underneath the surface in the wavenumber region 400-3500 cm [46][47][48][49]. Raman spectra of samples processed on the surface and inside the IOL showed a strong diminution of Raman intensity. Furthermore, bands placed at 1310 cm −1 and 1347 cm −1 turned into a broad band. Therefore, laser radiation induced a photo-thermal damage and hence the structure degradation of the acrylic intra-ocular lens.
Polymers 2020, 12, x FOR PEER REVIEW 5 of 9 the laser to induce thermal damage and decomposition. This assumption was confirmed by EDX analysis and micro-Raman spectroscopy. Figure 3b shows the profile of semi-quantitative compositional variation of both carbon and oxygen content in the periodic pattern inscribed on the surface of the acrylic IOL shown in Figure 3a. For clarification purposes, oxygen content has been doubled in the figure. Both carbon and oxygen content decreased along the irradiated area resulting in maximal variation at the center of the laser track. This diminution was approximately 30% and 40% for carbon and oxygen respectively. Figure 4 shows micro-Raman spectra of the non-processed IOL as well as the spectra acquired in periodic patterns processed as on the surface as 150 μm underneath the surface in the wavenumber region 400-3500 cm [46][47][48][49]. Raman spectra of samples processed on the surface and inside the IOL showed a strong diminution of Raman intensity. Furthermore, bands placed at 1310 cm −1 and 1347 cm −1 turned into a broad band. Therefore, laser radiation induced a photo-thermal damage and hence the structure degradation of the acrylic intraocular lens.
Optical Characterization
A continuous-wave He-Ne laser with emission at 632.8 nm was used to characterize the periodic patterns inscribed in the intra-ocular lenses. The angle of incidence of the He-Ne laser beam was set orthogonal to the samples. Only diffraction gratings inscribed inside the sample were optically characterized. All these samples showed diffraction patterns with diffraction angles according to the diffraction equation [42]. Intensities of zero and first diffracted orders were measured by using a power-meter to determine the 1st-order efficiency. As an example, Figure 5 shows the far-field diffraction image of the output beam transmitted through the periodic structure processed 150 μm underneath the surface, with 20 μm inter-line spacing and 1 nJ pulse energy at 0.50 mm/s.
Optical Characterization
A continuous-wave He-Ne laser with emission at 632.8 nm was used to characterize the periodic patterns inscribed in the intra-ocular lenses. The angle of incidence of the He-Ne laser beam was set orthogonal to the samples. Only diffraction gratings inscribed inside the sample were optically characterized. All these samples showed diffraction patterns with diffraction angles according to the diffraction equation [42]. Intensities of zero and first diffracted orders were measured by using a power-meter to determine the 1st-order efficiency. As an example, Figure 5 shows the far-field diffraction image of the output beam transmitted through the periodic structure processed 150 µm underneath the surface, with 20 µm inter-line spacing and 1 nJ pulse energy at 0.50 mm/s. The magnitude of the refractive index modification, Δn, was determined from the 1st-order efficiency according to the equation [50][51][52]: where λ is the wavelength of the laser light used to assess the gratings (in our case, 633 nm), θ is the incident angle from the normal in the media (in our case, 0°), η is the 1st-order efficiency and b is the grating thickness. An optical microscope was used to measure the thickness of each grating in crosssection view, which were found to be 7 μm, 6 μm and 5 μm for 0.25 mms −1 , 0.50 mms −1 and 1 mms −1 and 1 nJ respectively, and 7 μm for 1 mms −1 and 2 nJ. Figure 6 shows the 1st-order efficiency (a) and the refractive index change (b). It was observed that 1st-order efficiency decreased with the increase of the inter-line spacing whereas it increased as the energy delivered on the sample increased, either by increasing the pulse energy or by decreasing the scanning speed. It is worth highlighting that provided a pulse energy, its optimal value was achieved for a scanning speed of 0.50 mms −1 .
Concerning the refractive index change, it decreased with both the increase in the inter-line spacing and the energy delivered on the sample, with values ranging between 2.8 × 10 −3 and 4.00 × 10 −3 . These values were similar to those reported in acrylate and silicone polymers [19,[51][52][53][54]. The magnitude of the refractive index modification, ∆n, was determined from the 1st-order efficiency according to the equation [50][51][52]: where λ is the wavelength of the laser light used to assess the gratings (in our case, 633 nm), θ is the incident angle from the normal in the media (in our case, 0 • ), η is the 1st-order efficiency and b is the grating thickness. An optical microscope was used to measure the thickness of each grating in cross-section view, which were found to be 7 µm, 6 µm and 5 µm for 0.25 mms −1 , 0.50 mms −1 and 1 mms −1 and 1 nJ respectively, and 7 µm for 1 mms −1 and 2 nJ. Figure 6 shows the 1st-order efficiency (a) and the refractive index change (b). It was observed that 1st-order efficiency decreased with the increase of the inter-line spacing whereas it increased as the energy delivered on the sample increased, either by increasing the pulse energy or by decreasing the scanning speed. It is worth highlighting that provided a pulse energy, its optimal value was achieved for a scanning speed of 0.50 mms −1 .
Concerning the refractive index change, it decreased with both the increase in the inter-line spacing and the energy delivered on the sample, with values ranging between 2.8 × 10 −3 and 4.00 × 10 −3 . These values were similar to those reported in acrylate and silicone polymers [19,[51][52][53][54].
Polymers 2020, 12, x FOR PEER REVIEW 6 of 9 The magnitude of the refractive index modification, Δn, was determined from the 1st-order efficiency according to the equation [50][51][52]: where λ is the wavelength of the laser light used to assess the gratings (in our case, 633 nm), θ is the incident angle from the normal in the media (in our case, 0°), η is the 1st-order efficiency and b is the grating thickness. An optical microscope was used to measure the thickness of each grating in crosssection view, which were found to be 7 μm, 6 μm and 5 μm for 0.25 mms −1 , 0.50 mms −1 and 1 mms −1 and 1 nJ respectively, and 7 μm for 1 mms −1 and 2 nJ. Figure 6 shows the 1st-order efficiency (a) and the refractive index change (b). It was observed that 1st-order efficiency decreased with the increase of the inter-line spacing whereas it increased as the energy delivered on the sample increased, either by increasing the pulse energy or by decreasing the scanning speed. It is worth highlighting that provided a pulse energy, its optimal value was achieved for a scanning speed of 0.50 mms −1 .
Concerning the refractive index change, it decreased with both the increase in the inter-line spacing and the energy delivered on the sample, with values ranging between 2.8 × 10 −3 and 4.00 × 10 −3 . These values were similar to those reported in acrylate and silicone polymers [19,[51][52][53][54].
Conclusions
Periodic patterns were successfully written on the surface and inside acrylic intra-ocular lenses using femtosecond laser pulses at high-repetition-rate. Patterns were assessed as a function of interline spacing, scanning speed and pulse energy. Compositional and microstructural characterization carried out by SEM-EDX and micro-Raman spectroscopy showed that laser radiation induced photothermal damage and decomposition. Optical characterization showed diffraction patterns under | 5,563.6 | 2020-01-01T00:00:00.000 | [
"Engineering",
"Medicine",
"Physics",
"Materials Science"
] |
Geometric Properties of Certain Classes of Analytic Functions Associated with a q -Integral Operator
: This article presents certain families of analytic functions regarding q -starlikeness and q -convexity of complex order γ ( γ ∈ C \ { 0 } ) . This introduced a q -integral operator and certain subclasses of the newly introduced classes are defined by using this q -integral operator. Coefficient bounds for these subclasses are obtained. Furthermore, the ( δ , q ) -neighborhood of analytic functions are introduced and the inclusion relations between the ( δ , q ) -neighborhood and these subclasses of analytic functions are established. Moreover, the generalized hyper-Bessel function is defined, and application of main results are discussed.
Introduction
Recently, many researchers have focused on the study of q-calculus keeping in view its wide applications in many areas of mathematics, e.g., in the q-fractional calculus, q-integral calculus, q-transform analysis and others (see, for example, [1,2]). Jackson [3] was the first to introduce and develop the q-derivative and q-integral. Purohit [4] was the first one to introduce and analyze a class in open unit disk and he used a certain operator of fractional q-derivative. His remarkable contribution was to give q-extension of a number of results that were already known in analytic function theory. Later, the q-operator was studied by Mohammed and Darus regarding its geometric properties on certain analytic functions, see [5]. A very significant usage of the q-calculus in the context of Geometric Function Theory was basically furnished and the basic (or q-) hypergeometric functions were first We note that D q f (z) −→ f (z) as q −→ 1− and D q f (0) = f (0), where f is the ordinary derivative of f .
In particular, q-derivative of h(z) = z n is as follows : where [n] q denotes q-number which is given as: Since we see that [n] q −→ n as q −→ 1−, therefore, in view of Equation (1), D q h(z) −→ h (z) as q −→ 1−, where h represents ordinary derivative of h.
The q-gamma function Γ q is defined as: which has the following properties: and where t ∈ N and [.] q ! denotes the q-factorial and defined as: Also, the q-beta function B q is defined as: which has the following property: where Γ q is given by Equation (3).
Furthermore, q-binomial coefficients are defined as [17]: where [.] q ! is given by Equation (6). We consider the class A comprising the functions that are analytic in open unit disc U = {z ∈ C : |z| < 1} and are of the form given as: Using Equation (1), the q-derivative of f , defined by Equation (10) is as follows: [n] q a n z n−1 (z ∈ U; 0 < q < 1), where [n] q is given by Equation (2). The two important subsets of the class A are the families S * consisting of those functions that are starlike with reference to origin and C which is the collection of convex functions. A function f is from S * if for each point x ∈ f (U) the linear segment between 0 and x is contained in f (U) . Also, a function f ∈ C if the image f (U) is a convex subset of complex plane C, i.e., f (U) must have every line segment that joins its any two points.
Nasr and Aouf [18] defined the class of those functions which are starlike and are of complex order γ (γ ∈ C\ {0}), denoted by S * (γ) and Wiatrowski [19] gave the class of similar type convex functions i.e., of complex order γ (γ ∈ C\ {0}), denoted by C(γ) as: and respectively. From Equations (12) and (13), it is clear that S * (γ) and C(γ) are subclasses of the class A. The class denoted by S * q (µ) of such q-starlike functions that are of order µ is defined as: Also, the class C q (µ) of q-convex functions of order µ is defined as: For more detail, see [20]. From Equations (14) and (15), it is clear that S * q (µ) and C q (µ) are subclasses of the class A.
Next, we recall that the δ-neighborhood of the function f (z) ∈ A is defined as [21]: In particular, the δ-neighborhood of the identity function p(z) = z is defined as [21]: Finally, we recall that the Jung-Kim-Srivastava integral operator Q α β : A → A are defined as [22]: The Bessel functions are associated with a wide range of problems in important areas of mathematical physics and Engineering. These functions appear in the solutions of heat transfer and other problems in cylindrical and spherical coordinates. Rainville [23] discussed the properties of the Bessel function.
The generalized Bessel functions w ν,b,d (z) are defined as [24]: where ν, b, d, z ∈ C. Orhan, Deniz and Srivastava [25] defined the function ϕ ν,b,d (z) : U → C as: by using the Generalized Bessel function w ν,b,d (z), given by Equation (12). The power series representation for the function ϕ ν,b,d (z) is as follows [25]: where c = ν + b + 1 2 > 0, ν, b, d ∈ R and z ∈ U = {z ∈ C : |z| < 1} . The hyper-Bessel function is defined as [26]: where the hypergeometric function p F q is defined by: using above Equation (23) in Equation (22), then the function J α d (z) has the following power series: By choosing d = 1 and putting α 1 = ν, we get the classical Bessel function In the next section, we introduce the classes of q-starlike functions that are of complex order γ (γ ∈ C\ {0}) and similarly, q-convex functions that are of complex order γ (γ ∈ C\ {0}), which are denoted by S * q (γ) and C q (γ), respectively. Also, we define a q-integral operator and define the subclasses S q (α, β, γ) and C q (α, β, γ) of the class A by using this q-integral operator. Then, we find the coefficient bounds for these subclasses.
The respective definitions of the classes S * q (γ) and C q (γ) are as follows: The function f ∈ A will belong to the class S * q (γ) if it satisfies the following inequality: Definition 2. The function f ∈ A will belong to the class C q (γ) if it satisfies the following inequality: , then the subclasses S * q (γ) and C q (γ) give the sub classes S * q (µ) and C q (µ), respectively. (ii) Using the fact that lim q→1− D q f (z) = f (z), we get that lim q→1− S * q (γ) = S * (γ) and lim q→1− C q (γ) = C(γ). Now, we introduce the q-integral operator χ α β,q as: It is clear that χ α β,q f (z) is analytic in open disc U. Using Equations (4), (5) and (7)-(9), we get the following power series for the function χ α β,q f in U: Remark 2. For q −→ 1−, Equation (29), gives the Jung-Kim-Srivastava integral operator Q α β , given by Equation (18).
Remark 3.
Taking α = 1 in Equation (28) and using Equations (4), (5) and (9), we get the q-Bernardi integral operator, defined as [27]: Next, in view of the Definitions 1 and 2 and the fact that (z) < |z|, we introduce the subclasses S q (α, β, γ) and C q (α, β, γ) of the classes S * q (γ) and C q (γ), respectively, by using the operator χ α β,q , as: The function f ∈ A will belong to S q (α, β, γ) if it satisfies the following inequality: Definition 4. The function f ∈ A will belong to C q (α, β, γ) if it satisfies the following inequality: Now, we establish the following result, which gives the coefficient bound for the subclass S q (α, β, γ): where Γ q and [n] q are given by Equations (3) and (2), respectively.
Proof. Let f ∈ A, then using Equations (11) and (29), we have [n] q a n z n If f ∈ S q (α, β, γ), then in view of Definition 3 and Equation (33), we have which, on simplifying, gives [n] q − 1 a n z n−1 Now, using the fact that (z) < |z| in the Inequality (34), we get [n] q − 1 a n z n−1 Since χ α β,q f (z) is analytic in U, therefore taking limit z → 1-through real axis, Inequality (35), gives the Assertion (32).
Also, we establish the following result, which gives the coefficient bound for the subclass C q (α, β, γ): Lemma 2. If f is an analytic function such that it belongs to the class C q (α, β, γ) and |γ| ≥ 1 then where Γ q and [n] q are given by Equations (3) and (2), respectively.
In the next section, we define (δ, q)-neighborhood of the function f ∈ A and establish the inclusion relations of the subclasses S q (α, β, γ) and C q (α, β, γ) with the (δ, q)-neighborhood of the identity function p(z) = z.
Application
First, we define the generalized hyper-Bessel function w c,b,α d (z) as : where ν, b, d, z ∈ C.
Discussion of Results and Future Work
The concept of q-derivatives has so far been applied in many areas of not only mathematics but also physics, including fractional calculus and quantum physics. However, research on q-calculus is in connection with function theory and especially geometric properties of analytic functions such as starlikeness and convexity, which is fairly familiar on this topic. Finding sharp coefficient bounds for analytic functions belonging to Classes of starlikeness and convexity defined by q-calculus operators is of particular importance since any information can shed light on the study of the geometric properties of such functions. Our results are applicable by using any analytic functions.
Conclusions
In this paper, we have used q-calculus to introduce a new q-integral operator which is a generalization of the known Jung-Kim-Srivastava integral operator. Also, a new subclass involving the q-integral operator introduced has been defined. Some interesting coefficient bounds for these subclasses of analytic functions have been studied. Furthermore, the (δ, q)-neighborhood of analytic functions and the inclusion relation between the (δ, q)-neighborhood and the subclasses involving the q-integral operator have been derived. The ideas of this paper may stimulate further research in this field. | 2,497.6 | 2019-05-27T00:00:00.000 | [
"Mathematics"
] |
Jupiter analogues and planets of active stars
Combined results are now available from a 15 year long search for Jupiter analogues around solar-type stars using the ESO CAT + CES, ESO 3.6 m + CES, and ESO 3.6 m + HARPS instruments. They comprise planet (co-)discoveries ( Hor and HR 506) and confirmations (three planets in HR 3259) as well as non-confirmations of planets (HR 4523 and Eri) announced elsewhere. A long-term trend in Ind found by our survey is probably attributable to a Jovian planet with a period >30 yr, but we cannot fully exclude stellar activity effects as the cause. A 3.8 year periodic variation in HR 8323 can be attributed to stellar activity.
INTRODUCTION
In 1992 a readial velocity (RV) search program for extrasolar planets around solar-type stars was begun (Kürster et al. 2000;Endl et al. 2002) with the ESO 1.4 m CAT telescope and the Coudé Echelle spectrograph (CES) together with its Long Camera (LC) which provided a resolving power R = 100, 000.In 1999 the Long Camera was replaced by the R = 230, 000 Very Long Camers (VLC) fibre fed to the ESO 3.6 m telescope with which the survey was continued up to the time when some temporal overlap with the higher precision ESO HARPS spectrograph (R = 130, 000, wide wavelength coverage, also at the ESO 3.6 m telescope) could be attained which was finally employed for the survey from 2003-2007.Archival HARPS data from other programs were added extending our time baseline into 2009 for a few stars.
A total of 31 bright (V < 6) solar-type stars was monitored throughout the whole 15-year survey.The main goal of the long-term studies of this sample was the search for Jupiter analogues.Despite the fact that the stars in the sample were selected for low activity levels, the more active stars turned out to be the most interesting targets.This contribution summarizes the most important results.The survey and its results for the whole sample are described in much more detail in Zechmeister et al. (2013).a e-mail<EMAIL_ADDRESS>is an Open Access article distributed under the terms of the Creative Commons Attribution License 2.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
RESULTS
• (Co-)discoveries.Fig. 1 shows the RV time series for the G0V active star Hor revealing a planetary companion with a minimum mass of m sin i = 2.48 M Jup and an orbital period of P = 307.2d. this period and also our HARPS data in which we do not find any evidence for the signal.Therefore, we cannot confirm this planet.Fig. 5 shows another case in which we cannot confirm a planet announced earlier.It provides our RV time series for the K2V star Eri with the orbit by Hatzes et al. (2000) overplotted.This orbit corresponds to a planet with a minimum mass of 0.86 M Jup , a period of 6.9 yr, and an eccentricity of 0.6.As the comparison shows, our data alone do not provide unequivocal evidence for the planet.• Companions vs. stellar activity.Fig. 6 shows our RV time series for the K4V star Ind along with a linear fit.As shown in the figure, this trend exceeds the expected RV increase due to the RV secular acceleration which is quite substantial for this rapidly moving nearby star.The trend is likely to be produced by a long-period (P > 30 yr) sub-stellar (m sin i > 0.97 M Jup ) companion.
Hot Planets and Cool Stars
The HARPS data make it possible to correlate the RV measurements with activity indicators, i.e. the index for chromospheric CaII emission log R HK , or the FWHM and the bisector (BIS) of the average photospheric line profile (the profile of the cross-correlation with a mask employed to determine the RV).If activity is the cause for a (spurious) RV signal, then a correlation of the RV data with log R HK and also with FWHM and BIS is expected.For Ind such a correlation exists for log R HK and FWHM, but not for BIS which could indicate that the star also has a long-term trend in its activity level, but its correlation with the RV data is just a coincidence.This is different for the G0V star HR 8323 whose RV time series is shown in Fig. 7.We find a sinusoidal variation with a period of 3.8 yr, but in this case all three activity indicators (including BIS) are correlated with the RVs so that we exclude a companion.Most likely, the periodic variation is due to a stellar activity cycle.
CONCLUSION
While our long-term RV survey of 31 solar-type stars has not found a genuine Jupiter analogue at ≈ 5 AU separation from its host star, we have contributed discoveries of closer-in Jupiter-type planets and a possible substellar object at wide separation as well as confirmations and non-confirmations of planets found elsewhere.We have addressed the influence of stellar activity on RV signals and how it affects their interpretation.
Figure 1 .
Figure 1.Left: RV time series for Hor combined with data from the Anglo-Australian Telescope (AAT; Butler et al. 2001) and from the 1.2 m EULER-Telescope + CORALIE spectrograph (Naef et al. 2001).Right: RVs phase folded with the orbital period of P = 307 d.Bottom panels show the residuals from the Keplerian fit.
Figure 2 .
Figure 2. Left: RV time series for HR 506 combined with AAT data (Butler et al. 2006) and CORALIE data (see http://obswww.unige.ch/∼udry/planet/hd10647.html).Right: RVs phase folded with the orbital period of P = 995 d.Bottom panels show the residuals from the Keplerian fit.
Figure 4 .Figure 5 .
Figure 4. RV data for HR 4523 phase folded with a period of 122.1 d.The dashed curve is an orbital solution proposed by Tinney et al. (2011) based on the UCLES data.HARPS RV data are shown for comparison.The do not support the planetary signal.
Figure 6 .
Figure 6.RV time series for Ind. Model curves are shown for a constant RV (i.e.considering only the secular acceleration of 1.86 ms −1 yr −1 ; dashed line) as well as for a linear trend (solid line).The offset in the data (and the fit) occuring around BJD 2,451,200 is of instrumental nature.
Figure 7 .
Figure 7. RV time series for HR 8323 together with a sinusoidal model.As in Fig. 6, the offset occuring around BJD 2,451,200 is of instrumental nature. | 1,482.6 | 2013-04-01T00:00:00.000 | [
"Physics",
"Geology"
] |
Automated metabolic reconstruction for Methanococcus jannaschii
We present the computational prediction and synthesis of the metabolic pathways in Methanococcus jannaschii from its genomic sequence using the PathoLogic software. Metabolic reconstruction is based on a reference knowledge base of metabolic pathways and is performed with minimal manual intervention. We predict the existence of 609 metabolic reactions that are assembled in 113 metabolic pathways and an additional 17 super-pathways consisting of one or more component pathways. These assignments represent significantly improved enzyme and pathway predictions compared with previous metabolic reconstructions, and some key metabolic reactions, previously missing, have been identified. Our results, in the form of enzymatic assignments and metabolic pathway predictions, form a database (MJCyc) that is accessible over the World Wide Web for further dissemination among members of the scientific community.
Introduction
The first genome sequence became publicly available in 1995; since then, more than 100 species have been completely sequenced and the data submitted to public repositories (Janssen et al. 2003).Interest in the use of computational methods for revealing the mechanisms of cellular processes has increased because of: (1) the ability of such methods to handle large amounts of genome data and thus produce integrated views of functional processes, (2) the capacity for rapid and objective processing of data, and (3) the suitability of these methods for analysis of experimentally non-tractable organisms that cannot be cultured in the laboratory.
Prediction of the metabolic complement, or the entire set of metabolic reactions, from the genome sequence-a process known as metabolic reconstruction-has attracted significant interest (Karp and Paley 1994).Systems such as MetaCyc (Karp et al. 2002b), KEGG (Kanehisa et al. 2002) and WIT (Overbeek et al. 2000) offer a reference knowledge base that associates functional features of individual genes (Enzyme Commission (EC) numbers, enzyme names) with specific metabolic reactions and pathways.Given an uncharacterized genome sequence, such systems deduce a map of the entire metabolic complement of the target organism, providing an excellent basis for further computational or experimental analysis of cellular metabolism.
The advantage of computational metabolic network synthesis is that the assumed "completeness" of a genome (in terms of the sequences it encodes) or biochemical pathway (in terms of the reactions it includes) may form the basis for further functional assignment beyond sequence similarity by exploiting contextual constraints (Karp et al. 1996, Bono et al. 1998).Current computational analyses of genome sequences, which use a combination of sequence similarity and comparative genomics, are combined with high-throughput experimental methods to progress from individual functional assignments to integrated descriptions of cellular metabolic networks (Eisenberg et al. 2000).
In that sense, metabolic reconstruction for entire genomes not only compiles existing knowledge, but also assists with the analysis and delineation of protein function through the association of enzyme properties with biochemical pathways.Coupled with the ever-increasing amount of experimentally characterized protein sequences in the databases, it is possible to generate accurate metabolic maps using computational methods.The ultimate goal is to facilitate experimental analysis by organizing all available genomic information for a specific organism into an integrated, coherent and homogeneous resource (Tsoka and Ouzounis 2000).Formation of protein complexes is implicit in the identification procedure and, when multiple proteins are assigned to a single EC number and reaction, they are assumed to participate in the same reaction step.This procedure flags possible participation of proteins as subunits of the same enzymatic complex, which can in turn be established by literature searches and annotation.
The genome of Methanococcus jannaschii was the first archaeal genome to be sequenced (Bult et al. 1996).The sequence data have been used to support the hypothesis that the Archaea constitute a domain of life separate from Bacteria and Eukarya-a hypothesis proposed several years before the genome sequence became available (Woese andFox 1977, Graham et al. 2000b).Archaea have attracted significant interest because of their extremophilic properties and their evolutionary relationships to members of the other two domains.De-spite growth in the amount of archaeal genome sequence data, the domain is still underrepresented in protein sequence databases.
Approximately 40% of the M. jannaschii genome is specific either to this organism or to the archaeal kingdom (Graham et al. 2001b); metabolic reconstruction, therefore, is particularly challenging.The aim of this paper is to evaluate an automated procedure for detection of the reaction capacities and reconstruction of the metabolic pathways present in M. jannaschii.Overall, computational analyses of M. jannaschii are valuable because of this microorganism's evolutionary position and distinctive lifestyle.The availability of a well-annotated metabolic complement of M. jannaschii will facilitate comparative analyses of archaeal metabolism and evolution (Kyrpides et al. 1999) across different organisms and domains of life.Our results are compared with previous metabolic reconstructions for this species and are available to the scientific community for further analysis (accessible at: http://maine.ebi.ac.uk:1555 /server.html).
Microorganism
Methanococcus jannaschii is a hyperthermophilic methanogenic archaeon (Bult et al. 1996).It was isolated from surface material collected at a "white smoker" chimney at a depth of 2600 m in the East Pacific Rise near the western coast of Mexico.Its extreme habitat (48-94 °C, 200 atm) suggests that it possesses adaptations for growth at high temperature, high pressure, and moderate salinity.It is an autotrophic methanogen capable of nitrogen fixation (Wolfe 1992, Deppenmeier 2002).Cells are irregular cocci possessing polar bundles of flagella.The cell envelope consists of a cytoplasmic membrane and a protein surface layer (Bult et al. 1996).
Automated metabolic reconstruction
To establish the functional attributes of M. jannaschii protein sequences, the entire genome was searched against the nonredundant protein database (NRDB version of October 2002, 999845 entries) using BLAST (e-value 10 -10 ) (Altschul et al. 1997), filtered for composition bias using CAST (Promponas et al. 2000).Among the 1792 sequences that make up the M. jannaschii genome, sequence similarity searches against the non-redundant database identified a total of 587 proteins likely to have enzymatic activity, as indicated by the -ase suffix in the function annotation string.These proteins corresponded to 376 potential enzymes (21% of the total number of sequences) and were assigned to any of 246 unique EC numbers.Functional annotations for each protein, including EC numbers where available, and the genome sequence were used as inputs to the Pathway Tools software (Karp et al. 2002a), which incorporates the PathoLogic algorithm for inference of metabolic pathways of a genome given its sequence and functional annotations (Paley and Karp 2002).
The PathoLogic protocol uses MetaCyc, a manually curated collection of metabolic pathways and reactions from a multi-tude of organisms (Karp et al. 2002b), as a reference database upon which reconstruction for the query genome is performed.Given the functional annotations of the genome, the system associates EC numbers and enzyme names with reference reactions, to infer all reactions present in the organism of interest.After the reaction detection stage, the reaction-to-pathway associations of the reference state are used to identify candidate metabolic pathways in the target organism.
For each candidate pathway, three parameters are evaluated: the number of reactions in the reference pathway (X), the number of reactions identified in the query genome (Y), and the number of these reactions that are shared across more than one pathway (Z).The values for X, Y and Z provide a measure of confidence for the existence of the pathway in the reconstruction outcome.A pathway is considered to be present if, for example, at least half of the pathway reactions are found in the organism of interest and not all of them are shared.A number of pathways defined in the reference set are highly related to one another (termed variants); in this case, PathoLogic selects from among these pathways the one for which a unique enzyme has been identified.We predicted 113 pathways as well as an additional 17 pathways (termed super-pathways) each comprising several other smaller pathways.
The metabolic reconstruction as implemented by Pathway Tools is largely performed automatically; however, some manual intervention is required at the final stage, namely in the following two steps: (1) the user can inspect instances of enzymes identified by the suffix -ase for which PathoLogic has not identified an associated reaction (probable enzymes), and manually associate them with a specific reaction; and (2) the user may eliminate pathways for which there is little evidence.Because one of the main purposes of this study was hypothesis-generation of the metabolic complement of M. jannaschii, we aimed to obtain the maximum possible number of pathways and thus decided to retain all predicted pathways.
Pathway Tools employs frame representation technology (Karp and Paley 1994), a type of object-oriented database schema, so the terms object and frame are used interchangeably here.To construct the metabolic database for M. jannaschii, whenever an association between a genome protein and a MetaCyc pathway is made (through the EC number or the name-matching procedure), the relevant reaction and pathway frames corresponding to that pathway are copied to the genome database.Database objects are also created for each genetic element and gene, with appropriate pointers between each gene and the protein product it encodes, as well as between each product and the reactions that the product catalyzes.The knowledge base for M. jannaschii (MJCyc) built by this process contains 1792 gene and protein frames, 130 pathway frames, 609 reaction frames and 461 enzymatic-reaction frames.A summary of the reconstruction assignments is shown in Table 1.
Following the manual curation steps that augment the initial input information, 436 proteins from among the total number of gene products (1792) were associated with a reaction.Of these, 418 contained EC information and mapped to a total of 266 unique EC numbers.In most cases there was a one-to-one correspondence between proteins and their EC number assignments; however, we have identified six proteins with more than one EC number, and 67 EC numbers that map to more than one protein (from 2 to a maximum of 8 proteins).Proteins that are assigned more than one EC number may be paralogous enzymes, whereas proteins that share the same EC number may be subunits of the same enzyme complex.For example, the formylmethanuran dehydrogenase enzyme that catalyzes the first step of methanogenesis from CO 2 (EC 1.2.99.5) contains seven subunits (see http://maine.ebi.ac.uk:1555/MJNRDB2 /new-image?type=REACTION-IN-PATHWAY&object=FORM YLMETHANOFURAN-DEHYDROGENASE-RXN).
The 609 reaction frames comprising the pathway database for M. jannaschii correspond to all of the reactions present in the 113 predicted metabolic pathways.The group of 609 reaction frames consists of 297 reactions that have been identified in M. jannaschii and 312 reactions that are currently not associated with any gene product.Of the 297 reactions that are present in M. jannaschii, 231 have been assigned to a pathway, and 66 reactions remain unassigned.
Analysis of the functional attributes of all 609 reactions (including reactions currently absent) revealed that 541 reactions map to a pathway, whereas 68 reactions remain unassigned.Furthermore, 511 reactions have at least one EC number and map to a total of 467 unique EC numbers, whereas the remaining 98 reactions have no associated EC number.The 461 enzymatic-reaction and 609 reaction objects correspond to 436 unique enzymes, due to enzymes involved in more than one enzymatic reaction.Finally, the metabolic reconstruction predicts the existence of 510 compounds within the metabolic network.
Pathways in which all reactions have been identified represent the most strongly supported assignments and are summarized in Table 2.The majority of pathways found to be complete in M. jannaschii are involved in either amino acid biosynthesis or degradation.These pathways are highly conserved across different taxonomic domains (Peregrin-Alvarez et al. 2003) and thus represent cases where pathway prediction has been particularly successful.
Also predicted in its entirety is the methanogenesis pathway.In contrast with the amino acid biosynthesis pathways, methanogenesis is a highly specific biochemical conversion cascade present in only a few organisms capable of producing methane from CO 2 .Nevertheless, this pathway is well charac-terized biochemically (Ferry 1992, Deppenmeier 2002), and because of its specialized nature, has not diverged much.Other pathways related to energy metabolism, such as glycolysis and variants of the TCA cycle, are also well characterized and highly conserved and are thus strongly predicted (Tsoka and Ouzounis 2001).
Comparison with previous metabolic reconstructions
Previously, the most recent and most extensive analysis of the metabolic complement of M. jannaschii was the reconstruction reported by Selkov et al. (1997).This work was part of a wider database project for metabolic analysis (Overbeek et al. 2000), which has placed particular emphasis on M. jannaschii reconstruction (Selkov et al. 1997).The approach consisted of identification of enzymatic functions through sequence similarity searches and analysis of biochemical and phenotypic data (Selkov et al. 1997).Although sequence similarity searches routinely serve as the basis for computational assignments of protein function, and the results can be subjected to closer scrutiny by different researchers, phenotypic characteristics used by authors have generally not been made explicitly available and are therefore not reproducible.In contrast, MJCyc contains only computational function assignments.These assignments can be supplemented with experimental data as it becomes available, and amended as necessary.
The present reconstruction (MJCyc), with 436 proteins assigned to 266 EC numbers and 113 pathways, may be a significant improvement upon the previous metabolic reconstruction, which consisted of 245 proteins assigned to a total of 171 EC numbers and 97 pathways (Selkov et al. 1997).The two re- constructions had 184 assignments in common; 61 enzymes were found only by Selkov et al. (1997) and 252 assignments were unique to MJCyc (the agreement between annotations is shown schematically in Figure 1).There were 149 EC numbers common to the two annotation sets, whereas 22 EC numbers were unique to the previous reconstruction and 117 EC numbers were identified only in MJCyc.It is important to emphasize that the majority of the new assignments in MJCyc are a result of updates of database records with proteins characterized since publication of the first metabolic reconstruction for this species (Selkov et al. 1997).Nevertheless, the use of Pathway Tools assists in the elimination of problems such as false positives that arise from weak sequence similarities to the query proteins, paralogous families or unclear function assignments in database entries.
In addition to identifying new reactions in metabolic pathways known to be present in M. jannaschii, we were able to detect enzymes involved in sulfate assimilation (phosphoadenosine phosphosulfate reductase; EC 1.8.4.8;MJ0406) and methionine synthesis from homocysteine (EC 2.1.1.14;MJ1473), as well as most reactions involved in cobalamin biosynthesis, the mevalonate pathway, and synthesis of polyamines.A complete list of pathway assignments can be found in the web-accessible MJCyc database.
Selkov et al. also identified a total of 65 EC numbers missing from the 97 pathways (i.e., these enzyme activities would complete the 97 reported pathways).We were able to identify several of these missing activities, some of which are reported in Table 3.Some of these activities are associated with reactions of key metabolic pathways such as the glycolytic cascade.These newly discovered enzymes have been identified by direct biochemical experiments either in M. jannaschii or in other related archaeal species.Some of these enzymes are discussed below to illustrate that M. jannaschii (and possibly archaea in general) employ enzymes different from those of bacteria and eukarya to catalyze some well known reactions, and that these cases may represent complex patterns of evolutionary divergence from a common ancestral sequence.
First, glycolytic and gluconeogenic reactions in M. jannaschii are catalyzed by phosphoglycerate mutase.This enzyme is encoded by sequences MJ0010 and MJ1612 and catalyzes the conversion of 3-phosphoglycerate to 2-phosphoglycerate.Generally, two structurally distinct forms of the enzyme are known: a cofactor-dependent and a cofactor-independent form.In archaea, phosphoglycerate mutase is only distantly related to the cofactor-independent form of the enzyme (van der Oost et al. 2002).The gene encoding the enzyme belongs to a family that is widely distributed among archaea and some bacteria, but is significantly different in sequence in eukarya and most bacteria.Phylogenetic analyses have indicated that the family arose before divergence of the archaeal lineage (Graham et al. 2002a).
Fructose-1,6-biphosphate aldolase is another archaeal enzyme whose sequence differs markedly from those of eukaryal and most bacterial Class I and Class II aldolases.It has been
Table 3.Reactions identified in this analysis that were previously reported as missing (Bult et al. 1996, Selkov et al. 1997) suggested that the Class I aldolase genes found in several archaea, including M. jannaschii, separated very early from the gene lineages of the classical Class I and Class II aldolases, and that subsequent evolutionary events involved gene duplications with subsequent differential loss and probably some late lateral gene transfer (Siebers et al. 2001).Twenty-three of the enzyme function assignments reported by Selkov et al. (1997) were not reproducible by sequence similarity searches.Although the origin of annotation cannot be traced, it is likely that in these cases the assignment was based on phenotypic rather than genome sequence data.For 17 of these 23 cases, the EC number assigned by Selkov et al. (1997) has been assigned in our reconstruction to a different gene product, so no significant gain in pathway prediction would be expected if these annotations were obtained.
Discussion
Despite the continuously increasing variety and sophistication of high-throughput genome analysis methods, sequence similarity remains the most widely used means of assessing genome-wide features.By implication, successful reconstruction of the entire metabolic complement of a particular species depends on (1) the specificity and sensitivity of database search algorithms through which gene products can be associated with particular reactions according to their similarity to known enzyme sequences, and (2) the phylogenetic distance from previously characterized species that form the reference set of reaction to pathway associations.For a species that is phylogenetically distant from well characterized species, it is likely that both the annotation and the pathway detection procedure will be particularly challenging.However, even in such cases, prediction of metabolic pathways for the entire genome can significantly enhance contextual analysis by directing experimental and computational approaches (e.g., remote homology searches) toward enzymatic functions that are expected to be present but have not yet been detected.Setting aside the possibility of erroneous gene prediction, these missing activities may be attributed to the use of unique biochemical routes by the target organism that are significantly different from the reference pathways used for the metabolic reconstruction.We have predicted 312 as-yet-undiscovered (missing) reactions in 113 predicted metabolic pathways that may serve as potential targets for functional genomics experiments, and which illustrate how metabolic reconstruction can enhance genome annotation.
Having identified enzymatic activities that have not yet been attributed to a particular gene through metabolic reconstruction, subsequent application of computational protocols for function assignment, which rely on contextual information rather than sequence similarity, can identify candidate genes that perform these previously absent functions.For example, all reactions in chorismate biosynthesis (implicated in biosynthesis of aromatic amino acids) have been well characterized in model genomes (De Feyter 1987).In archaea, most enzymes in this pathway have been identified through sequence homology to bacteria and eukaryotes, with the excep-tion of shikimate kinase (EC 2.7.1.71),which catalyzes the fifth of the seven reactions in this pathway.Analysis of gene clustering on the M. jannaschii chromosome (Overbeek et al. 1999) ascribed the shikimate kinase activity to gene MJ1440, and this was later verified experimentally (Daugherty et al. 2001).Sequence and structural analyses have revealed that archaeal shikimate kinase has no sequence similarity to any bacterial or eukaryotic shikimate kinases and is distantly related to members of the GHMP-kinase superfamily (Daugherty et al. 2001).This was the first time that any member of this family or fold type was associated with this particular biochemical activity, revealing complex patterns of protein and metabolic network evolution.
The metabolic reconstruction in MJCyc may facilitate similar scrutiny of other currently absent biochemical activities or entire metabolic pathways (for example, cysteine biosynthesis has been reported to be absent from M. jannaschii and Methanosarcina barkeri (Kitabatake et al. 2000)).The advantage of automated approaches to metabolic reconstruction is that the results form a database of functional properties that is reproducible and directly comparable across different species and time points (Karp et al. 1996).We plan to use the PathoLogic protocol to obtain similar metabolic reconstructions for more archaeal species, in order to shed light on archaeal metabolism through comparative analyses.
Another important feature of the MetaCyc metabolic pathway knowledge base and all metabolic reconstructions generated with the PathoLogic protocol is the availability of structured and flexible query capabilities that are particularly well suited to large-scale computational analyses (Paley and Karp 2002).These features are enabled by the object-oriented nature of the knowledge base (Karp and Paley 1994) and by the formal ontology specifying the relationships between biological objects of the knowledge base (Karp 2000).Two examples of specific biological queries are provided here.First, we have classified all M. jannaschii pathways according to their wider biochemical role by retrieving all 130 pathways and assigning them to one of 17 pathway functional classes (e.g., energy metabolism, amino acid biosynthesis, etc.).The result is presented schematically in Figure 2. A second example is the determination of whether each reaction is present in M. jannaschii (i.e., if it is one of the 312 missing reactions); if it is not, the associated pathway is retrieved.The result, shown in Figure 3, indicates the number of missing reactions in each pathway.
There are many examples of metabolic rarities in M. jannaschii.First, it has been shown that M. jannaschii has a dual specificity tRNA synthetase: MJ1238 encodes both prolyl (EC 6.1.1.15)and cysteinyl-tRNA synthetase (EC 6.1.1.16)activities (Bunjun et al. 2000, Lipman et al. 2000, Stathopoulos et al. 2000); the cysteinyl-tRNA synthetase activity had not been previously identified (Selkov et al. 1997).Second, it has been shown that polyamines are present at high concentrations in M. jannaschii (Graham et al. 2000a, Kim et al. 2000, Sekowska et al. 2000).MJCyc includes four of the five enzymes in the polyamine biosynthesis pathway (MJ0315, MJ0316, MJ0309 and MJ0313), with ornithine decarboxylase currently missing.S-Adenosylmethionine (AdoMet) decarboxylase (MJ0315) activity in M. jannaschii was difficult to identify because of a high degree of divergence of the sequence encoding this enzyme from the sequences of other enzymes already known to possess this activity.However, since detection of this activity, it has been possible to show that spermidine biosynthesis in Gram-positive bacteria and in archaea is similar to the related pathway in Gram-negative bacteria and in eukarya (Sekowska et al. 2000).Finally, MJCyc provides strong evidence for the presence of the pathway for biosynthesis of Coenzyme M, an important cofactor in methanogenesis: the first two reactions have been identified in M. jannaschii and attributed to this pathway (MJ0255, 2r-phospho-3-sulfolactate synthase and MJ1140, 2-phosphosulfolactate phosphatase) (Graupner et al. 2000, Graham et al. 2001a).
In conclusion, the metabolic reconstruction of the entire genome of M. jannaschii is presented, and significant improve-ments have been noted compared with previous analyses.Comparative analyses of metabolic reconstruction aid in the refinement of these methods and contribute to a deeper understanding of the process of genome annotation (Paley and Karp 2002).We have illustrated the potential of this approach for providing a comprehensive view of what is known about the metabolism of a given organism, as well as for supporting the search for unknown metabolic components.Given that functional genomics projects aim to increase the coverage of functional types encoded in genomes, the iterative procedure of continuously improving genome annotation and metabolic pathway prediction is expected to yield complete metabolic maps in the future.
Figure 2 .
Figure 2. Classification of predicted metabolic pathways according to the type of biochemical function they perform.Shades of blue, red, green and yellow indicate central metabolism, biosynthetic functions, degradation pathways and super-pathways, respectively.
Figure 3 .
Figure 3. Frequency distribution of missing reactions with respect to pathways.This figure demonstrates an example of use of the MJCyc database.Overall, 312 missing reactions were linked to a total of 91 pathways.Details are available at http://maine.ebi.ac.uk:1555/server.html.
Table 1 .
Overall outcome of metabolic reconstruction for Methanococcus jannaschii.
Table 2 .
Pathways predicted to be complete in Methanococcus jannaschii.The total number of reactions in each pathway is shown.
. References indicate the source of annotation. | 5,490.8 | 2004-10-01T00:00:00.000 | [
"Biology"
] |
A genetic variant of the Wnt receptor LRP6 accelerates synapse degeneration during aging and in Alzheimer’s disease
Synapse loss strongly correlates with cognitive decline in Alzheimer’s disease (AD), but the underlying mechanisms are poorly understood. Deficient Wnt signaling contributes to synapse dysfunction and loss in AD. Consistently, a variant of the LRP6 receptor, (LRP6-Val), with reduced Wnt signaling, is linked to late-onset AD. However, the impact of LRP6-Val on the healthy and AD brain has not been examined. Knock-in mice, generated by gene editing, carrying this Lrp6 variant develop normally. However, neurons from Lrp6-val mice do not respond to Wnt7a, a ligand that promotes synaptic assembly through the Frizzled-5 receptor. Wnt7a stimulates the formation of the low-density lipoprotein receptor-related protein 6 (LRP6)–Frizzled-5 complex but not if LRP6-Val is present. Lrp6-val mice exhibit structural and functional synaptic defects that become pronounced with age. Lrp6-val mice present exacerbated synapse loss around plaques when crossed to the NL-G-F AD model. Our findings uncover a previously unidentified role for Lrp6-val in synapse vulnerability during aging and AD.
Figure S2 .
Figure S2.Characterisation of homozygous Lrp6-val knock-in mice.A) Images of WT and Lrp6-val mice at 7 months showed that these mice developed normally and have no visible external abnormalities.B) Adult Lrp6-val mice had the same weights as control WT mice.Weights of male mice were measured at 4-8 months of age.WT N = 9.Lrp6-val N = 10.Unpaired T-test.C) Quantitative RT-PCR analyses of Lrp6 mRNA levels in the hippocampus of WT and Lrp6-val mice at 3-4 months of age showed no changes in the expression of Lrp6.WT N = 5, Lrp6-val N = 6.Unpaired T-Test.D) Hippocampal LRP6 protein levels in WT and Lrp6-val mice at 4 months.E) No changes in LRP6 levels (normalised to β-actin) were observed between WT or Lrp6-val mice.N = 3 per genotype.Unpaired T-test.F) SIM images of excitatory synapses containing LRP6 (red) from WT and Lrp6-val hippocampal neurons showing vGlut1 (blue) and PSD-95 (green).Scale bar = 0.2 μm.G) LRP6 receptor exhibited similar pre-and post-synaptic localisation in WT and Lrp6-val mice. 2 independent cultures, 6-8 images per culture.Unpaired T-tests.Data are represented as mean SEM.
Figure S3 .
Figure S3.Basal synaptic transmission is unaffected in Lrp6-val mice at 7-9 months and vesicle recycling and initial fusion efficiency are unchanged in Lrp6-val mice at 12-14 months.A) Representative traces of post-synaptic currents elicited at different stimulation intensities.B) No differences were detected in the input-output curves at 7-9 months.N = 12-14 cells recorded from 4-5 animals per genotype.Repeated-measures one-way-ANOVA.C) Graphs display the recycling rate and initial fusion efficiency obtained from all cells.No differences were observed in Lrp6-val mice when compared to WT mice at 12-14 months.N = 12 cells from 4 animals per genotype.Unpaired Student's T-test.Data are represented as mean SEM.
Figure S4 .
Figure S4.Lrp6-val mice do not display neuronal loss at 16-18 months.A) Confocal images of the hippocampus of WT and Lrp6-val mice labelled with DAPI (blue) and NeuN (red).Scale bar = 150 μm.Insets show higher magnification images of the CA1 region.Scale bar = 100 μm.B) Quantification revealed no differences in the percentage of NeuN positive cells between WT and Lrp6-val mice.WT N = 6, Lrp6-val N = 5.Unpaired T-test.Data are represented as mean SEM.
Figure S7 .
Figure S7.Lrp6-val neurons do not respond to Wnt7a or Wnt3a to promote synapse formation.A) Diagram showing the isolation of primary neurons from WT and homozygous Lrp6-val mice to evaluate synapse number or surface levels of the receptor by surface biotinylation.Top: neurons were exposed to recombinant Wnt7a or Wnt3a for 3 hours prior to analyses of synapses by confocal microscopy.Bottom: neurons were incubated with biotin (red circles).Biotin bound surface proteins (Green and blue shapes) were pulled down with streptavidin-agarose beads (blue circles).B) Wnt7a (100ng/ml) increased the number of synapses in WT neurons, but Wnt7a had no effect on Lrp6-val neurons.N = 3 independent cultures, 8-11 images per culture.Two-way-ANOVA with Games-Howell post hoc test.* p < 0.05.C) Wnt3a increased synapse number in WT neurons but Lrp6-val neurons failed to respond to Wnt3a.N = 3 independent cultures, 8-10 images per culture.Two-way-ANOVA with Games-Howell post hoc test.* p < 0.05.D) Surface biotinylation analyses of LRP6 were performed on neurons isolated from WT and homozygous Lrp6-val mice.E) No differences in the ratio of surface to total LRP6 were observed.N = 3 independent cultures.Mann-Whitney.Data are represented as mean SEM.
Figure S8 .
Figure S8.The surface localisation of LRP6 is not affected in cells expressing LRP6-Val.A) Schematic of surface biotinylation analyses of HeLa cells.B) Western blot analyses of LRP6 levels following surface biotinylation of cells expressing Fz5-HA and WT LRP6 or LRP6-Val and treated with recombinant Wnt7a.C) No differences in the ratio of surface to total LRP6 were observed after Wnt7a treatment.N = 3 independent cultures.Two-way-ANOVA with Games-Howell post hoc test.D) Western blot analyses of Fz5-HA following surface biotinylation of HeLa cells expressing Fz5-HA and WT LRP6 or LRP6-Val and treated with recombinant Wnt7a.We observed a higher molecular weight of Fz5-HA at the surface, which is probably due to changes in glycosylation of this receptor (59).E) The ratio of surface to total Fz5-HA were unchanged after Wnt7a treatment.Both bands observed for HA were quantified.N = 3 independent cultures.Twoway-ANOVA with Games-Howell post hoc test.F) Confocal images of HeLa cells expressing GFP, WT LRP6 and Fz5-HA or GFP, LRP6-Val and Fz5-HA.Total LRP6 (red) and phosphorylated LRP6 (pLRP6) (grey).Scale bar = 2 μm.Data are represented as mean SEM.
Figure S11 .
Figure S11.Lrp6-val does not affect synapse number in NL-G-F mice at 2 months or 10 months.A) Confocal images of the CA1 SR of WT, Lrp6-val, NL-G-F and NL-G-F;Lrp6-val mice at 2 months showing Bassoon (green) and Homer1 (red) puncta.Scale bar = 3.8 μm.Insets show higher magnification images of synapses.Scale bar = 2 μm B) Quantification of Bassoon and Homer1 puncta and synapse number (co-localised puncta).No differences are detected between any of the different genotypes.WT N = 5, Lrp6-val N = 4, NL-G-F N = 5, NL-G-F; Lrp6-val N = 6.One-way-ANOVA with Tukey's post hoc test.C) Confocal images of synapses, co-localised Bassoon (green) and Homer1 (red) puncta, at increasing distances from the centre of an Aβ plaque (blue) in NL-G-F and NL-G-F;Lrp6-val mice at 10 months.D) Quantification revealed no differences in synapse number.Scale bar = 5 μm.NL-G-F N = 15 slices from 5 brains, NL-G-F; Lrp6-val N = 11 slices from 4 brains.Repeated measure Two-way-ANOVA with Bonferroni's post hoc test.Data are represented as mean SEM. | 1,486.8 | 2023-01-01T00:00:00.000 | [
"Biology"
] |