Multi Response Optimization of Simultaneous Fixation of Reactive Printing and Crease Resistant Finishing Using Desirability Function The single-step fixation process for reactive printing and crease resistant finishing of cotton fabric, with four potential factors, namely concentration of dye, concentration of crease resistant agent, fixation method and fixation temperature, has been statistically analysed in this work. The optimum factor levels for simultaneous fixation have been explored. A single-step process for reactive printing and crease resistant finishing of cotton fabric is described. The process was evaluated with respect to K/S, dry and wet crease recovery, tensile and tear strength, fastness to washing, light and rubbing, and resistance to abrasion and pilling. Econtrol fixation at a temperature of 145°C proved efficient for imparting single-step reactive print fixation and crease resistant finishing to cotton fabric.

Introduction Modern textile processing places high demands on the combined application of crease resistance finishing and reactive printing. Various attempts have been made at simultaneous fixation of reactive dyeing and crease resistance finishing [1][2][3][4], but very few studies have been reported on the combined application of reactive printing and crease resistance finishing [5][6]. Reports in the literature reveal that a number of attempts were made at combined pigment printing and crease resistance finishing [7], owing to the similar chemistry of cellulose cross-linking agents and binders and their similar application conditions. However, the development and optimization of a process for reactive printing and crease resistance finishing is a novel approach. The concept of wet-on-wet fixation using an Econtrol process for the combined fixation of reactive printing and crease resistance finishing was investigated in this work. Conventional optimization of the process parameters is costly in terms of time and material, since each process parameter has to be optimized one by one with several repetitions. The idea of this work is to use a model-based approach, given the complexity of the chemical and physical operational sequence of the combined fixation of reactive printing and crease resistance finishing. This paper uses the desirability function to determine the optimum parameters of simultaneous fixation of reactive printing and crease resistant finishing, optimizing K/S, dry and wet crease recovery, tensile and tear strength, fastness to washing, light and rubbing, and resistance to abrasion and pilling. A $2^1 \cdot 3^3$ mixed factorial design for the four controllable factors, viz. chroma, concentration of crease resistant agent, fixation method and fixation temperature, was used to find the optimum factor levels for simultaneous fixation. The multi-response optimization was attempted through the desirability function. The responses can be given equal weightage, or the weightage of the responses can be varied according to industrial requirements. Other optimization techniques, such as the utility concept and principal component analysis, exist as alternatives.

Desirability function A useful approach to the optimization of multiple responses is the simultaneous optimization technique popularized by Derringer and Suich [8,9]. Their procedure makes use of desirability functions. The general approach is to first convert each response $y_i$ into an individual desirability function $d_i$ that varies over the range

$$0 \le d_i \le 1, \tag{1}$$

where $d_i = 1$ if the response $y_i$ is at its goal or target, and $d_i = 0$ if the response is outside an acceptable region. The design variables are then chosen to maximize the overall desirability

$$D = \left(d_1 \, d_2 \cdots d_m\right)^{1/m}, \tag{2}$$

where there are m responses.
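As an illustration of Equations (1) and (2), the sketch below computes individual and overall desirabilities for a "larger-is-better" goal. It is a minimal example, not the authors' code; the function names, the power-law ramp for $d_i$, and the example numbers are assumptions made for illustration.

```python
import numpy as np

def desirability_max(y, low, target, weight=1.0):
    """Individual desirability d_i for a 'maximize' goal (Derringer-Suich).

    Returns 0 below the acceptable lower bound, 1 at or above the target,
    and a power-law ramp in between.
    """
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** weight

def overall_desirability(d):
    """Overall desirability D = (d_1 d_2 ... d_m)^(1/m), Equation (2)."""
    d = np.asarray(d, dtype=float)
    return float(d.prod() ** (1.0 / d.size))

# Example with three hypothetical responses and illustrative bounds/targets:
d = [desirability_max(18.5, low=15, target=20),    # e.g. K/S
     desirability_max(250.0, low=200, target=280), # e.g. DCRA (degrees)
     desirability_max(30.0, low=25, target=40)]    # e.g. tear strength (N)
print(overall_desirability(d))
```

Because D is a geometric mean, a single unacceptable response (d_i = 0) drives the overall desirability to zero, which is what makes the function suitable for reconciling conflicting responses.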
Materials Commercially singed, desized, scoured, bleached and mercerized cotton fabric with a satin weave structure, 40s × 40s, 130 ends/inch × 73 picks/inch, and an area density of approximately 136 g/m², was used in this research work. The reactive dye used was Drimarine Red P2B (Clariant), based on the MCT (monochlorotriazine) reactive group. Other chemicals used were the commercially available thickener Lamitex HP (sodium alginate), sodium bicarbonate, urea, a reduction inhibitor (Revatol S) and sodium hexametaphosphate as a sequestrant.

Print-finish Paste Manufacture A concentration of 2.50-3.00% w/w (30 g/kg) of the thickener Lamitex HP (to maintain the 60-65 dPa·s viscosity range recommended by the supplier) was added to the required volume of water, with continuous high-speed stirring, to produce the stock paste. This was followed by the gradual addition of urea 200 g/kg, sodium bicarbonate 30 g/kg, Revatol S 10 g/kg and sodium hexametaphosphate 5 g/kg with continuous stirring, giving a final stock paste viscosity of 60-65 dPa·s. However, urea was not added to the stock paste prepared for the experiments using the Econtrol fixation method. Printing pastes of different concentrations were prepared with Drimarine Red P2B as outlined in Table 1. During stock and print paste preparation, vigorous high-speed stirring for 10 min was required to obtain a homogeneous paste after adding all reagents. The viscosities of all pastes were measured using a Brookfield viscometer, type LV. The CR finishing liquor was prepared using magnesium chloride at 25% of the CR concentration but not greater than 30 g/l, Solusoft MW 20 g/l, Ceranine-L 20 g/l and Imercol PCLF 1 g/l. The final finish bath was prepared with Arkofix NEC as outlined in Table 1.

Print-finish Procedure The combined process of reactive printing and CR finishing was carried out as follows. In the first stage the fabric was immersed in an aqueous solution of the CR finish liquor and then squeezed to obtain a 70% wet pickup. The wet fabric was then dried at 60°C for 7 min. In the second stage the treated fabric was printed on a lab-scale rotary printing machine (Zimmer). Printed fabric to be fixed by the curing process was again dried at 60°C for 7 min, whereas printed fabric to be fixed by the Econtrol process was not dried. In the third stage the print-finish fabric was fixed. The finish bath preparation, printing recipe, fixation method and fixation temperature were employed in accordance with the experimental design arrangement stated in Table 1. The fixed samples were finally washed in 1 g/l non-ionic detergent until all unreacted dyes and chemicals were removed from the fabric surface.

Evaluation of Fabric Properties The easy-care properties imparted by the CR finish were evaluated by measuring the dry crease recovery angle (DCRA) using AATCC-66. The fabric strength properties were assessed by measuring the breaking load of the fabric using the standard test procedure ASTM D 5035. The tear strength of the fabric was evaluated using ASTM D1424. The standard test procedures adopted for color fastness included: (a) color staining to rubbing, AATCC-8; (b) loss of color and staining in washing, ISO C2S; and (c) color fastness to light, ISO 105-B02. Each value reported for DCRA and breaking load is the mean of two samples tested, each having a coefficient of variation of not more than ±5%. The fabric pilling and abrasion resistance were examined using ISO 12945-2 (100 cycles) and ISO 12947 (2500 rubs) respectively. Shade depth values were assessed spectrophotometrically and expressed in terms of the Kubelka-Munk (K/S) relationship.
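For reference, the standard Kubelka-Munk function relating shade depth to the measured reflectance R of the dyed fabric is recalled below; the paper invokes K/S without spelling it out, so this is the textbook form rather than a formula quoted from the original:

$$\frac{K}{S} = \frac{(1-R)^2}{2R},$$

where K is the absorption coefficient and S the scattering coefficient; a higher K/S value indicates a deeper shade.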
The samples processed with simultaneous fixation were compared with those produced by a standard two-step process of printing and finishing.

A $2^1 \cdot 3^3$ mixed full factorial design was used to explore the effect of four factors, namely (i) chroma, (ii) concentration of crease resistant agent, (iii) fixation method and (iv) fixation temperature, on combined reactive printing and crease resistance finishing. The design was run with two replicates according to the design matrix shown in Table 1; the $2^1 \cdot 3^3$ structure gives 54 factor-level combinations, hence 108 runs in total. The experiments were performed in random order. The results were analyzed and optimized using the software Design Expert 8.0. The responses investigated were K/S, dry and wet crease recovery, tensile and tear strength, fastness to washing, light and rubbing, and resistance to abrasion and pilling.

Multi-response Optimization Using Desirability Function To overcome the problem of conflicting responses in single-response optimization, multi-response optimization was used. In multi-response optimization, a desired weightage is given to each response (equal weightage in the present study), and the desirability of the combined influence of all responses is determined for varying values of the input parameters. Table 2 shows the ranges of the input parameters and responses and the goal and weight assigned to each parameter.
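Where the responses are not weighted equally, the geometric mean of Equation (2) generalizes in the standard way; this form is given here for completeness and is not spelled out in the original text:

$$D = \left(\prod_{i=1}^{m} d_i^{\,w_i}\right)^{1/\sum_i w_i},$$

which reduces to Equation (2) when all weights $w_i$ are equal.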
The target values were obtained from experiments performed with the standard two-step process of reactive printing and CR finishing (shade depth 2% and CR concentration 100 g/l), as shown in Table 3. Figure 1 shows the ramps of the various factors, with all four responses given equal weightage. Table 4 reveals that the overall desirability values are high in the region of the Econtrol mode of fixation. This is because the wet-on-wet fixation of the Econtrol process is governed by the temperature the fabric reaches during fixation, which depends on the relative humidity inside the hot air/controlled moisture fixation chamber. The parameters that maximize the overall desirability are:
• Chroma = 3%
• Concentration of CR = 131 g/l
• Fixation mode = Econtrol
• Fixation temperature = 145°C
The following step consists of setting the optimal input variable levels that maximize the overall desirability; that is, the optimum parameters would be set as given in Table 2. In this case the single-step process of fixation of reactive dye and CR finishing showed results similar to the two-step process. Moreover, the wet-on-wet fixation of the Econtrol process gives notably high values of tensile and tear strength in both warp and weft directions, as shown in Table 4. In contrast, the specifications for K/S, DCRA in warp and weft, pilling and abrasion resistance, and wash, rub and light fastness are only barely satisfied. The amounts of dye and CR used in simultaneous fixation must therefore be slightly higher than in the conventional fixation process to achieve the same depth of shade and DCRA. Furthermore, it is unnecessary to use urea when printing cotton by the combined print-finish process with wet-on-wet Econtrol fixation.

5. Conclusions This paper optimizes the process parameters for simultaneous fixation of reactive printing and crease resistant finishing of fabric using multi-response optimization. To overcome the problem of conflicting responses in single-response optimization, multi-response optimization was used, with equal weightage given to all responses and the desirability of their combined influence determined for varying values of the input parameters. The target values were obtained from experiments performed with the conventional two-step process of reactive printing and CR finishing. The optimal values with the specified desirability were calculated using the Design Expert software. Table 4 lists seven different desirable ranges of input parameters and responses which give a high overall value of desirability. As is clear from Table 4, fixation by the Econtrol process at 145°C for 5 minutes using the pad-dry-print-fix-wash method yields high desirability values and is consequently a good substitute for the two-step process. Furthermore, in this model the objectives, the tolerance intervals and the corresponding weights of the responses can be varied as required by the customer.
Control of the metal-to-insulator transition by substrate orientation in nickelates We show that the critical thickness for the metal-to-insulator transition (MIT) of LaNiO3 can be controlled by substrate orientation. By means of density functional theory calculations, films grown on SrTiO3 substrates with (001), (110) and (111) orientations are found to have different amounts of charge transfer across the interface. The different charge transfer induces different interfacial conductivity and at the same time modifies the carrier density of the adjacent LaNiO3 film. Manipulation of the MIT by substrate orientation is thus achieved through a charge-transfer-induced interfacial conductive layer together with the modified conductivity of the LNO layer. © 2019 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). https://doi.org/10.1063/1.5125942
The charge-transfer nickelate family ReNiO3 (Re = rare earth), with its dramatic metal-to-insulator transition (MIT), has been one of the most fascinating systems for designing innovative oxide interfaces and heterostructures in the past decades. 1,2 Understanding and controlling the MIT in nickelates is interesting from the fundamental physics perspective and offers great opportunities for future electronic devices. LaNiO3 (LNO), a paramagnetic metallic conductor lacking any ordering phenomena in bulk, has recently become the subject of intense research, as theoretical work predicted orbital ordering and high-Tc superconductivity. [3][4][5] LNO is the most conductive member of the ReNiO3 family and exhibits only a film-thickness-dependent MIT: films above a critical thickness of 4 unit cells (u.c.) display metallic behavior, while films below the critical thickness are usually insulating. 6 However, it remains difficult to modify the MIT of LaNiO3 once the material has been prepared. In our previous work, 4 we tried to utilize the ferroelectric polarization of BaTiO3 films to modify the orbital polarization on a large scale; however, the resistivity of LNO could only be slightly modified. Turning to other oxide systems for a solution, it was reported that the two-dimensional electron gas (2DEG) at the (001) LaAlO3/SrTiO3 interface can be easily tuned by substrate orientation. 7 In a similar case, LaMnO3/LaNiO3 superlattices grown along different substrate orientations show unconventional ferromagnetic behavior. 8 Meanwhile, oxide thin films grown along the unconventional (111) direction are theoretically proposed to host a variety of exotic states, including for LNO films. [9][10][11] All this hints that substrate orientation can be used as a tuning knob to tailor the MIT of LNO. As frequently reported, the competition between electronic reconstruction and polar distortions is regarded as the main factor responsible for the substrate-orientation-induced difference in magnetic properties. The effect of electronic reconstruction is especially dominant for conducting films, as the carrier density directly determines their conductive properties. Thus, it is of great significance to evaluate the effect of electronic reconstruction, namely interfacial charge transfer, on the MIT, especially for nickelates. In this work, we experimentally demonstrate that the critical thickness for the metal-to-insulator transition can be controlled by the SrTiO3 substrate orientation. With the aid of density functional theory (DFT) calculations, we further show that the modification of the MIT is due to charge transfer at the interface, which occurs for all three orientations but varies in magnitude. Our results shed new light on the interpretation of charge transfer at oxide interfaces and the related transport properties. Prior to deposition, the substrate was heated to 700 °C and held there for half an hour under the deposition pressure in order to release unexpected contamination and reduce additional oxygen vacancies on the substrate. LaNiO3 films were then grown from a stoichiometric target on SrTiO3 (001), (110) and (111) substrates by pulsed laser deposition (PLD), in an oxygen atmosphere of 20 mTorr at 700 °C.
The layer-by-layer growth mode was identified by reflection high-energy electron diffraction (RHEED), one oscillation of which corresponds to one monolayer (ML). The samples were cooled to room temperature under 500 Torr oxygen pressure to avoid oxygen vacancies. Spin-polarized DFT calculations were performed using the Vienna ab initio simulation package (VASP) 12 with projector augmented-wave (PAW) 13-17 pseudopotentials. As shown in Fig. 1(a), LNO films grown on STO (001) substrates display good RHEED oscillations, indicating a layer-by-layer growth mode and good film quality. The RHEED patterns before and after growth show that the films retain atomically smooth surfaces like the substrate. The surface topography of the films was probed by atomic force microscopy (AFM), as shown in Fig. 1. Figure 2 shows the temperature-dependent resistance curves (R-T) for LNO films grown on (001), (110) and (111) STO substrates, where ML stands for monolayer. For films grown on (001) STO (Fig. 2(a)), all films, even the thinnest 5 ML film with a thickness of 1.935 nm, exhibit decreasing resistivity with decreasing temperature, characteristic of metallic behavior. That is to say, the critical thickness for the insulator-to-metal transition (tc) of LNO films grown on (001) substrates is no more than 1.9 nm. Films grown on (110) STO substrates are much more insulating than those grown on (001) STO. As can be seen in Fig. 2(b), both the 7 ML and 11 ML films display insulating behavior, i.e., increasing resistivity with decreasing temperature; only when the film thickness reaches 15 ML is metallic behavior observed. This trend is even more pronounced for films grown on (111) STO, as seen in Fig. 2(c): a film as thick as 14 ML remains insulating, and metallic behavior is observed only for films of 19 ML. An abrupt jump from an insulating to a metallic state is thus observed above critical thicknesses tc of 14 ML and 19 ML for the (110) and (111) interfaces. Given that 1 ML ≈ 2.74 Å for (110) and 1 ML ≈ 2.23 Å for (111), tc for (110)- and (111)-oriented films is 3.836 nm and 4.25 nm respectively, in stark contrast to tc below 2 nm for (001)-oriented films.
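The monolayer heights quoted above follow from pseudocubic geometry. Taking the (001) spacing implied by the paper's 5 ML = 1.935 nm figure, a ≈ 3.87 Å, the (110) and (111) interplanar spacings are obtained by dividing by √2 and √3; this is a consistency check worked out here, not a calculation reported by the authors:

$$d_{(001)} = a \approx 3.87\ \text{Å}, \qquad d_{(110)} = \frac{a}{\sqrt{2}} \approx 2.74\ \text{Å}, \qquad d_{(111)} = \frac{a}{\sqrt{3}} \approx 2.23\ \text{Å},$$

in agreement with the 1 ML values used to convert tc from monolayers to nanometers.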
The orientation-dependent transition is summarized in Fig. 2(d). The most prominent feature is that the resistance varies with the orientation of the substrate on which the films are grown. The (001) films are the most metallic, displaying metallic behavior even for the thinnest 5 ML film. In stark contrast, the (110) films are less conductive, with a critical thickness of 15 ML for the insulator-to-metal transition, and the (111) films are the most insulating, with a critical thickness of 19 ML. This strong contrast indicates that the substrate orientation has a fundamental effect on the conductivity and on the critical thickness of the MIT. Since LNO has a polar discontinuity, as LaAlO3 does, differently oriented films may exhibit different charge transfer with the substrate, i.e., different electronic reconstruction configurations, which gives rise to the orientation-dependent conductive behavior. To elucidate this phenomenon, we employed DFT calculations to examine the electronic structure at the LNO/STO interface. For LNO grown on (001)-, (110)- and (111)-oriented STO, slab models were constructed to study the substrate orientation effects. For each direction, STO slabs thicker than 2 nm were constructed in order to simulate the substrate as in the experiments. The LNO thicknesses were chosen closest to the 1.935 nm studied experimentally: 1.973 nm for the (001) direction, 1.953 nm for the (110) direction and 1.936 nm for the (111) direction. A vacuum space of more than 2 nm was created to guarantee that there is no interaction between the simulation cells and their periodic images. In-plane supercells of 2 × 2 for the (001) direction, 1 × 2 for the (110) direction and 1 × 1 for the (111) direction were selected so that the three orientations end up with similar cross-sectional sizes. Spin-polarized DFT calculations were performed using VASP with PAW pseudopotentials. A plane-wave cutoff energy of 500 eV was used in all calculations; increasing the cutoff up to 800 eV introduces only a tiny energy difference of < 0.02 eV. A 3 × 3 Gamma-centered k-point grid was used in all calculations; a further increase in the number of k-points does not affect the calculated energies. For the electronic structure, electron localization is treated within the GGA+U scheme, using the rotationally invariant approach introduced by Dudarev et al., 18 which takes the form

$$E_{\mathrm{GGA}+U} = E_{\mathrm{GGA}} + \frac{U_{\mathrm{eff}}}{2}\sum_{\sigma}\left[\mathrm{Tr}\,\rho^{\sigma} - \mathrm{Tr}\left(\rho^{\sigma}\rho^{\sigma}\right)\right], \qquad U_{\mathrm{eff}} = U - J,$$

where $\rho^{\sigma}$ is the on-site occupation matrix for spin $\sigma$. The on-site Coulomb interaction parameters U and J were set to 8 eV and 1 eV respectively for Ni, consistent with its 3d8 4s2 atomic configuration, since similar parameters have provided reasonable predictions in previous studies. 19 Meanwhile, non-collinear magnetic calculations were performed to capture the spin-orbit coupling (SOC) induced splitting.
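The settings above can be collected in a single calculator configuration. The sketch below expresses them through ASE's VASP interface; it is a minimal illustration assuming ASE is available, the `slab` object (one of the paper's LNO/STO supercells) is built separately, and it is not the authors' actual input.

```python
from ase.calculators.vasp import Vasp

# Spin-polarized GGA+U setup mirroring the parameters quoted in the text:
# 500 eV cutoff, 3x3 Gamma-centered in-plane k-grid, Dudarev scheme with
# U = 8 eV and J = 1 eV on the Ni 3d shell (U_eff = 7 eV).
calc = Vasp(
    xc='PBE',          # GGA functional (assumed; the text says only "GGA")
    encut=500,         # plane-wave cutoff (eV)
    kpts=(3, 3, 1),    # 3x3 in plane, 1 along the slab normal
    gamma=True,        # Gamma-centered mesh
    ispin=2,           # spin polarization
    ldau=True,
    ldautype=2,        # Dudarev (rotationally invariant) GGA+U
    ldau_luj={'Ni': {'L': 2, 'U': 8.0, 'J': 1.0},
              'La': {'L': -1, 'U': 0.0, 'J': 0.0},
              'Sr': {'L': -1, 'U': 0.0, 'J': 0.0},
              'Ti': {'L': -1, 'U': 0.0, 'J': 0.0},
              'O':  {'L': -1, 'U': 0.0, 'J': 0.0}},
)

# slab = ...             # LNO/STO slab (>2 nm STO, >2 nm vacuum), built elsewhere
# slab.calc = calc
# e = slab.get_potential_energy()  # runs VASP if it is installed and licensed
```

Note that in the Dudarev scheme only U_eff = U - J enters the functional, so the quoted U = 8 eV, J = 1 eV choice is equivalent to U_eff = 7 eV on the Ni d states.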
First, our calculations show that there is strong charge transfer between the LNO film and the STO substrate at the interface; the charge transfer configuration across the interface is depicted in Fig. 3. To further clarify the charge transfer along the different growth directions, the calculated density of states (DOS) and corresponding ground states of LNO grown on the different STO orientations are shown in Fig. 4. As can be inferred from the DOS calculation, the DOS of LNO near the Fermi level varies considerably with growth orientation. For the (001) direction it is much larger than for the (110) and (111) directions: about 79.2 a.u. for (001), versus 34.5 a.u. and 22.2 a.u. for (110) and (111) respectively. This provides theoretical evidence for the more conductive state of the (001)-oriented films, and according to the experimental results, (001)-oriented films indeed have a much lower tc than (110)- and (111)-oriented films. Since the DOS of (110)-oriented films is larger than that of (111)-oriented films, the former are more conductive than the latter. These calculations are in good agreement with the experimental results and indicate that the DOS near the Fermi level is at the root of the different conductive behavior of differently oriented films. It is also worth noting that the states near the Fermi level are mainly contributed by p orbitals in all cases, indicating that p orbitals play a significant role in determining the interface conducting behavior. Recalling the charge transfer configuration of Fig. 3, charge transfer between STO and LNO happens mainly through O atoms; it is thus natural to conclude that O 2p orbitals play an important role in determining the interface DOS near the Fermi level, and hence the conductivity. As frequently reported, O 2p orbitals are often the bridge for charge transfer in oxide heterostructures.

On the basis of the above experimental observations and calculations, a possible scenario for the interfacial charge transfer can be established. The requirement of Fermi energy continuity at the interface suggests the energy alignment shown in Fig. 5(a). Ti4+ has a t2g0 electron configuration, while Ni3+ ions have a t2g6 eg1 configuration with a nearly degenerate eg band. At the interface, strong hybridization between the Ni 3z2-r2 band and the Ti dxz or dyz orbitals forms lower-energy bonding (B) and antibonding (AB) orbitals; that is, extended molecular orbitals consisting of atomic Ni 3z2-r2 orbitals and Ti dxz or dyz orbitals with an admixture of the pz orbitals on the apical oxygen. In this scenario, electrons are transferred from the degenerate band to the molecular 3z2-r2 orbitals at the interface, as sketched in Fig. 5(a). In other words, the electrons in the molecular orbitals are shared by STO and LNO, which naturally lowers the valence state of Ti4+ towards Ti3+, i.e., electron-doped SrTiO3 at the interface. As indicated by the 2DEGs at the LAO/STO interface, electron-doped STO shows metallic conductive behavior, which contributes to the overall conductivity of our heterostructure. As for LNO, a hole-conducting film, 6 fewer transferred electrons mean more holes for conduction. It is thus easy to explain the different conductive behavior of films grown along different orientations. For (001)-oriented films, more electrons are transferred from LNO to STO, so the electron-doped STO is conductive and the hole-doped LNO is more conductive than in the other orientations. (110)-oriented films have fewer electrons transferred from LNO to STO and are thus less conductive, and (111)-oriented films have the least electron transfer and are the most insulating. Our results are similar to the reported orientation-dependent conductive behavior of LAO on STO substrates. 7 However, our case is somewhat different: LNO is itself a conductor, whose hole doping level influences the overall conductivity in addition to the conductive interface, and thus displays a more pronounced difference among orientations. In the LAO/STO system, LAO is an insulator and only the interface is conductive. In our heterostructures a parallel conduction model applies, since both the interfacial electron-doped STO and the LNO overlayer contribute to the overall conductivity. (001)-oriented films, with more electrons transferred, possess both a more conductive hole-doped LNO film and an interfacial electron-doped STO layer that adds to the overall conductivity; (110)-oriented films, with less electron transfer, have a less conductive LNO film and interface; and (111)-oriented films, with the least electron transfer, are the most insulating of the three.
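The parallel conduction model invoked above amounts to sheet conductances adding; this compact form is our paraphrase of the model, not an equation printed in the paper:

$$G_{\square}^{\mathrm{tot}} = G_{\square}^{\mathrm{LNO}} + G_{\square}^{\mathrm{interface}}, \qquad \text{i.e.} \qquad \frac{1}{R_s^{\mathrm{tot}}} = \frac{1}{R_s^{\mathrm{LNO}}} + \frac{1}{R_s^{\mathrm{interface}}},$$

so that enhancing either channel (hole doping of the LNO overlayer or electron doping of the interfacial STO) lowers the measured sheet resistance.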
One question worth noting is why differently oriented films have different charge transfer. Rajesh V. Chopdekar observed orientation-dependent magnetization at La0.7Sr0.3MnO3/STO interfaces and regarded crystal surface orientation as well as epitaxial strain as playing an important role in tailoring the properties. 20 Theoretical calculations by N. Sivadas indicate that both electronic reconstruction and polar distortions must be taken into account when analyzing the 2DEG behavior of (111) and (110) interfaces. 21 In the polar catastrophe scheme, the (110) case has no charge transfer at the interface; since our calculations suggest that charge transfer takes place regardless of substrate orientation, the polar catastrophe can be excluded. The tailoring of the transport behavior of differently oriented films is therefore a matter of crystal surface orientation, electronic reconstruction, strain state and polar distortion, each of which may be independently responsible for charge transfer at the interface, or rather, their combined effect causes the charge transfer; the exact determination requires further systematic characterization. To summarize, different charge transfer on differently oriented substrates affects both the LNO film and the interfacial layer. For (001)-oriented films, strong charge transfer makes LNO more conductive and gives the largest DOS in the interfacial layer. For (110)-oriented films, the charge transfer is not as strong as on (001), yielding a less conductive LNO film, a lower interfacial DOS and hence lower overall conductivity. For (111)-oriented films, the charge transfer effect is the weakest, with the smallest DOS at the interface. In other words, the substrate orientation tailors the MIT of the LNO film. Our work gives both experimental and theoretical proof of the substrate-orientation-dependent conductive behavior of LNO films.
Previously Unknown Physical Formulas which Hold in a Hydrogen Atom and are Derived without Using Quantum Mechanics It is thought that quantum mechanics is the physical science describing the behavior of the electron in the micro world, e.g., inside a hydrogen atom. However, the author has previously derived an energy-momentum relationship which holds inside a hydrogen atom. This paper uses that relationship to investigate the relationships between physical quantities which hold in a hydrogen atom. Formulas are derived which hold in the micro world and make more accurate predictions than the classical quantum theory. This paper concludes that quantum mechanics is not the only theory enabling investigation of the micro world.

Introduction It is thought that quantum mechanics is the physical science describing the behavior of the electron in the micro world, e.g., inside a hydrogen atom. There is no difference of opinion on that point. However, is it impossible to investigate the micro world without quantum mechanics? The author has previously derived an energy-momentum relationship which holds inside a hydrogen atom. This paper uses that relationship to investigate the relationships between physical quantities which hold in a hydrogen atom. A comparison is also made between the values of physical quantities predicted by this paper and the values predicted by the classical quantum theory developed by Bohr. Formulas are derived which hold in the micro world and make more accurate predictions than the classical quantum theory.

Results Obtained Prior to this Paper Letting $m_0c^2$ be the rest mass energy and p the momentum of an object or a particle existing in free space, Einstein's energy-momentum relationship is given by the following equation:

$$\left(mc^2\right)^2 = \left(m_0c^2\right)^2 + p^2c^2. \tag{1}$$

Here, $mc^2$ is the relativistic energy. In contrast, the author has derived the following relationship for a bound electron in a hydrogen atom, which must take into account the Coulomb potential (Suto, 2011):

$$E_{\mathrm{re},n}^2 + p_n^2c^2 = \left(m_ec^2\right)^2. \tag{2}$$

Here, $E_{\mathrm{re},n}$ is the following relativistic energy of the electron, described on an absolute scale:

$$E_{\mathrm{re},n} = m_{\mathrm{re},n}c^2 = m_ec^2 + E_n, \tag{3}$$

where $m_{\mathrm{re},n}$ is the relativistic mass of the electron and $E_n$ is the total mechanical energy of the hydrogen atom. The equation derived from classical quantum theory is the following:

$$E_n = -\frac{1}{2}\left(\frac{\alpha}{n}\right)^2 m_ec^2, \tag{4}$$

where α is the fine structure constant,

$$\alpha = \frac{e^2}{4\pi\varepsilon_0\hbar c}. \tag{5}$$

The author has presented the following equation indicating the relationship between the rest mass energy and the potential energy of the electron (Suto, 2009):

$$V(r_n) = -\left(m_ec^2 - m_{\mathrm{re},n}c^2\right). \tag{6}$$

According to this equation, the potential energy of a bound electron in a hydrogen atom is equal to the reduction in the rest mass energy of that electron. There is a lower limit to the potential energy, and the range which the energy can assume is as follows:

$$-2m_ec^2 \le V(r) \le 0. \tag{7}$$

Also, the following constraint holds for the relativistic energy $E_{\mathrm{re},n}$ of the electron, due to Equations (3) and (4) (here, the discussion is limited to the ordinary energy levels of the atom):

$$\frac{1}{2^{1/2}}\,m_ec^2 < E_{\mathrm{re},n} < m_ec^2. \tag{8}$$

The logic used when deriving Equation (2) can also be applied in the region of Equation (8). On that basis the author previously pointed out that there is an n = 0 energy level in a hydrogen atom, but that is a mistake (Suto, 2014a); it is corrected here. Incidentally, it is known that the following formula can be derived from Equation (1):

$$m = m_0\left(1 - \frac{v^2}{c^2}\right)^{-1/2}. \tag{10}$$
If the same logic is applied to Equation (2), then the following formula can be derived (see Appendix A):

$$m_{\mathrm{re},n} = m_e\left(1 + \frac{v_n^2}{c^2}\right)^{-1/2}. \tag{11}$$

Indeed, writing $E_{\mathrm{re},n} = m_{\mathrm{re},n}c^2$ and $p_n = m_{\mathrm{re},n}v_n$ in Equation (2) gives $m_{\mathrm{re},n}^2c^4\left(1 + v_n^2/c^2\right) = m_e^2c^4$, from which Equation (11) follows. Within a hydrogen atom, the mass of the electron decreases when the kinetic energy of the electron increases. In the theory of Dirac, the energy levels of the hydrogen atom can be expressed with the following equation (Schiff, 1968):

$$E = m_ec^2\left[1 + \left(\frac{\alpha}{n}\right)^2\right]^{-1/2}. \tag{12}$$

Thus this paper makes the following assumption, based on a comparison of Equations (11) and (12):

$$\frac{v_n}{c} = \frac{\alpha}{n}. \tag{13}$$

Here, the velocity is taken to be the average velocity of the electron, in accordance with quantum mechanics, and v on the left side has been set to $v_n$. There are also other reasons for assuming Equation (13) (see Appendix B). In the classical quantum theory, the following quantum condition of Bohr plays an important role:

$$m_ev_n \cdot 2\pi r_n = 2\pi n\hbar. \tag{14}$$

Combining Equations (11) and (13), the relativistic energy levels become

$$E_{\mathrm{re},n} = m_ec^2\left(1 + \frac{\alpha^2}{n^2}\right)^{-1/2}. \tag{15}$$

$E_{\mathrm{re},n}$ in Equations (12) and (15) defines an absolute quantity which includes the electron's rest mass energy, whereas $E_n$ in Equations (4) and (16) expresses the reduction in the rest mass energy of the electron. Also, if Equation (15) is substituted into Equation (2), it is evident that Equation (2) has a structure like the following:

$$\left(m_ec^2\right)^2 = E_{\mathrm{re},n}^2 + \left(\frac{\alpha}{n}E_{\mathrm{re},n}\right)^2, \tag{18}$$

which can also be written as

$$E_{\mathrm{re},n} = \frac{n}{\left(n^2 + \alpha^2\right)^{1/2}}\,m_ec^2. \tag{19}$$

Incidentally, the energy of the electron in a hydrogen atom can be given not only by Equation (4) but also by the following formula:

$$E_n = m_ec^2\left[\left(1 + \frac{\alpha^2}{n^2}\right)^{-1/2} - 1\right]. \tag{20}$$

Formulas containing the Fine Structure Constant A chain of further formulas containing the fine structure constant (Equations (21) through (35) in the original numbering) can be obtained from Equations (2), (15) and (20); they relate the energy, the momentum and the orbital radius $r_n$, where $r_e/2$ plays the role of the distance marking the $E_{\mathrm{re}} \le 0$ region. In quantum mechanics, r is an average value, not a definitive value, and this paper follows that principle. Now, if a Taylor expansion is performed on the right side of Equation (15),

$$E_{\mathrm{re},n} = m_ec^2\left(1 + \frac{\alpha^2}{n^2}\right)^{-1/2} = m_ec^2\left(1 - \frac{1}{2}\frac{\alpha^2}{n^2} + \frac{3}{8}\frac{\alpha^4}{n^4} - \cdots\right),$$

this yields

$$E_n = E_{\mathrm{re},n} - m_ec^2 \approx -\frac{\alpha^2}{2n^2}m_ec^2 + \frac{3\alpha^4}{8n^4}m_ec^2.$$

In the end, it is evident that Equation (4) is an approximation; a correspondingly more precise expression for $r_n$ follows from Equation (33).

Formulas determined to be Approximations This section discusses the orbital radii and energy levels of a hydrogen atom derived by Bohr. 1) Orbital radii of Bohr. Dirac pointed out that there is a negative solution to Equation (1); adopting the same viewpoint, there is a negative solution to Equation (2). To find the negative solution, it is necessary to create a quadratic equation for r; this is obtained from Equation (28) and rearranged (Equations (36) to (39b)). To begin, the positive solution is found first (the positive solution is the solution found by Bohr); letting $r_n^+$ be this solution and performing a Taylor expansion of Equation (39b) gives Equation (40). In contrast, the radii found by Bohr are given by

$$r_n = a_Bn^2, \qquad a_B = \frac{4\pi\varepsilon_0\hbar^2}{m_ee^2}, \tag{41}$$

where $a_B$ is the Bohr radius. If Equations (40) and (41) are compared, it is evident that Equation (41) is an approximation.
Next, consider the negative solution $r_n^-$ of Equation (39b); this r can be regarded as being of the order of the radius of the atomic nucleus of a hydrogen atom (i.e., the proton). From this it is evident that the negative orbital is located near the atomic nucleus. The author has pointed out that, if electrons exist in this orbital, they are a candidate for the dark matter whose real nature is currently unknown (Suto, 2017). 2) Energy levels. There are also positive and negative solutions for $E_{\mathrm{re},n}$ in Equation (2). Here, the ordinary, known energies of a hydrogen atom are written $E_{\mathrm{re},n}^+$, $E_n^+$, and the negative energies are written $E_{\mathrm{re},n}^-$, $E_n^-$. The formulas for the positive energies are those given above; the formulas for the negative solutions carry the opposite overall sign,

$$E_{\mathrm{re},n}^{\pm} = \pm\, m_ec^2\left(1 + \frac{\alpha^2}{n^2}\right)^{-1/2}.$$

The following compares the energies when n = 1: the value predicted by Bohr's Equation (4) and the value predicted by Equation (20) of this paper.
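As an illustration, the two n = 1 values can be evaluated with standard constants ($m_ec^2 = 510998.95$ eV, $\alpha = 7.29735 \times 10^{-3}$); these are our evaluations of the reconstructed formulas, not digits quoted from the paper:

$$E_1^{\mathrm{Bohr}} = -\frac{\alpha^2}{2}\,m_ec^2 \approx -13.6057\ \text{eV}, \qquad E_1 = m_ec^2\left[\left(1+\alpha^2\right)^{-1/2} - 1\right] \approx -13.6052\ \text{eV},$$

a difference of about $5 \times 10^{-4}$ eV, i.e. precisely the $3\alpha^4/8$ correction term of the Taylor expansion given above.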
Dirac's relativistic wave equation can be written as a matrix equation with four rows and four columns, using the Pauli spin matrices

$$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \tag{51}$$

and the unit matrix. In contrast, the author has derived a corresponding equation, Equation (52), taking Equation (2) rather than Equation (1) as the departure point (Suto, 2014b); Equation (52) must be solved in order to elucidate electron spin, or more complex phenomena.

Discussion 1) The radius r where $E_{\mathrm{re}} = 0$ is $r_e/2$, due to Equation (21). Dirac predicted that the vacuum energy $E_{\mathrm{re}}$ satisfies the relation $E_{\mathrm{re}} \le -m_ec^2$, but actually $E_{\mathrm{re}} = 0$ is the energy of the virtual electron-positron pairs which make up the vacuum (see Fig. 1). Also, in quantum mechanics it is thought that atoms are more stable when the electron energy is lower, but actually that is not the case: they are more stable as the energy approaches $E_{\mathrm{re}} = 0$. The relationship between r and $E_{\mathrm{re}}$ is summarized as follows:

$$\frac{r_e}{2} < r, \qquad 0 < E_{\mathrm{re}}; \tag{54a}$$

$$\frac{r_e}{4} \le r \le \frac{r_e}{2}, \qquad -m_ec^2 \le E_{\mathrm{re}} \le 0; \tag{54b}$$

$$r < \frac{r_e}{4}, \qquad E_{\mathrm{re}} < -m_ec^2. \tag{54c}$$

Equation (2) is applicable in the ranges of Equations (54a) and (54b). Equation (54a) is the region where an ordinary hydrogen atom electron exists. In Equation (54b), there is a system of an electron with negative mass and a proton with positive mass. Also, as is clear from Equation (8), the electron cannot penetrate into the region of Equation (54c). Therefore, $E_{\mathrm{re}}$ in Equation (54c) is likely not the energy of the electron but rather the energy of a virtual electron-positron pair making up the vacuum. However, the energy of a virtual particle pair is twice the energy of the virtual electron, and thus Equation (54c) can be rewritten as

$$-2m_ec^2 < E_{\mathrm{re,vp}} < 0, \qquad r < \frac{r_e}{4}. \tag{55}$$

There is no electron energy level in the region of Equation (54c), and thus the subscript n is omitted; the subscript vp on the energy indicates a virtual particle pair. Incidentally, potential energy does not exist in the region of Equation (54c). Therefore, if it is assumed that the energy-momentum relationship holds for virtual particle pairs too, it can be obtained by setting $m_0$ equal to zero in Equation (1); that is,

$$E_{\mathrm{re,vp}} = p_{\mathrm{vp}}c, \tag{56}$$

where $m_{\mathrm{re,vp}}$ indicates the mass of the virtual particle pair and $p_{\mathrm{vp}}$ its momentum. Dirac regarded the energy region of the vacuum to be

$$E_{\mathrm{re}} \le -m_ec^2. \tag{57}$$

In this paper, in contrast, the energy region of the vacuum in a hydrogen atom is predicted to be Equation (55).

2) The uncertainty principle is thought to be what guarantees the stability of the atom. According to the uncertainty principle, it is forbidden for the electron to approach the atomic nucleus ($r \to r_e/4$) and for the momentum to approach zero ($p \to 0$). However, the situation is different in the domain of Equation (54b): taking an electron as an example, if the electron approaches the atomic nucleus in the region of Equation (54b), then the momentum approaches zero (this is clear from Equation (2)); that is, $p_n \to 0$ as $r \to r_e/4$. From this it is evident that the uncertainty principle is not a universal principle, and there are limits on its application.

3) In Dirac's hole theory, when energy is supplied to the sea of negative-energy electrons comprising the vacuum around the atomic nucleus, a virtual particle acquires rest mass and is emitted as an electron into free space, while the hole opened in the vacuum is the positron (Fig. 1a). In the author's interpretation, on the other hand, an electron-positron pair is created because a γ-ray with an energy of 1.022 MeV gives rest mass to a virtual electron-positron pair at the position $r = r_e/2$ (Fig. 1b). Arrows in the figure show the change in particle energy; the end point of an arrow does not indicate the position where the particle was produced.

Conclusion In this paper, the departure point is Equation (2), and from there the relationships between physical quantities holding in a hydrogen atom were clarified as far as possible. The assumption (13) and the relationship (15) were powerful tools in doing so. Also, formulas such as Equations (19) and (20) were derived which hold with higher precision than Bohr's Formula (4) for the energy levels; when Equation (28) is taken into account, it is evident that their right sides are all different expressions of $\pm m_{\mathrm{re},n}c^2$. In addition, a formula with precision higher than Bohr's Formula (41) was derived for the orbital radius. Through this paper, it was possible to predict the values of physical quantities with greater accuracy than classical quantum theory by taking Equation (2) as the departure point. Due to the results obtained, the author believes that the correctness of Equation (2) has been demonstrated. The fact that the predictions of classical quantum theory are approximate values is already known; it is not the case that this paper raises objections to quantum mechanics. This paper concludes that quantum mechanics is not the only theory enabling investigation of the micro world.

Appendix A We consider the energy of the electron inside the hydrogen atom by referring to the logic given in textbooks (French, 1968). If the velocity of the electron is set to 0 in Equation (2), then the following equation of Einstein can be derived: $E_{\mathrm{re}} = m_ec^2$. The energies of the electron in the hydrogen atom take discrete values, and thus the subscript n is attached to the physical quantities on both sides of Equations (A3) and (A4); applying these to Equation (2), simplifying, and using the + value, we obtain Equation (11).

Appendix B In classical quantum theory, the hydrogen atom is explained using a model where an electron with negative charge rotates around a proton with positive charge due to the Coulomb attraction. If the atomic nucleus is assumed to be at rest because it is heavy, then the electron (charge e, mass $m_e$) is regarded as rotating at a speed v along a circular orbit with radius r, centered on the nucleus. The attraction which the electron receives from the proton is a central force, and the equation of motion can be expressed as

$$\frac{m_ev^2}{r} = \frac{e^2}{4\pi\varepsilon_0r^2}.$$

Combining this with the quantum condition which Bohr assumed, Equation (14), again yields Equation (13); the predictions of Dirac and of this paper agree to the sixth digit after the decimal point.

Figure 1. Differences between Dirac's hole theory and the interpretation in this paper.
An "ab initio" Model for Quantum Theory and Relativity The paper introduces a theoretical model aimed at showing how relativity can be made consistent with the non-reality and non-locality of quantum physics. The concepts of quantization and superposition of states, usually regarded as distinctive properties of the quantum world, can be extended to relativity as well.

Introduction The quantum theory and relativity have stimulated influential ideas and experimental efforts to investigate and understand a huge number of natural phenomena from the atomic to the cosmic scale [1] [2]. However, with space ranges spreading from ~10^-18 m to ~10^26 m, about 44 orders of magnitude, the attempt to unify the whole variety of related natural phenomena in the frame of a unique theory is understandably problematic. It is symptomatic that similar difficulties often arise even in formulating a more selective class of specific physical problems. In the case of relativity, for example, something relevant seems to be missing even at the merely cosmic scale; despite the great number of its predictions and discoveries, crucial topics remain problematic, such as the progressive acceleration of the universe's expansion, MOND (modified Newtonian dynamics), dark matter and dark energy. A possible hint for overcoming these difficulties is to identify an appropriate background of ideas that integrates or modifies the preexisting ones; for example, at the quantum level, the major problem of relativity is its link to the non-reality and non-locality of quantum theory [3]. Also, the "handwritten" cosmological constant reluctantly introduced by Einstein after Hubble's experimental hint is a further example of a necessary revision of general relativity even in its most representative cultural frame, i.e., cosmology. Is it reasonable to suspect that focusing greater attention on the existing conceptual background is not an additional difficulty but a possible solution? To provide a contribution to this problem, the paper [4] introduced an operative definition of space time, i.e., the estimated age and radius of the Universe, from now on quoted with the shortened notations $u_t$ and $u_r$, the Einstein cosmological factor Λ and the Hubble constant $u_H$. Two quantities of interest are the mass $m_{ob}$ detectable in the universe counting the stars only [5], and the critical density $\rho_{cr}$ of the Friedmann equations [6]; $m_{ob}$ refers to stars only because it concerns, by definition, stars whose light has in fact reached us during the lifetime of the Universe. The present theoretical model implements uncertainty ranges systematically to calculate quantum and relativistic quantities according to the logical step "local values → uncertainty ranges" (1.2). From a formal point of view this statement can be acknowledged by recalling the standard concept of measurement errors: just as no one trusts a single measured value beyond its experimental error bar, likewise (1.2) waives the significance of a local dynamical variable within its uncertainty range. Yet the true physical meaning of this replacement is one of the crucial points of the model, as will be shown more thoroughly below; some examples of calculated results are also reported in Section 6 to confirm the concepts exposed in Sections 1 to 5.
The uncertainty ranges are defined via the standard notation $\delta f = f'' - f'$, where $f''$ and $f'$ are the range boundaries, defined by two arbitrary values allowed to the concerned function; these values are arbitrary, unknown and unknowable by definition of quantum uncertainty. In general, both of them can be variables or constants. This notation also introduces the ratio $\delta f/\delta x$, which in turn takes physical meaning under appropriate conditions. A further way to implement the ranges is that already highlighted about δε, i.e. $\omega_{n'} \le \omega_n \le \omega_{n''}$, which means $n' \le n \le n''$ (1.9), where of course $n'$ and $n''$ are arbitrary and unknown integers. Shortened notations of this kind are used in the following. Note eventually that in principle both signs are allowed for any range; for example, nothing hinders writing $x = x_0 \pm \delta x$, with both $x$ and $x_0$ arbitrary. Sometimes in the following text a given result is obtained more than once in different contexts: this is not a redundant repetition; rather, it must be intended as a check confirming that all conceptual steps progressively exposed are consistently linked to each other. Despite the agnostic way of introducing (1.2), the remainder of this paper is able to formulate a self-consistent theoretical physical model. The text is exposed so as to be as self-contained as possible.

Preliminary Considerations This section introduces some considerations having the general character of straightforward corollaries of (1.1), to demonstrate that this definition of space time is physically sensible. All concepts introduced below are listed sequentially without calculations, while emphasizing their physical meaning; the validity of the various formulas inferred through the model is addressed in Section 6.

Energy and Energy Density in the Space Time Implement first the dimensional analysis of (1.1), (1.3) and (1.2), defining an energy ε and a length ℓ so as to introduce the concept of energy density η. Multiplying and dividing these defining equations side by side, one finds (2.1), having identified the ℓ pertinent to η with that defined by the second equation. Equation (2.1) yields two equations; owing to (1.4), the first states that the value of η depends on that of ℓ via fundamental constants only. Dividing both sides by δt, one finds that the Planck length appears as the smallest space range physically inferable through the definition (1.1) of space time, whereas the Planck time and energy can be nothing else but $t_P = \ell_P/c$ and $\varepsilon_P = \hbar/t_P$ according to (1.4), whence the Planck mass $m_P = \varepsilon_P/c^2$.
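For concreteness, the standard definitions of the Planck quantities consistent with this paragraph are recalled below; the text leaves them implicit, so these are the usual textbook expressions rather than formulas taken from the paper:

$$\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \text{m}, \qquad t_P = \frac{\ell_P}{c}, \qquad \varepsilon_P = \frac{\hbar}{t_P} = \sqrt{\frac{\hbar c^5}{G}}, \qquad m_P = \frac{\varepsilon_P}{c^2} = \sqrt{\frac{\hbar c}{G}}.$$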
Classical Newton law Since the relevant quantity in (2.2) has the physical dimensions of an acceleration, one can write

$$\varepsilon = \pm\frac{Gm_1m_2}{\delta\ell}, \qquad G = \frac{\text{energy} \times \text{length}}{\text{mass}^2}, \tag{2.11}$$

the double sign being in agreement with (2.5). The energy ε is here identifiable with the analytical form of the classical Newton law, if δℓ is regarded as the uncertainty range corresponding to the random distance between $m_1$ and $m_2$. First of all, with this dimensional information only, it is impossible to identify the respective role of either mass; otherwise stated, as $m_1$ and $m_2$ were both formally inferred from a unique $m^2$, the quantum uncertainty prevents distinguishing them (2.7). This property is a corollary of (1.2), not a postulate; as such it must hold in any reference system.

Quantum Uncertainty and Space Time The energy density (2.2) is inherent in the concept of space time according to its own origin (1.3) and has several implications, first of all the existence of a pressure $P_{st}$ internal to the space time volume $\delta x^3$, previously symbolized as length³; the subscript st stands for space time, to emphasize that it is inherently based on the definition (1.1) only. As sketched in Appendix A of [4], one finds the relation (2.20), where the relevant length is the reduced Compton length of m. It appears reasonable to assume that the range size δℓ is an integer number n of reduced Compton wavelengths, here regarded as the shortest wavelength relatable to one particle. All of this agrees with the results preliminarily obtained in Section 1. A full paper [7] has been devoted to highlighting the implications of (2.20), which in fact transfers n from its basic definition in (1.2) into the specific physical problem of a bound particle moving circularly around a central force field. On the one hand, (2.20) is justified by its direct corollaries (2.22) and (2.23), which however represent a particular case of boundary condition allowing steady wavelengths. While $\xi\,\delta\ell$ is uniquely definable as the radius of a circumference, it must be replaced by a combination of the minor and major semi-axes of an ellipse in order that (2.23) still describes an integer number of steady wavelengths along an elliptic perimeter. Various formulas are known in this respect, e.g. [8], that calculate this perimeter; subsection 5.7 will show how to infer through this reasoning the perihelion precession of orbiting planets. On the other hand, the definition (2.23) of momentum is merely formal, being based on a dimensional assessment compliant with the condition v < c via the arbitrary factor ξ. Nevertheless, in this reasoning too, energy quantization and the De Broglie momentum are contextual. It is shown soon below that the n of (2.17) is the refractive index of a dispersive medium. These considerations, crucial for the birth of the old quantum mechanics and here inferred as corollaries, suggest the necessity of defining in more detail the actual physical meaning of v.

Quantum Velocity and Space Time The steps introduced by (2.9) are significant: whereas c in (1.1) is self-evident, being a constant of nature, the problem now arises of clarifying further the physical meaning of the velocity modulus v as a property of massive particles moving through the space time. To introduce v from a first principle, note that (2.12) yields (2.26). Recall now the equation $n\lambda = 2\pi\,\delta r$ found in (2.23), whose physical meaning introduces a crucial condition on any mass $m_2$ orbiting according to (2.11) around $m_1$ at a constant distance of modulus δr: owing to the dual wave/corpuscle nature of matter, a steady $m_2$ wave is required to describe a stable orbit. At the right hand side appears the momentum $p_\lambda$ of a De Broglie wave delocalized within $2\pi\,\delta r$; at the left hand side appears the radial momentum range $\delta p_r$ of a corpuscle delocalized in δr. For example, in the case of (2.11) this means that the space gap δr between the masses $m_1$ and $m_2$ implies a steady wavelength λ along the orbital path of the running mass around the rest mass, which confirms the indistinguishability of gravitational and inertial mass introduced in Section 2.2.
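Making the wave/corpuscle correspondence of the last paragraph explicit, the steady-wave condition combined with the De Broglie relation reproduces Bohr-type quantization; this small derivation is spelled out here under the stated assumptions and is not copied from the paper:

$$n\lambda = 2\pi\,\delta r, \qquad p_\lambda = \frac{h}{\lambda} \;\Rightarrow\; p_\lambda\,\delta r = \frac{h}{\lambda}\cdot\frac{n\lambda}{2\pi} = n\hbar,$$

i.e. exactly the form $\delta r\,\delta p_r = n\hbar$ of (1.2), with the radial momentum range identified with the De Broglie momentum of the steady wave.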
Rewrite now $\delta p_r = p_r - p_{0r} = p_\lambda - p_0$ of (2.27) in vector notation, noting that $\delta p_r$ is the radial momentum range around the rest mass; this suggests that the orbit need not necessarily be circular, even admitting a constant $\delta p_r$.

Relativistic Outcomes The relativistic worth of these results follows straightforwardly, first of all because even the time is inherently involved by (1.2): multiplying side by side the first and third of (2.9) yields an energy-velocity relation and thus, owing to the first of (2.17), the velocity and relativistic momentum components $v_\ell$ and $p_\ell$. Note that $n^2$ appears here as a mere dimensionless multiplicative factor of $v_\ell$, which however cannot be specifically calculated because both δℓ and δt are uncertainty ranges, unknown and conceptually unknowable; so $n^2$ contributes to determining the resulting $v_\ell$, whatever its value might be. Moreover, writing these relations according to (2.12) recovers (1.2), merely with a different notation of the conjugate dynamical variables. These equations imply the indistinguishability of identical particles, because they actually concern the phase space rather than the particles themselves; in other words, it is impossible to distinguish electron 1 from electron 2 delocalized in a region of space time if nothing is known about them. Indeed it has been shown in (2.29) to (2.35) that, for example, momentum and energy are directly related to the range sizes (2.40), regardless of any hypothesis about the particles themselves. On the one hand, it is remarkable that the Newtonian definitions (2.9) imply the concept of uncertainty, thus confirming that even classical gravity is actually rooted in the quantum equations (1.2). On the other hand, the agnostic meaning of uncertainty, which implies lack of information about the boundaries of the ranges and about the local values of the dynamical variables allowed within them, is not a postulate but a corollary of the way (2.40) is introduced. If the ratio δf/δx is definable, then it is possible to introduce f itself without explicitly defining its reference system R, whatever the specific physical meaning of v and f might be. Nevertheless, the lack of a specific reference system does not in fact imply any ambiguity, as will appear in Section 6, where some numerical outcomes of the present model are calculated; rather, the physical meaning of v results from that of δx and δt, or of δε and δp themselves, according to (2.12). Although this physical model may seem too agnostic to infer valuable information, note that these conceptual premises have been enough to infer the fundamental (1.2) from (1.1), and even preliminary relativistic results. The remainder of the paper aims to show that just this conceptual agnosticism allows one to overcome the determinism of Einstein's general relativity and plug it into the elusive quantum world; the calculations will be carried out only after having adequately completed the theoretical frame introduced so far.

Uncertainty, Covariance, Simultaneity In general, the choice of the reference system R is crucial in any classical physical model that implements deterministic local coordinates; in Einstein's relativity, the equations are required to be invariant with respect to the reference systems, including the non-inertial ones.
Consider however a quantum problem formulated only via uncertainty ranges; in fact (2.41) shows that if all n_j* and n are arbitrary, then there is no direct correlation between range sizes of dynamical variables and reference systems, just because the former do not contain any information someway related to the latter. As it has been remarked for n, the only available information is that the product of two range sizes of conjugate dynamical variables must be quantized; i.e. both products n_j* n_j'* of (2.41) must yield an arbitrary integer whatever the local values of the respective dynamical variables j and j' might be. So, if the local coordinates are replaced by ranges that fulfill (1.2) and (2.41), then the existence of privileged reference systems is missing "a priori"; moreover it is easy to show that, by consequence, the requirement of the different form of equations in R and R' becomes inessential. The classical example reported in various textbooks, e.g. [10], of a point mass m tethered by a massless and inextensible wire, so that the mass moves circularly around a fixed coordinate, is instructive in this respect. This example becomes significant noting that if the wire is broken, e.g. by the centrifugal force itself, thereafter the motion of the mass is rectilinear uniform along the tangent to the circumference at the breakdown point. This is true in R with origin fixed on the rotation center of the mass. In R' fixed on the moving mass, instead, the mass is at rest; when the wire is broken the mass deviates from its initial path and follows a curved trajectory whose unique variable is the local energy. This latter however must be implemented solely via its uncertainty range, so the local disagreement a ≠ a' is irrelevant, as is the fact that a' includes various additional terms with respect to a. The agnosticism implied by (1.2) compels considering these ranges before and after the wire breakdown regardless of how the corresponding local accelerations are made up; in other words the chance δε = δε' of including the respective a and a', in principle possible because acceleration and energy differ by a proportionality factor, bypasses the necessity of discriminating R and R' to describe the tethered system with or without its breakdown. Since this holds for any uncertainty range by definition, in fact (2.41) waives the necessity of specifying either reference system for the dynamical variables. Accordingly it is possible to regard all ranges of (1.2) independently of their definition in a specific R: rather it is possible to introduce δx and δt independently of the local space and time coordinates; since the unprimed range sizes at the left hand side are arbitrary, the same must hold for the primed range sizes at the right hand side. Then it must be possible to implement identically both of them, regarding thus the lower boundary value 0 as a particular but not deterministic case. To infer physical information from these statements and check their validity, find the classical energy ε' of hydrogen-like atoms by implementing (1.2) for p_r'. Now it should be clear why range sizes and boundary coordinates are irrelevant as concerns quantum problems, as in effect has been demonstrated for various systems [11] [12].
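A minimal sketch of the kind of estimate just invoked (standard uncertainty-based reasoning with assumed notation; p ≈ nħ/δr is the textbook shortcut, not the paper's own algebra):

```python
import math

hbar = 1.054571817e-34
m_e = 9.1093837015e-31
e = 1.602176634e-19
k = 8.9875517923e9
Z, n = 1, 1

# E(dr) = (n*hbar)^2/(2 m dr^2) - Z k e^2 / dr; dE/d(dr) = 0 gives
# dr = n^2 hbar^2 / (Z m k e^2), i.e. the Bohr radius for level n
dr = n**2 * hbar**2 / (Z * m_e * k * e**2)
E = (n * hbar)**2 / (2 * m_e * dr**2) - Z * k * e**2 / dr
print(dr, E / e)   # ~5.29e-11 m and ~-13.6 eV for the ground state
```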
In particular it is not necessary to specify R centered on the nucleus; it is enough to state that nucleus and electron are δr apart; the radial range size is then defined by the non-deterministic Bohr radius via the integer 1 ≤ n ≤ ∞. The meaning of the coefficient 2 in the second (2.44) is thus clear: an electron r' apart from the nucleus has total radial delocalization range 2r'. In this model the quantum numbers are in fact numbers of quantum states. (ii) Relativistic implication: the invariant equations. Write (1.2) in squared form, so that, subtracting side by side, one finds the invariant c²δt² − δx²; this is particularly important, as it has been demonstrated in [13] to be the conceptual foundation of the special relativity. The operator formalism of quantum mechanics deserves special attention in this respect, since it regards the particles as waves from the beginning; the last equations have instead concerned the corpuscular properties of matter. The next section shows how to introduce in this conceptual frame also the wave formalism, in agreement with the corpuscular/wave nature of the particles.
The Wave Formalism
Since any integer n of (1.2) can be expressed as a difference of two integers n' and n'', one finds the corresponding pair of equations, having multiplied both sides by δψ'* and δψ''* with the purpose of obtaining again via (2.55) a real value of momentum consistent with the relativistic p of (2.49). Subtracting side by side (2.55) and writing the result explicitly, it is easy to acknowledge the analogy with the standard formalism. An identical reasoning holds of course here; trivial algebraic steps analogous to those from (2.55) to (2.57) yield a result compatible with the existence of states of negative energy. Moreover, (2.57) would consist of two primed and double primed functions of this kind. From these considerations, inferred as corollaries of (1.2) and (2.32) without need of postulates, the early wave mechanics and the modern quantum mechanics were born.
Relativistic Corollaries
This subsection examines four main implications of (2.2). Owing to (1.4) and (2.9), the definition (1.1) of space time appears compliant with the idea of a dynamic system characterized by matter, energy and forces; also, the equivalence of mass and energy of the special relativity inferred in (2.33) agrees with the feature of a space time characterized by the energy density η of (2.1), inherent to its definition (1.1). Without these results the space time would be an empty concept, unavoidably abstract and unphysical. Instead, for reasons shown in the appendix B, there exists in principle an outward pressure corresponding to the energy density η in (2.2), which in turn can be partially or totally counterbalanced by the attractive gravitational effect of matter/energy possibly present in a given volume of space time according to (2.32) and (2.33). The space time is therefore a dynamical system, in principle in equilibrium or non-equilibrium conditions, which evolves as a function of time. This point in particular, which anyway governs its dynamics, is now considered to justify the possible presence of mass in a volume δx³ of space time. Implement (1.1) to find a further result based again on a dimensional reasoning. Note the possible correlation between quantities having the same physical dimensions; m is an arbitrary mass confined and delocalized within the arbitrary size δℓ of an uncertainty space time range, thus without any chance of information about its exact position.
This section concerns just the physical conditions consistent with the delocalization of m in an uncertainty range, in agreement with (1.2).
Real and Virtual Mass
Two conditions on the correlation (3.1) are reasonably conceivable, here expressed as follows, ξ being an appropriate proportionality factor. (i) The first concerns the Lorentz invariance of both definitions (3.1): for the first one this condition is self-evident because it is a constant, for the second one the condition must be purposely required. Writing the transformed quantities owing to (2.35), and since both δℓ' and m' are Lorentz transformations of δℓ and m, it must be true that their product is unchanged. With the given definition of β, m is then the v-dependent dynamic mass corresponding to the rest mass m' defined in (2.33), whereas δℓ' is the space contraction of the proper length δℓ. It is significant that (3.3) confirms the result (2.36) obtained via (2.35). (ii) Consider now the limit of (3.2) for m → 0; it is reasonable to expect that this limit is nothing else but the definition (1.1) of empty space time. This limit ensures the consistency of the definitions (3.1) in agreement with the idea of m delocalized in δℓ: if no particle is delocalized, the range size is null. This suggests a further position by dimensional reasons. Note that, owing to the previous chain of equalities, whatever λ' might be, it is possible to define a corresponding δr' that must identically fulfill the condition (2.26), the primed notation being inessential. Hence (3.7) yields (3.9): on the one hand, multiplying and dividing the right hand side by m_o m, a Newton-like law still holds, but with positive sign, yet in principle still consistent with (2.11). On the other hand, examine (3.9), which holds in general: the last position holds because n' is due to the integer number of wavelengths consistent with 2πδr' according to (2.26), whereas n is clearly due to (1.2). In turn, via the Planck force, δr_bh takes the form of a zero point energy of a mass m oscillating with frequency ν_bh corresponding to (3.15) in its confinement range δℓ.
Invariant Equations of Special Relativity
Rewrite identically (1.3), with v ≤ c the modulus of an arbitrary velocity allowed in the space time containing mass, concerned in the previous subsection. In principle v could be the group velocity (2.25) of a wave packet propagating through a space time volume filled with dispersive medium, or it could be the expansion rate of the boundary of the space time volume compatible with (2.14), or eventually it could even be simply the velocity of a body of matter moving through the space time; it depends on how v is defined. To examine this point regard v as a possible velocity allowed in the space time, whatever it might represent in any reference system, and consider that (3.19), identically rewritten, describes the swelling of the early space time volume introduced in (1.3), here indicated as δx³, along with a factor depending on v and c. This equation is justified by (2.13), (2.14), (2.2) and (2.9) and will be further implemented also in the next subsection 3.5. Rewriting explicitly (1.1) as a function of v, the identity (3.19) accordingly becomes compliant with the space swelling rate during the time lapse δt, whereas (2.1) yields a result in principle compatible with (2.4) and (2.14).
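A numerical sketch of condition (i) (standard Lorentz factor; the pairing of a growing dynamic mass with a contracting range, leaving the product invariant, is assumed here as the natural reading of the prose):

```python
import math

c = 299792458.0

def gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / c)**2)

m_rest = 9.1093837015e-31    # rest mass, kg
l_proper = 1.0e-10           # proper range size, m
for v in (0.0, 0.5 * c, 0.99 * c):
    m = m_rest * gamma(v)    # dynamic mass grows
    l = l_proper / gamma(v)  # confinement range contracts
    print(v / c, m * l)      # the product is the same at every speed
```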
A corollary of (3.22) follows starting again from (3.21), according to which trivial manipulations yield, as shown in (3.3), the transformed ranges. The algebraic steps show that the Lorentz transformations and the invariant interval in inertial R and R', here introduced for simplicity via a one dimensional approach but immediately referable to a 4D formulation, are intrinsically inherent in the space time definition (1.1) and yield again the space contraction and time dilation of special relativity. Equation (3.27) is particularly important because it is shown in [13] that the invariant interval is the conceptual basis of the special relativity, whence the chance of obtaining in particular (3.28).
Relativistic Velocity
The results hitherto achieved compel explaining the concept of velocity. Multiplying both sides of (2.10) by an appropriate factor of v and c one finds, by consequence of (2.9) and according to (2.29), a relation for which it is possible to define the quantity that allows the last equation. Once more it is worth emphasizing the invariance; thus, taking the reciprocals of both sides, one proceeds as follows. First of all eliminate n; it could be put equal to 1 by definition, yet it is easy to follow a general procedure valid for any n, with the positions (3.35). Let us elaborate further this result in order to obtain a significant equation; the position (3.37), where ξ is an arbitrary proportionality factor, is useful in particular. With δV in principle arbitrary as well because of the uncertainty, as previously stated, this position is allowed. This step appears important rewriting (3.36) via (3.37) as (3.38), where the resulting V* is still an arbitrary velocity. This formula is actually well known: in special relativity the sum of c plus any velocity always returns c. This reasoning is not at all a redundant repetition of a result already known: (3.30) and (3.31) are quantum properties obtained contextually to (3.29), which is the invariant definition of relativistic momentum. Hence the reasoning implies a merging of quantum and relativistic results concurring to the definition of c as an invariant limit velocity: this crucial statement of relativity is here required by (1.2). Note that (3.38) has been obtained via V_r and V, which are arbitrary like the respective v_r and v but leave out n: i.e. the quantization is not essential to infer (3.38), as it has been emphasized while obtaining (3.36). On the one hand this explains why the relativity was formulated without suspecting the underlying quantization, which indeed appears hidden in (3.35) in the present model. On the other hand it means that the positions (3.35) are not merely formal, as is evident rewriting (3.33): the problem of (3.33) is that the left hand side vanishes for n → ∞, incompatibly with the right hand side, which never vanishes because δv ≤ c. Owing to (3.35), instead, at the right hand side of (3.33) appear just the velocities leading to the result (3.38) of actual interest, without contradicting the arbitrariness of n. So (3.38) completes the conclusion (2.18), where a well known quantum inequality was inferred just from a physical property of c; here also this property of c appears as a further corollary of (1.1) and (1.2). Once more, as already shown in further papers [4] [11] [12], relativistic and quantum principles appear in the present approach as harmonically coexisting concepts without "ad hoc" hypotheses.
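For reference, a minimal numerical sketch of the well known composition law just recalled (standard special relativity, independent of the present notation); c acts as a fixed point of the addition:

```python
c = 299792458.0

def add_velocities(u, v):
    # Einstein velocity addition for collinear velocities
    return (u + v) / (1.0 + u * v / c**2)

print(add_velocities(c, 0.5 * c) / c)        # exactly 1.0: c plus anything is c
print(add_velocities(0.9 * c, 0.9 * c) / c)  # ~0.9945, still below c
```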
Euler-Lagrange Equations and Gravitational Potential
Start from (1.2), which yields a first relation; also, recalling the considerations of subsection 2.6, let us define (4.1). Note now that (4.1) can be rewritten as (4.3); the step from (4.1) to (4.3) is not trivial. As anticipated in the subsections 2.6 and 2.7, the chance of exchanging the places of δx and δp fulfills the specific concept of derivative in the physical frame of the quantum uncertainty; in fact δẋ is a mere ratio of arbitrary ranges, finite by definition, to be regarded as independent differentials possibly but not necessarily tending both to zero. This subsection aims just to show that this way of intending the quantum derivative is physically sensible. Multiplying now both sides of the second equality (4.3) by δx, one finds (4.4). Define now a function f consistent with this result, i.e. such that δf fulfills in turn (4.4) and (4.5), consistently with the corresponding positions. As concerns the first equation, the first (4.2) yields the required form; hence, merging the last two results, one finds the sought relation. According to (4.4) the function f has physical dimensions length²/time and fulfills the same kind of equation as the Lagrangian of a physical system; in fact f is proportional to the Lagrangian, a multiplicative constant c³/G apart. Since f·c³/G is an energy, this is in principle just the sought Lagrangian. Yet the way to obtain this equation via the proportionality constant does not require additional conditions. The Euler-Lagrange equations are well known; yet the non-trivial fact is that they have been inferred here as corollaries of (1.2) and (1.1), which are the conceptual root of both relativistic and quantum physics, as illustrated in the sketch further below. Moreover this result supports the present way of regarding the concept of derivative as a ratio of uncertainty ranges. There follow now two checks of the present way of reasoning. (i) The classical Newton law inferred in the section 2.2 seemingly does not account for the finite propagation rate of any perturbation or interaction. Actually such information is explicitly available writing the force as follows. The force defined in this way is related to an energy bounded within its range, vanishes together with the space time range δx, is positive or negative depending on whether δx swells or shrinks as a function of time, and vanishes for δs → 0; i.e. the force is defined within δs ≠ 0. It reasonably means that a time range δt is necessary in order to allow its propagation at distance δs, outside which the force is null. The fact that δs has been defined via c means that the carrier of the force must be a virtual photon or a graviton or anyway a massless particle propagating at speed c. Note that instead the classical Gm₁m₂/δℓ² has the form of a force propagating instantaneously, because it is based only on (2.9) and (2.10) without implementing (1.2). Actually (2.11) itself avoids this error because it is expressed via the uncertainty range δℓ, not via the deterministic ℓ; since (1.2) inherently involve δt, (2.11) could have been written from the beginning as involving the time range δt governing its propagation. This holds of course for any force. As concerns gravity, note that also now it is possible to repeat the same reasoning: whatever the interaction might be, it is reasonable to regard its series expansion, whose first order term is a constant; if so, then, neglecting for the moment the higher order terms, it is possible to write a corresponding expression. This formula is formally similar to (2.11), yet it incorporates the idea of a non-instantaneous long range force, a point that worried Newton himself. The form of the higher order terms will be concerned later, see next (5.114).
(ii) Consider eventually that (2.12) yields, with the help of (1.2) and (4.2), a further relation. In summary it is possible to write this result as (4.11), where ϕ is a new function having physical dimensions velocity²; with vector notation the first equation reads correspondingly. This definition, inferred here as a corollary, was taken in [13] as a basis to infer special and general relativity; the sign of δϕ depends on whether the ratio of the two energy ranges is greater or smaller than 1. Moreover (4.11) also reads, according to (2.8), via a term a_x having physical dimensions of acceleration. It is immediate to acknowledge that ϕ of (4.11) is the definition of the gravitational potential [13], which will be more specifically concerned in the next subsection 5.5. Also, v̇ is not simply an acceleration by dimensional reasons; hence (3.20) yields a further relation. Summarizing this result via the definition of the reciprocal uncertainty range, there are in principle three chances. The first one is (4.16), with notation emphasizing that a is a constant. Further chances are also possible; for (4.15) the positions (4.14) appear natural. In the particular case (4.16) where a is constant, it is possible to write the corresponding form, whereas (4.15) becomes an expression involving the Laplace-like curvature radius of space time, with two principal curvature radii of either sign. In general the signs of these radii depend on the specific problem [14], e.g.: for a liquid droplet in a gaseous environment both are positive, for a gas bubble in a liquid environment both negative, for a liquid meniscus between solid cylinders with saddle-like geometry one positive and one negative. So it is not surprising that in principle all chances have been found in the present general approach, starting from first principles, as concerns the space time swelling. It is significant anyway that the concept of space time curvature is definable in a natural way even in the present quantum/relativistic context through the concept of uncertainty range. The crucial role of (1.2) in linking quantum and relativistic points of view is instructive in this respect. Consider two remarks, whose equations read in analogy with (4.19), the last equality being legitimated once appropriate quantities are defined by dimensional reasons.
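Closing these subsections, a minimal symbolic sketch of the Euler-Lagrange machinery recovered above (standard SymPy usage on a textbook harmonic oscillator, which merely stands in for the paper's f):

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')

# Textbook harmonic-oscillator Lagrangian: kinetic minus potential energy
L = sp.Rational(1, 2) * m * x(t).diff(t)**2 - sp.Rational(1, 2) * k * x(t)**2

# Euler-Lagrange: d/dt (dL/dx') - dL/dx = 0
eq = sp.diff(L.diff(x(t).diff(t)), t) - L.diff(x(t))
print(sp.simplify(eq))   # m*Derivative(x(t), (t, 2)) + k*x(t)
```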
The General Relativity as a Corollary
Some relevant concepts of general relativity are quoted in this section to show how to generalize the approach hitherto followed for the special relativity. Further significant implications of the quantum uncertainty ranges are examined in particular, to show how both special and general relativity contextually merge in a unique non-local and non-real conceptual frame. Some hints in this respect have been examined early in [16]; further topics are recalled here along with new considerations, just to point out what these typical concepts of quantum theory have to do with the gravitational field. Indeed the problem of quantum gravity involves not only the quantization of this field according to the distinctive concept of superposition of quantum states, but also the inherent concepts of non-reality and non-locality.
The So-Called "EPR Paradox"
It has been shown in the subsection 2.8 that the wave formalism is a corollary of (1.2) together with the relativistic properties inferred in subsections 2.5, 3.1 and 3.2; these results make in principle the present model compatible with the standard answer of wave quantum mechanics to the paradox. Yet, although it would be legitimate to skip additional comments given the ample literature already existing on this topic, it is instructive to emphasize the distinctive contribution provided to the paradox by the present model based on the quantum uncertainty. The following crucial points of this theoretical framework deserve attention: (i) the concept of uncertainty ranges replacing the local dynamical variables is in principle compatible with the concept of entanglement; (ii) the difficulty of superluminal distance is bypassed, because the deterministic concept of distance between physical objects is unphysical; (iii) the concept of non-locality reduces to that of unpredictable randomness of particles confined and delocalized in quantum uncertainty ranges and excludes any kind of local information; (iv) by consequence of (iii), the concept of "non-locality" is strictly related to that of "non-reality". Consider two particles, whose delocalization is in principle possible either in their own independent uncertainty ranges or in one shared uncertainty range. In the first case (1.2) holds identically for the two particles in their own uncertainty ranges. Hence there do not exist "spooky actions at a distance" but rather "actions at a spooky distance": once having renounced the classical determinism and accepted (1.2), there is no way to distinguish the behavior of particles far apart or close to each other, confined in a given delocalization range, whatever their interaction mechanism might be. Particularly interesting is the former case of two entangled particles born within a unique uncertainty range, for example when one γ photon decays into a pair by interacting with a nucleus or via vacuum fluctuation. Accordingly, in the conceptual frame based on (1.1) and (1.2) the EPR paradox shouldn't even be formulated: the present model is inherently non-local by definition. Moreover the agnosticism of (1.2), not purposely invoked here but assumed since the beginning as the unique leading idea of the present physical model, implies a conceptual gap in (3.1) between the elusiveness of (1.1) and the reality of (3.9); the former is a mere dimensional definition of the framework allowed for latent events, whereas the latter, made feasible by the measurement process, breaks the latency of possible events. Since nothing is "a priori" known about m_v and m_v*, e.g. number of pairs or energy and lifetime of pairs and so on, the present model is inherently also non-real by definition. In other words the physical agnosticism implied by the concept of uncertainty as hitherto exposed corresponds to the non-real essence of the quantum world before the experiment; hence one must accept the idea that also the relativistic properties hitherto inferred are subject to the same non-weird but logical consequences of (1.2), without need of postulating any "collapse" of the wave function into a well defined quantum state. It means that n, introduced in (1.2) and next appearing in (3.9), remains arbitrary and undefinable until the measurement converts it into a specific n_obs; in turn, the wave formalism allows calculating the probabilities inherent in the superposition of allowed states. Consider now the orientation of the possible spins of the particles with respect to an arbitrary direction. When measured, their spin orientations must yield a total angular momentum equal to zero, like that of the empty space time (1.1) before the vacuum fluctuation (3.11).
Physical information in this respect is provided only by the angular momentum conservation law, which however presupposes a measurement process. In general this is a perturbation that affects the quantum state of any particle. In particular, both particles being in the same δx, the measurement process perturbs the system of entangled particles wherever they might be, not either particle only. If for any physical reason the shared δx is modified, then the consequent δẋ implies δṗ and thus a force field in δx that in turn perturbs the couple of particles, regardless of any deterministic metric, whatever it might be. To confirm that all of these considerations hold also for the relativity, the next subsections concern a few selected topics purposely chosen to emphasize the role of the quantum uncertainty in the general relativity: the latter is in fact a corollary of the former. The most important point in this respect is the equivalence principle, which is examined first in the subsection below.
The Equivalence Principle
Two relevant results previously obtained, Equations (2.9) to (4.25), point directly to Einstein's equivalence principle, as explained through the simple reasoning early concerned in [9]: the reasoning is so crucial and short that it deserves being sketched here for completeness. Think of a space time uncertainty range in agreement with the second (2.19). Clearly this defines the space time Laplace-like curvature radius corresponding to the attractive gravity force of (2.11), as explained in (4.19). Once having expressed the deformation of space time in terms of the range size change rate δẋ, return now to the Einstein equivalence principle, considering for simplicity the change of x₁(t) only, with constant x₂; this is enough to account for the rising of a force field inside δx and to highlight the reasonable conclusions of two independent Newtonian observers sitting on either boundary of δx(t). The key points are: (i) the observer 1 sitting on x₁ experiences an acceleration, since his variable coordinate is defined with respect to the origin of R, i.e. this observer moves away from or towards the origin of R during the deformation of δx; (ii) the observer 2 feels anyway a force field inside δx although he is at rest in R. Therefore the observer 2 concludes that an external field is acting on δx, whereas the observer 1 acknowledges an acceleration, as if his position in R were perturbed by the force field in δx. Once more the consistent conclusion is that the space time deformation rate δẋ itself causes the rising of a force field, and that an accelerated reference frame is equivalent to such a force field. Only for δx → 0 does the force field appear as a local classical force. The role of the quantum uncertainty in this explanation of the concept of force is immediately evident, required by the physical equivalence of the boundary coordinates in lack of any discriminating information about their behavior: indeed δṗ is nothing else but a corollary of (1.2).
Quantum Angular Momentum
This topic has been concerned in [11] [16] [20]. Here only some selected reminders, useful later, are sketched for completeness. By definition the component of angular momentum along an arbitrary direction defined by the unit vector z is M_z = (r × p)·z, which reads correspondingly in the present conceptual frame. Let us sketch some properties of the quantum angular momentum, which will be useful in the next subsection, assuming that −L ≤ l ≤ L; i.e.
l ranges between the two allowed values −L and L, of course arbitrary, whereas (5.4) holds for any L, exactly as done in (1.9). The following considerations emphasize the reasoning carried out in [11], although here the steps to calculate M² differ slightly from those exposed therein: consider here that if M_z is the only component knowable, then M² must be somehow related to M_z only. Note that, since the angular quantum number l is actually a number of allowed quantum states, likewise n of (1.2), the idea is now that M² should be defined by its own quantum angular number of l states, and that in turn this latter is related to the sum over all l-th states allowed to its unique definable component. In effect, once having written this sum, the knowledge of the three components (5.6) reduces in fact to that of one component only; hence it is natural that this result coincides with that of (5.5) expressed in ħ² units and confirms (5.8). Follow now three important corollaries. (i) Replace M_z c with q², as both have physical dimensions energy × length; the same reasoning then yields the Coulomb-like result. (ii) A further corollary concerns the spin of particles and the Pauli principle. Note that the left hand side of (5.15) defines an angular momentum in fact allowed, so the term in (5.16) is a half-integer angular momentum due to ħ/2, which is clearly, by analogy with l, the component along an arbitrary direction of a new half-integer angular momentum. The spin of particles is concerned in more detail in [9], starting from (5.15). Here this topic is not further concerned for brevity, e.g. to show why actually J = L ± S; it is interesting instead to remark that the Pauli principle follows as a corollary of (5.16) [20]. This interpretation of the Pauli principle is a crucial consequence of the fact that l and L are not mere quantum numbers, but numbers of allowed quantum states, likewise n of (1.2). (iii) Consider the following definition of M_z, obtained having merged ζ with the arbitrary range size δℓ. This simple reasoning has defined via M_z the fine structure constant, the Coulomb law, the magnetic potential and the definition of the Planck charge.
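A numeric sketch of the counting argument above (the isotropy normalization M² = 3⟨M_z²⟩ is assumed here as the reading of the prose): averaging l² over the 2L+1 allowed states reproduces L(L+1) in ħ² units.

```python
from fractions import Fraction

def M2_over_hbar2(L):
    # average of l^2 over the 2L+1 states l = -L..L, times 3 for isotropy
    states = range(-L, L + 1)
    mean_lz2 = Fraction(sum(l * l for l in states), len(states))
    return 3 * mean_lz2

for L in range(5):
    print(L, M2_over_hbar2(L), L * (L + 1))   # the last two columns agree
```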
Black Hole
Consider (3.2) rewritten according to (2.20), where v is a velocity by dimensional reasons. Let us define now a dimensionless parameter ζ with the same physical meaning of (5.28). So the second (5.28) is the well known black hole radius. Note eventually an interesting corollary of (5.33), identically rewritten via a new arbitrary parameter z, not yet introduced in the present model, to be appropriately defined in agreement with (5.18). Simply renaming z as z = ±e, where both signs are compatible with z² = e², it follows that ±e is the electric charge, whereas the proportionality factor α, linking the Coulomb law hidden in this expression via ħc, is actually the fine structure constant. This last result is closely related to the results from (5.17) to (5.20) previously found. Emphasize now that the particular condition (5.24) is sensible, although it has been introduced preliminarily just in order for (5.23) to match (3.17) and not as a consequence of a fundamental requirement; yet (5.24) can be generalized while regarding (5.28) as a mere particular case. The key point is to replace (5.23) via a function of ζ and δr_bh, defined with an appropriate condition on both. On the one hand it is reasonable to assume that these equations concern two different properties of δr_bh; on the other hand it is also reasonable to guess that two properties of the mass m* can be charge and angular momentum, due to its possible rotation or to its possible spin, or both. Anyway, since the uncertainty ranges at the left hand side represent square lengths, it is immediate to conclude that the same holds for the right hand side terms; in other words, to include the charge terms it is enough to express the space range sizes that appear in the Coulomb law of (5.36). So, in Planck units, appropriate signs appear at the right hand side. This yields the well known result consistent with (5.14).
From Special to General Relativity
Rewrite the first (3.32); in this case write (5.40) accordingly: thus, apart from the factor in parenthesis appearing at both sides and thus irrelevant, (5.45) implies again the previous relations, which merge into (5.54). The left hand side of (5.54) diverges for n → ∞; however this is not a problem, being allowed by δt → 0. It is then possible to identify at the right hand side the Hamiltonian of the orbiting system, i.e. the Newtonian binding energy −ε_G, which agrees with the idea of the harmonic oscillator as a bound system itself. The minus sign of ε_G means that the force constant k_f defining the quantum oscillator frequency implies an attractive energy between the two orbiting masses; a repulsive energy would instead be inconsistent with steady quantum oscillations. As expected, whatever m₁ and m₂ might be, an appropriate n shows the actual quantization of the orbital motion: for large masses, n is so large that the quantization is hidden by the huge values involved; classically, ω² is proportional to m⁻¹ via the factor k_f. Moreover, the link between a one dimensional oscillation and a two dimensional system orbiting on an arbitrary plane implies 2 degenerate states, as the clockwise and counterclockwise rotations are both allowed and in principle indistinguishable; this also holds in the quantum world, where indeed l takes all values −n ≤ l ≤ n identically to −l. So the macroscopic measurable orbiting energy (2.11) is twice the value Gm₁m₂/(2δℓ) of (5.60). This degeneracy can also be regarded as a statement of equivalence between inertial and gravitational mass: the degenerate energies concern the systems where m₁ moves around m₂ or m₂ around m₁, depending on either reference system R₁ or R₂ where the respective mass is at rest. Without the equivalence principle, R₁ and R₂ would not be interchangeable, as instead is true according to the quantum (2.41). The macroscopic Newton law inferred from an oscillating quantum system reveals and requires the sought equivalence. 5) Implications of harmonic oscillations. As a closing remark consider now the following dimensional definitions, whose physical meaning will be highlighted by calculating their numerical values in section 6.
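To make the hidden quantization concrete, a small numerical sketch (Newtonian two-body relations with illustrative Earth-Sun numbers; the oscillator reading ω² ∝ 1/m is the one stated above):

```python
import math

G = 6.67430e-11
M_sun = 1.98892e30
hbar = 1.054571817e-34

def omega_orbit(m1, m2, r):
    # Newtonian circular two-body orbit: omega^2 = G (m1 + m2) / r^3
    return math.sqrt(G * (m1 + m2) / r**3)

# Earth-Sun-like system: hbar*omega ~ 1e-41 J against a ~1e33 J binding
# energy, so the quantum number n is astronomically large and unobservable.
w = omega_orbit(M_sun, 5.97e24, 1.496e11)
print(w, hbar * w)
```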
6) Gravitational waves. This subsection aims to sketch that the gravitational waves are actually quantized and fit the result inferred in (5.60); details on the physical model and results have already been concerned in a paper [16] on this topic. To add further considerations in this respect and highlight this point, let us start from the Einstein formula for the energy radiated by an orbiting system. Just this conclusion is the key to guess the dimensionless ωt_G, which appears to be just a correction factor: being δn integer, −δE can be nothing else but something like n'hν with n' integer, in order to fit (5.60) [21]. Also, as t_G is introduced via δε and is thus arbitrary, an appropriate position can be put; otherwise stated, ωt_G has been defined in order that (5.74) is consistent with (5.62). In synthesis, the initial Einstein formula, deterministic, becomes here a very simple quantum result, showing at the right hand side the number n' of gravitational energy quanta lost. Also here −δE expresses the fact that n' must be intended as n'' ≤ n' ≤ n''', with n'' and n''' of course arbitrary, once more according to (1.9). Although for brevity this result has been introduced here as a mere elaboration of Einstein's early achievement, reversing the steps from (5.74) to (5.70) one could find the initial δE, whose quantization is however hidden. The paper [9] concerns instead an "ab initio" model, where further implications of this result are also described: the Einstein formula actually describes quanta of gravitational energy dissipated by an orbiting system. In this quoted paper, published before the experimental evidence of the gravitational waves, it is remarked that the gravitational system does not necessarily have to collapse; rather, both signs possible for δn describe the exchange of gravitational quanta between orbiting systems. These quanta, possibly the so-called gravitons, could be regarded in principle in analogy with the electromagnetic excitation and decay of atoms by exchange of photons. This supports the idea of gravitons inherent in the gravity propagation rate (4.8). (i) The black hole radii are thus the limit of (5.40) for v/c → 1 and suggest an interesting feature of a bound gravitational system where either mass is a black hole; merging of their masses occurs when the event horizon of the latter approaches the Compton length of the former. (ii) It is usual to say that at the center of a black hole there is a space time singularity. Emphasizing that no singularity is explicitly required by or directly implied in the present conceptual frame, such a singularity is actually unknowable and thus unphysical: according to (1.2) and (2.41), by definition non-deterministic, no information is accessible about what happens inside an uncertainty range. Thus the concept of local singularity is merely an arbitrary extrapolation allowed in the classical world only; here instead the relativity is conceived in the quantum frame of (1.2). (iii) Via (5.59) one obtains an expression for the force constant k_f of harmonic oscillations (5.62), where a is a further definition of acceleration. This result, which clearly plugs the force constant of the quantum oscillator into the frame of the general relativity, will be further considered in the next subsection. (iv) The reduced Compton length enters the ratio below: clearly the right hand side is a constant, thus invariant by definition; hence the ratio at the left hand side is an invariant as well. It is known indeed that this ratio is defined by two invariant quantities.
Since the first (5.46) demonstrates that δx/δt is a relativistic invariant, it follows that the numerator is also invariant. The quantity c²δt² − δx², in particular, has been stated in [13] as the conceptual foundation of the special relativity; just for this reason it is remarkable that in the present model (5.46) and (5.90) are actually straightforward corollaries of the quantum uncertainty. The crucial difference between (5.90) and (5.86) is that now ε and pc appear through their uncertainty ranges and not as deterministic values. This result not only demonstrates the link between special relativity and quantum physics, but also allows further important steps concerning directly the general relativity. Although this point has been examined in several previous papers, see e.g. [4] [11] [12], the next section reports some relevant considerations just on this topic. Consider once more (5.83) for v₁ = v₂ = c, as already done to infer (5.86); the reasoning is still that already highlighted, but now extended to find a further interesting result. Write explicitly (5.83) with the help of (5.34): the additional term in (5.92), more general than (5.86) of Einstein's special relativity, is a known result of quantum gravity that helps solve three cosmological paradoxes [22]. More details about (5.92) are reported in [12].
Red Shift and Time Dilation
Starting from (2.10) and (2.9) consider ϕ [13]. Owing to the proper time t₀ < t, this result yields the time dilation δt due to the gravity field with respect to the case of null field.
Black Hole Entropy
Define the ratio of Planck length and mass.
Perihelion Precession
Consider the square ranges; an "effective" radius is now introduced, taking into account that the perimeter C of an ellipse is actually a function of its semi-axes a and b; an approximate formula exists, for example, which is reliable for the present purposes because for a = b it reduces to C = 2πa; the case b > a follows correspondingly. (ii) The second approach lucidly shows step by step how to infer classically this famous Einstein formula of the Mercury perihelion, simply comparing two forms of the potential energy of the orbiting system; the mathematical formulation introduces first the mere Newtonian potential U_N = β/r and then also assumes an extended potential form (5.114).
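For comparison, the standard first-order general relativity value that this reasoning targets (textbook formula, with Mercury's orbital elements assumed as illustration):

```python
import math

G = 6.67430e-11
c = 299792458.0
M_sun = 1.98892e30
a = 5.7909e10            # Mercury semi-major axis, m
e = 0.2056               # eccentricity
T_orbit_days = 87.969

# perihelion advance per orbit: 6*pi*G*M / (c^2 * a * (1 - e^2))
dphi = 6 * math.pi * G * M_sun / (c**2 * a * (1 - e**2))
orbits_per_century = 36525.0 / T_orbit_days
arcsec = math.degrees(dphi * orbits_per_century) * 3600
print(arcsec)            # ~43 arcseconds per century
```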
The notation emphasizes that the time derivative δẋ is actually regarded as a ratio of ranges, in agreement with the definition (6.11). It appears that the quantum energy ε_η directly implied by the Hubble constant is just that given by (2.2). This calculation is interesting as it emphasizes the direct link between the Hubble constant and the definition of space time (1.1) through the vacuum energy density, while also confirming (2.2) and thus the Newtonian (2.11) via the concept of acceleration (2.8). 6) Implement (3.23), where the two energies involved are in general time dependent and related to the initial energy density η. The physical meaning of these energies results particularly significant rewriting the first (6.26) as (6.27). The first equation is tested with the help of the universe timeline temperature vs time published by Fermilab and reported in [26]. This point has already been concerned in [20], where it is shown, through the plot of temperature vs time implementing the timeline data, that the best fit coefficient of the T vs time plot is actually a constant; i.e. also now the dependence of T appears via the ratio of mass and length. So, equating the dimensional definition (6.34) and (6.27) via an appropriate proportionality constant ζ, it is possible to write contextually a potential φ that yields energy per unit mass, i.e. the gravitational potential, which shows that the definition (6.52) of ζ is sensible. As stated in (5.70) and (5.71), the factor 4π fits well the numerical coefficient 64/5, the deviation being a few % only; so, with the definitions of ζ and m', (6.51) reads as nothing else but the Einstein collapse rate of two orbiting masses: δṙ has the correct relativistic form expected for the orbit radius contraction of a gravitational system due to its energy dissipation rate via gravitational waves, but now it is quantized in agreement with (5.74). Hence (6.49) is a sensible starting point to calculate the dynamics of a gravitational system compliant, at least in principle, with the general relativity. Of course here the reasoning has been simplified and shortened for the sake of brevity only; the aim of this last point is to justify the validity of (6.49) and its ability to define the black hole energy density, making such systems in principle possible regardless of the specific explanation about their actual formation mechanism/process.
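A numerical sketch of the collapse rate just mentioned (the standard general relativity expression carrying the 64/5 coefficient quoted above; masses and separation are illustrative):

```python
G = 6.67430e-11
c = 299792458.0
M_sun = 1.98892e30

def orbit_decay_rate(m1, m2, a):
    # circular-orbit contraction rate from gravitational wave emission
    return -(64.0 / 5.0) * G**3 * m1 * m2 * (m1 + m2) / (c**5 * a**3)

m1 = m2 = 1.4 * M_sun     # illustrative neutron-star-like masses
a = 1.0e9                 # separation, m
print(orbit_decay_rate(m1, m2, a))   # m/s; a slow, steady inspiral
```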
In other words, merging quantum theory and relativity into a unique conceptual frame is problematic because the two-way correspondence "deterministic metrics ↔ wave quantum theory" doesn't work. This conceptual gap is in turn due to the initial purposes of either theory: the wave quantum theory was in fact born to explain why the electron does not fall into the nucleus, the relativity to formulate a covariant approach to the laws of nature. The right direction to follow is thus to merge not the whole theories themselves, but rather their fundamental roots from which everything follows. It is intuitive that a physical frame able to account for the conceptual pillars of both theories will consequently also be able to account for their specific topics; in effect it has been easy to show throughout the exposition of the present model that relativistic and quantum outcomes are contextually inferred in a straightforward and simple way. Thus the strategy of the present model follows the idea of waiving the standard premises of both theories, not because they are wrong per se but because they are incompatible, at least in the usual form currently implemented: instead of thinking of an advanced relativistic formulation of problems into which to include successively also the quantum requirements, or vice versa, it seems more practicable to identify a common conceptual root to start with, in order to infer as natural corollaries the fundamental axioms of both theories. In principle it seems hard to abandon the deterministic metric able to formulate covariant laws of physics, although it conflicts with the Heisenberg principle and with non-locality/non-reality; likewise it seems equally hard to give up the corpuscle/wave dualism capable of explaining the tunnel effect, although it has seemingly nothing to do with the perihelion precession. In addition this preliminary intent is still not enough to outline the physical problem adequately; there is a further conceptual difficulty. Usually, the idea of quantum relativity recalls concepts like quantization, or gravitational interaction between particles moving at speed near c, or even superposition of gravitational states. In this respect nothing hinders in principle conceiving the actual corpuscles as waves: then, since F = ṗ, it is anyway possible to introduce p = h/λ and next to define the corresponding wave dynamics. Eventually, introducing the uncertainty (2.41), it is possible to proceed towards a gravity field valid in all reference systems. This outline of an alternative approach, shortly sketched as a corollary of the De Broglie momentum, would certainly allow an innovative relativity without insurmountable efforts. But unfortunately this is not the true crucial point: the classical mechanics or standard relativity could not fit the conceptual character of the quantum world without accounting for two features that no mathematical code could ever introduce: the non-locality and non-reality without which phenomena like the entanglement could never be explained or even conceived. Without these distinctive quantum features, the Bell inequality would be out of our reach; this inequality is instead a fingerprint of the gap between relativistic and quantum theories, as shortly sketched in Appendix C. Actually the most important problem is to obtain a non-local and non-real general relativity. These features seem oxymora when concerning real corpuscles that someway must be referred sooner or later to the Newton law, maybe as a particular case or limit condition. In other words the crucial point is either to make relativity non-local and non-real or to demonstrate that quantum physics is local and real. Yet the experimental data show that the second alternative is unphysical; so the only attempt to formulate a successful connection between the theories is the first chance, which however requires a new conceptual reformulation well beyond mere mathematical strategies. In this respect the results of the present model indicate that (1.2) are a simple and reliable candidate to account for both theories. The standard quantum mechanics implements the operator formalism that by definition is related to the wave behavior of particles; yet, to match relativity, it is more sensible to implement the corpuscular behavior of particles according to the uncertainty, which inherently implies both delocalized mass and wave behavior. This intuitive statement summarizes the basic idea on which the strategy of this paper has been conceived. Clearly the mathematical formulation of the theoretical model must be consistent with these premises. The universe implies the uncertainty. Indeed the mere definition (1.1) of space time takes implementable physical meaning when written first as in (1.3), which in turn provides physical information when rewritten further as in (2.1) and then as in (2.2) and next as in (2.29). Often the algebraic steps have been inspired by and based on initial dimensional relationships rather than on mathematical equations: the former are actually conceptual similarities, the latter prospect specific local values. This is for example the case of Equations (2.9) or (3.1) or (2.29); yet (1.3) and (3.20) are examples of how the dimensional premises turn into a physical formulation to be compared with experience. But just this comparison raises a further crucial point: the concept of measurement.
As in fact the strategy of the present paper has followed these ideas, the resulting (2.40) of the section 2.6 can be nothing else but agnostic relationships between ranges of dynamical variables preliminarily introduced in (1.2); strictly speaking their agnostic essence is a corollary of the initial abstract considerations, in turn based on the physical dimensions of the fundamental constants of nature. The physical kernel of these constants contains however all the ingredients necessary to "materialize" their dimensional implications: as a matter of fact the uncertainty ranges of dynamical variables inferred in this way, see e.g. (2.29) and (2.35) or (5.45) and (5.46), fulfill not only the same relativistic transformation properties of the local dynamical variables but also the Heisenberg requirement. The more general character of the present approach is then evident, further implemented according to its own properties in the various sections. On the other hand the uncertainty is regarded without having in mind only its original quantum implications; for this reason its immediate derivation from the operative definition of space time proposed in (1.1), as early exposed in [4], has been emphasized. In other words the uncertainty ranges have physical meaning, and not the random local dynamical variables. To link quantum and relativistic physics implies a conceptual cost; for example the Lorentz transformation does not read x' = β(x − Vt) and x = β(x' + Vt'), because the local time and space coordinates are unknown; it is only possible to consider δx → δx', i.e. to refer not the local coordinates but only the origins of the inertial reference systems. As a first remark about the quantum theory in this respect, note that the previous considerations are enough to bypass a wave-based quantum approach, as currently done, and to start instead from the totally agnostic (1.2), more general just for this reason; nevertheless the wave formalism and all its well known implications are in effect a straightforward corollary of the quantum uncertainty. As a further remark, it is worth emphasizing that in this model equations of motion to be solved are missing; yet this is natural because (1.2) skip even the probability of local position. So also concepts like "comoving coordinates" are useless, because the concept itself of local coordinate is missing, systematically replaced throughout this model by the physical concept of coordinate ranges; nothing is assumed known about these latter, while the same holds for any other dynamical variable. Nevertheless, just this agnostic approach has allowed obtaining in a straightforward way relevant outcomes of general relativity and the numerical results of the section 6, skipping crucial concepts like classically defined distances between objects. Although the present model waives concepts definable in the frame of a deterministic metric, the conceptual limit put by the uncertainty selects the allowed knowledge actually accessible to the observer; e.g. by this reason only one component of the angular momentum is physically definable. Without being aware of this conceptual limit, relativistic and quantum theories would remain incompatible with each other. Also note that the aim and formulation of the present model are in principle different from those of Dirac in describing the relativistic hydrogen-like atoms: in his model, Dirac implements known relativistic concepts to infer a wave equation consistent with the ideas already formulated by Einstein.
Here instead the fundamental principles of both theories are consistently co-generated "ab initio" in a self-contained way. The only common premise is that both (1.1) and in turn (1.2) merge together space and time, which therefore are meaningless separately: the former implicitly by dimensional reasons, the latter explicitly. Thus it is evident that (1.2) cannot imply any metric, i.e. the chance of defining lengths, angles and so on, just because the size and orientation in the space time of all uncertainty ranges are completely unknown by definition; nevertheless the conceptual physical formulation of vectors follows by extrapolating physical concepts, i.e. simply guessing the meaning of the dynamical variables corresponding to the ranges. It is nevertheless reasonable that, according to (5.30) and (5.94), in proximity of a massive object the local time run δt due to the gravitational field is slowed down with respect to the proper time range far from the position of the object.
Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper.
Appendix
By definition F is a force, while Δx_k and Δx are arbitrary displacements of the boundaries defining the initial volume that defines η, so that P_k and P are pressures, because S and S_k are surfaces; this holds for k = 3 in particular. In other words the energy density inside V is assumed to change at constant energy merely because of the change of the space time volume. Hence, taking the ratios side by side to eliminate F·δx, the first and last terms of the two chains yield an identity. This identity is inferred from and thus agrees with the initial (C.2). It is clear that, reverting the order of these steps starting from this identity, one finds again (C.2). Let now P_C take different values, in which case the equality no longer holds with the same values pertinent to P_A and P_B. In general, with the same reasoning repeated for the initial values, an initial discrepancy cannot result in a final identity through the same steps that have converted an initial identity into a final identity. Repeat now the reversed steps (C.5) to (C.2) starting from this inequality; it is clear that now one finds the corresponding inequality (C.3), rewritten in general via two inequalities. To understand whether (C.7) is consistent with (C.5) it is necessary to specify the properties P_A, P_B, P_C: the problem is comparing "determinism vs non-determinism". In this paper determinism has been referred to the existence or not of local space time coordinates; now this concept is extended to the properties of the values (C.1) and concerns the non-locality and non-reality of the quantum world. The problem of interest is thus to establish whether the physical properties of a system of particles exist "a priori" or are created by the interaction with the experimental apparatus that perturbs the initial unknown system. (C.1) have been written in order that the property P_C appears with its negated value only and the property P_A with the occurrence value A only, whereas B appears in both forms; to infer information about non-local and non-real systems in comparison with real and local systems, consider a possible scheme where B is a property like particle spin or photon polarization. In the case of spin let B and not-B represent the respective chances of spins paired or not.
The scheme implements the following attributions of values. At the right hand side of the inequality the quantum theory is concerned, which is both non-real and non-local; indeed the sections 2.6 and 2.7 have shown that the uncertainty requires these features contextually. At the left hand side any non-quantum theory is concerned, which is either non-local or non-real only. The symbol ≥ is understandable regarding the counts N(…,…) as probabilities, which is possible simply introducing a normalization factor to unity; obviously the sum of the probabilities of either property verified is higher than or at least equal to the probability of both properties contextually verified. The first column therefore represents the values A and B of the properties P_A and P_B in a non-quantum local theory, because it is non-real only, and thus without spin correlation; this correlation would indeed require a non-local spooky action to occur. In the second column the values B and C of the properties P_B and P_C represent again a non-quantum theory, because of its non-locality only, but now with spin correlation just due to its non-locality. The third column represents the quantum theory, which is both non-local and non-real. The symbol ≥ identifies the Bell inequality. Predetermined physical properties, typical of non-quantum physical theories, classical and even relativistic as well, fulfill the inequality: the determinism of relativistic metrics belongs to a classical vision of the universe, although enriched by the covariance of physical laws, the four dimensional premise and the invariant light speed introduced by Einstein. The violation of the inequality does not require the existence of "hidden variables" to bypass the difficulty of a superluminal action between particles. In fact, hidden variables are excluded in the present conceptual frame, based on (1.2) and bypassing the wave functions where these hypothetical variables could be somehow encoded. A further remark on the Bell inequality is that it reads
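For concreteness, a small self-contained sketch (standard textbook forms, not the paper's notation): predetermined properties always satisfy the counting inequality N(A, not-B) + N(B, not-C) ≥ N(A, not-C), while singlet spin correlations violate the related CHSH bound of 2.

```python
import itertools, math

# Classical check: exhaustively assign each particle definite values A, B, C.
for A, B, C in itertools.product([0, 1], repeat=3):
    lhs = (A and not B) + (B and not C)
    rhs = (A and not C)
    assert lhs >= rhs          # never violated with predetermined properties

# Quantum side: CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
def E(x, y):
    return -math.cos(x - y)    # singlet-state spin correlation

a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))                  # ~2.828 > 2: the classical bound is violated
```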
16,451.4
2021-01-01T00:00:00.000
[ "Physics" ]
Tracking Supermassive Black Hole Mergers from kpc to sub-pc Scales with AXIS We present an analysis showcasing how the Advanced X-ray Imaging Satellite (AXIS), a proposed NASA Probe-class mission, will significantly increase our understanding of supermassive black holes undergoing mergers -- from kpc to sub-pc scales. In particular, the AXIS point spread function, field of view, and effective area are expected to result in (1) the detection of hundreds to thousands of new dual AGNs across the redshift range 0 < z < 5 and (2) blind searches for binary AGNs that are exhibiting merger signatures in their light curves and spectra. AXIS will detect some of the highest-redshift dual AGNs to date, over a large range of physical separations. The large sample of AGN pairs detected by AXIS (over a magnitude more than currently known) will result in the first X-ray study that quantifies the frequency of dual AGNs as a function of redshift up to z = 4. Introduction Most massive galaxies are believed to have a central supermassive black hole (SMBH) with a mass of 10⁶-10⁹ M⊙, and classical hierarchical galaxy evolution predicts that the later stages of galaxy evolution are governed by mergers (e.g., [1]). As a result, galaxy mergers provide a favorable environment for the assembly of active galactic nucleus (AGN) pairs [2]. "Dual AGNs" are pairs of AGNs in the earliest phases of the galaxy merger, where the SMBHs are gravitationally unbound. They have typical separations <30 kpc and can be in a single galaxy or an interacting system (see, e.g., [3,4]). The SMBHs will sink toward the center of the stellar distribution on the dynamical friction time scale. For typical values of the physical parameters governing the system (such as the maximum impact parameter b_max = 5 kpc and v ≈ σ = 200 km s⁻¹), the inspiral time is only 3 Gyr, and any 10⁸ M⊙ SMBH sitting within ∼10 kpc of the center of a typical galaxy will spiral to the center within a Hubble time (a back-of-the-envelope check of this estimate is sketched below). These inspiral times are expected to vary as a function of the merging environment. They may be shorter for eccentric orbits, where the SMBH can pass through higher density regions with stronger drag forces [5], while computational analyses have shown that most minor mergers (e.g., with stellar mass ratios less than 0.1) will not result in close (<10 kpc) SMBH pairs forming within a Hubble time [6].
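A back-of-the-envelope check of the quoted inspiral time (the standard Chandrasekhar dynamical friction scaling with an assumed Coulomb logarithm ln Λ ≈ 5; none of these numbers are AXIS-specific):

```python
import math

G = 6.67430e-11
M_sun = 1.98892e30
kpc = 3.0857e19

def t_dynamical_friction(b_max, sigma, M, ln_lambda=5.0):
    # Chandrasekhar scaling: t_df ~ (1.17 / ln Lambda) * b_max^2 * sigma / (G M)
    return (1.17 / ln_lambda) * b_max**2 * sigma / (G * M)

t = t_dynamical_friction(5 * kpc, 200e3, 1e8 * M_sun)
print(t / 3.156e16)    # in Gyr: ~2.7, consistent with the quoted ~3 Gyr
```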
The system can evolve into an SMBH binary (SMBHB), the final stage of a galaxy merger, where the two massive host galaxies have likely been interacting for hundreds of megayears to gigayears [7]. The merging system is classified as a binary when the SMBHs are gravitationally bound in a Keplerian orbit, and for a wide range of SMBH masses and host galaxy environments, this occurs at orbital separations <10 pc [8-10]. As the last stage before coalescence, SMBHBs represent an observable link between galaxy mergers and gravitational wave (GW) events. The closest pairs (at sub-pc separations, or at ∼10^3-10^4 Schwarzschild radii) are strong emitters of low-frequency (nHz) GWs that are expected to dominate the GW background accessible to pulsar timing arrays (PTAs; [11]), which are sensitive to massive SMBHs (10^7-10^9 M⊙). They are also direct precursors to GW events detectable by future space-based laser interferometers, such as the Laser Interferometer Space Antenna (LISA), which are sensitive to massive black hole (MBH) binaries with 10^4-10^7 M⊙ [12-14]. The link between pairs, binaries, and GW astrophysics, and the importance of detecting more systems, has become increasingly strong with recent PTA results finding evidence of a stochastic GW background consistent with a population of SMBHBs (e.g., [15-18]).

In the following, we summarize how the Advanced X-ray Imaging Satellite (AXIS), a proposed NASA Probe-class mission, will strengthen our understanding of SMBH evolution via mergers, from kpc to sub-pc scales. Specifically, we present the current detection techniques for dual and binary AGNs and how AXIS will improve on them in Section 1; we summarize the dual AGN studies that will result from the planned AXIS AGN surveys in Section 2; we review the binary AGN science cases that AXIS will be most sensitive to in Section 3; we present the population statistical analyses we can carry out thanks to the large number of both dual and binary AGN detections in Section 4; and we summarize our conclusions in Section 5. Throughout the paper, we assume a ΛCDM universe, where H_0 = 69.6 km s^−1 Mpc^−1, Ω_M = 0.286, and Ω_Λ = 0.714.

Detecting Dual AGNs

Commonly measured empirical trends between the SMBH mass (M) and host-galaxy bulge velocity dispersion (σ) and luminosity (L), i.e., the M-σ and M-L relations, suggest that AGNs play vital roles in shaping the properties of galaxies across cosmic time [19-23]. Galaxy mergers are believed to be a key process supporting the various SMBH-galaxy scaling relations [24-26]. Theoretically, there are many reasons to expect a link between gas-rich, similar-mass mergers and the accretion of material onto at least one of the SMBHs (e.g., [2,25]). Tidal forces between galaxies can introduce gravitational torques that effectively dissipate the specific angular momentum of material from large-scale gas reservoirs and transport significant quantities down to scales at which SMBHs can accrete [27-29]. This can result in enhanced periods of SMBH growth, the regulation of the host galaxy's properties, and the emergence of relations such as M-σ and M-L (e.g., [30,31]).
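The assumed cosmology fixes the mapping between the angular separations the instruments resolve and proper physical separations. A quick sketch with astropy follows; the cosmological parameters are those quoted above, while the choice of redshifts is ours.

```python
# Sketch: convert the angular-separation limits quoted later in the text
# (AXIS ~1.5", BAYMAX-assisted analyses ~0.8") into proper physical
# separations under the paper's cosmology (flat LCDM, H0=69.6, Om=0.286).
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=69.6, Om0=0.286)

for z in (0.5, 1.0, 1.6, 2.0, 3.0, 4.0):
    kpc_per_arcsec = cosmo.kpc_proper_per_arcmin(z).value / 60.0
    print(f"z={z}: 1.5\" -> {1.5 * kpc_per_arcsec:5.1f} kpc, "
          f"0.8\" -> {0.8 * kpc_per_arcsec:5.1f} kpc")
# Near z ~ 1.6, where the angular-diameter distance peaks, 0.8" maps to
# roughly 7 kpc, consistent with the BAYMAX figure quoted later in the text.
```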
Detecting Binary AGNs

Despite the strong theoretical case for the existence of SMBHBs, their observational evidence has remained elusive. Currently, the only widely accepted SMBHB is at a projected separation of 7.3 pc (in the radio galaxy 0402+379 at z = 0.055), where the two nuclei are directly resolved via very long baseline interferometry [54] and their proper motion is statistically significant over the course of around a decade [76]. However, 0402+379 is not representative of the low-frequency GW sources emitting in the PTA or LISA band, since its separation is much wider and its GW inspiral timescale is much longer than a Hubble time. In fact, direct observations cannot resolve the vast majority of SMBHBs in the GW-dominated regime of orbital evolution (which approximately corresponds to centi- to milliparsec separations), and therefore, the electromagnetic (EM) search for SMBHBs requires indirect observations from which the presence of a binary can be inferred.

Intuitively, the orbital motion of a binary may imprint on the EM emission of the system as a periodic variation in the flux. This possible binary signature has in fact been studied extensively by analytic calculations and numerical simulations, and the physical mechanisms by which an AGN hosting an SMBHB (or a "binary AGN") can vary periodically include the BH-disk impact [77,78], modulated accretion (e.g., [79-84]), relativistic Doppler boost [85], and self-lensing [86]. Systematic searches for periodically varying AGNs in large optical time-domain surveys have yielded hundreds of binary candidates (e.g., [87-93]), while similar searches in X-rays have been less successful owing to the pointed nature of most X-ray observations and the depth and observing cadence of current surveys (e.g., [94]). Yet, X-rays are a more direct tracer of gas in the immediate vicinity of the BHs (the so-called "minidisks") at the inspiral stage (e.g., [95,96]), i.e., when the optical emission originates from further out in the system and may become decoupled from the binary motion (e.g., [97]). Hence, the most direct link between the growth of SMBHs and their mergers is best established (in the electromagnetic regime) with observations at short wavelengths (especially X-rays).
In addition to tracking binary-induced periodicity, which has also been predicted in the optical bands, X-rays can uniquely probe signatures such as X-ray spectral hardening [98,99] and double broad Fe Kα lines [100,101]. These signals are often accompanied by distinct emission in other wavebands, suggesting strong synergies between an X-ray telescope and other observatories, including optical ground-based time-domain surveys such as LSST (∼2025-2035). More excitingly, EM observations of SMBHBs will also enable multimessenger science in the low-frequency GW regime, which has recently been opened up by the PTA experiments (e.g., [15]). If this gravitational wave background originates from a cosmic population of SMBHBs [102], individual binaries could be detected as single sources by PTAs by ∼2030 [103,104]. In the mid-2030s, LISA will start probing GW sources in the mHz range, among which are the mergers of massive black holes (MBHs; ∼10^4-10^7 M⊙). These low-frequency GW detectors will prompt searches for EM counterparts in localized sky areas; at the same time, EM-detected SMBHBs can be used in the joint search for GW signals in PTA data or serve as "verification binaries" for LISA. The rich, multi-wavelength, and multi-messenger science of MBHBs and MBH mergers therefore demands a sensitive X-ray telescope operating at approximately the same time as the suite of EM and GW observatories in the 2030s. AXIS will have strong synergies with PTAs and LISA. For a summary of multi-messenger science opportunities with SMBHBs in X-rays, see [105].

The Power of AXIS for AGN Pair Studies

AXIS is set to play a significant role in astrophysics research in the 2030s. It will provide images with 1″-2″ resolution across a 24′ diameter field of view, and sensitivity ten times greater than that of the Chandra X-ray Observatory. These advanced capabilities will complement the James Webb Space Telescope (JWST) and upcoming ground- and space-based observatories, positioning AXIS as a key instrument for future X-ray studies (see [106] for more details). In particular, the AXIS point spread function (PSF), field of view (FOV), and effective area (A_eff) are expected to significantly strengthen our understanding of the X-ray activity of AGN pairs in ongoing mergers.

Currently, large-scale blind searches for X-ray dual AGNs are hampered by the strong dependence of Chandra's PSF on the off-axis angle (OAA). The shape and size of the High-Resolution Mirror Assembly's PSF vary significantly with the source location in the telescope field of view, as well as with the number of photons. For 0′ < OAA < 8′, the 90% encircled energy radius grows from ∼2″ to 6″. The point spread function becomes difficult to model above OAA values of 3′ [75], and consequently, off-axis point sources are frequently misconstrued as extended or as having a multi-component structure. On top of this, putative dual AGNs with angular separations >1″ are difficult to detect at OAA > 3′, as the angular separation becomes smaller than the semi-major axis of the PSF. Although the proposed on-axis angular resolution of AXIS (PSF half-energy width = 1.5″) is marginally larger than Chandra's on-axis angular resolution (PSF half-energy width = 0.8″), the field-of-view-averaged AXIS PSF is stable as a function of increasing off-axis angle (1.6″ up to OAA = 7.5′) and is significantly smaller than Chandra's field-of-view average (∼5″ up to OAA = 7.5′ on ACIS-I).
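To make the geometry concrete, a toy resolvability check is sketched below. The PSF-versus-OAA curves are crude linear interpolations of the numbers just quoted (and they mix half-energy-width and encircled-energy metrics), and the factor of 0.5 in the criterion is an illustrative choice; none of this is an AXIS or Chandra calibration product.

```python
# Toy model: is a dual AGN with separation `sep_arcsec` resolvable at
# off-axis angle `oaa_arcmin`? PSF sizes are rough linear interpolations of
# values quoted in the text; the "half a PSF" criterion is illustrative.
import numpy as np

def psf_arcsec(oaa_arcmin, telescope="AXIS"):
    if telescope == "AXIS":
        # HEW ~1.5" on-axis, ~1.6" out to 7.5' (stable across the FOV)
        return np.interp(oaa_arcmin, [0.0, 7.5], [1.5, 1.6])
    # Chandra-like: 90% encircled-energy radius ~2" on-axis to ~6" at 8'
    return np.interp(oaa_arcmin, [0.0, 8.0], [2.0, 6.0])

def resolvable(sep_arcsec, oaa_arcmin, telescope="AXIS", k=0.5):
    # Crude criterion: separation must exceed a fixed fraction of the PSF.
    return sep_arcsec >= k * psf_arcsec(oaa_arcmin, telescope)

for oaa in (0.0, 3.0, 6.0, 7.5):
    print(f"OAA={oaa:3.1f}': 1.5\" pair -> AXIS {resolvable(1.5, oaa)}, "
          f"Chandra {resolvable(1.5, oaa, 'Chandra')}")
```

In this toy picture a 1.5″ pair stays resolvable across the whole AXIS field but is lost by a Chandra-like PSF beyond roughly 3′ off-axis, matching the qualitative behavior described above.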
The AXIS PSF, coupled with A_eff(1 keV) = 4200 cm^2 and A_eff(6 keV) = 830 cm^2 (compared to ACIS at launch, with A_eff(1 keV) = 500 cm^2 and A_eff(6 keV) = 200 cm^2), and a 24′ diameter active field of view (compared to ACIS-I with a 16′ square field of view) will result in the detection of hundreds to thousands of new dual AGNs. A single 300 ks exposure with AXIS can yield a sample of 1000 AGNs for which blind dual AGN searches down to 1.5″ can be carried out. In comparison, in a 300 ks ACIS-I observation, fewer than 20 AGNs are expected to be detected within the region of the field where the PSF is <1.5″. In Figure 1, we highlight the differences between 300 ks observations of a dual AGN as viewed by Chandra and by AXIS, as a function of increasing OAA. The sensitivity of AXIS will also greatly strengthen our current detection techniques for binary AGNs. Through a blind search among a large number of AGNs and by targeting individual candidates with high sensitivity, AXIS can detect the merger signatures of binary AGNs. These include X-ray periodicities and transient signals in the light curves. AXIS's large effective area at 6 keV makes it sensitive to Doppler-shifted fluorescent Fe Kα lines in binary AGN candidates.

Observations of Dual AGNs via the AXIS AGN Surveys

The AXIS AGN surveys will result in the first X-ray study that quantifies the frequency of dual AGNs as a function of redshift up to z = 4. In particular, 10,000 X-ray AGNs detected within a deep and an intermediate survey field will yield hundreds of new dual AGN detections; including data from a serendipitous wide-area survey built from Guest Observer observations could increase the detection number to the thousands. AXIS plans to follow a "wedding cake" strategy to perform its extragalactic surveys: (1) a deep 7 Ms observation of a single AXIS pointing (∼0.16 deg^2, or ∼24 × 24 square arcmin); and (2) an intermediate-area (2.5 deg^2), intermediate-depth (375 ks exposure per pointing) survey. An AXIS serendipitous field built by combining Guest Observer observations (assuming 20 Ms of guaranteed non-galactic-plane time, with a median of 50 ks per pointing) could cover 50 deg^2 at a sensitivity of ∼10^−16 erg s^−1 cm^−2 (see [106] for more details on the surveys).
Detecting dual AGNs via a blind X-ray survey requires redshift measurements for each source. The survey fields targeted by AXIS will be strategically chosen in areas of the sky previously observed, including regions such as COSMOS, the Chandra Deep Fields, JWST fields, and areas covered by Roman and/or Euclid. The counterparts of the AGN pairs detected within the redshift range of our study (0 ≤ z ≤ 4) are expected to have magnitudes significantly brighter than the average flux limit of deep JWST surveys (m_AB(3.6 µm) ∼ 29). Cross-matching detected JWST sources with AXIS data will yield optical identifications, and spectroscopic redshifts can be obtained either from previous JWST measurements or through follow-up observations using the JWST Near-Infrared Spectrograph (NIRSpec) for fainter sources. Spectroscopic campaigns will also be initiated using 10 m and 30 m class telescopes, with instruments such as the Keck Multi-Object Spectrometer for Infra-Red Exploration (MOSFIRE) and the Subaru Prime Focus Spectrograph (PFS). Additionally, future grism spectroscopic campaigns conducted by Roman and/or Euclid can provide redshifts for the brightest sources.

We used results from end-to-end AXIS simulations with the Monte Carlo code Simulation of X-ray Telescopes (SIXTE; [107]), as described in [108]. Briefly, SIXTE simulates X-ray observations by modeling the arrival time, energy, and position of each photon based on the telescope's input parameters (i.e., effective area, field of view, point spread function, vignetting, read-out properties, redistribution matrix). The input catalog for the AXIS survey simulations was based on the [109] AGN population synthesis model. AGNs were simulated down to a limiting 0.5-2 keV luminosity and up to redshift z = 3. In the high-redshift regime (i.e., at z > 3, where the AGN space density starts declining), a mock catalog built from the [110] z > 3 AGN luminosity function was used. The catalogs are available online at http://cxb.oas.inaf.it/mock.html in FITS format, ready to be used within SIXTE.

From the mock AXIS AGN fields, we selected AGNs that met the following criteria: L_X(0.5-8 keV) > 10^42 erg s^−1; n, the number of 0.5-8 keV counts, >50; and OAA < 10′. We imposed these cuts to form a sample of AGNs in which dual AGNs could be reliably identified. For example, if n < 50, then assuming a standard flux ratio of ∼0.1 [33], the secondary would contribute <5 X-ray counts. Our luminosity and OAA cuts followed similar reasoning: below 10^42 erg s^−1, we may suffer contamination from bright X-ray binaries and/or ultra-luminous X-ray sources, while at OAA < 10′, the average AXIS PSF half-energy width (HEW) is 1.6″. After imposing these cuts, we had a sample of ∼10,000 X-ray AGNs. In Figure 2, we show the distributions of z and n for our AXIS sample.
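A minimal sketch of these cuts applied to one of the public SIXTE-ready mocks is below. The column names are hypothetical placeholders: the actual FITS schema of the catalogs may differ.

```python
# Sketch: apply the sample cuts described above to a SIXTE-style mock AGN
# catalog. Column names (LX_05_8, COUNTS_05_8, OAA_ARCMIN) are assumptions,
# not the documented schema of the public mocks.
from astropy.table import Table

mock = Table.read("axis_mock_agn.fits")  # e.g. a mock from cxb.oas.inaf.it/mock.html

sel = (
    (mock["LX_05_8"] > 1e42)        # erg/s: avoid XRB/ULX contamination
    & (mock["COUNTS_05_8"] > 50)    # secondary at flux ratio ~0.1 still has ~5 counts
    & (mock["OAA_ARCMIN"] < 10.0)   # average PSF HEW ~1.6" inside this radius
)
sample = mock[sel]
print(len(sample), "AGNs pass the cuts")  # ~10,000 in the text's mocks
```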
We compared these distributions to those of X-ray AGNs from publicly available wide and deep Chandra fields: X-UDS (Chandra imaging of the Subaru-XMM Deep/UKIDSS Ultra Deep Survey field; [111]), AEGIS-XD (Chandra imaging of the central region of the Extended Groth Strip; [112]), CDF-S (Chandra Deep Field-South; [113]), and the COSMOS-Legacy survey [114]. Here, we included AGNs that satisfied the following criteria: L_X(0.5-8 keV) > 10^42 erg s^−1, n > 50, and OAA < 4′. In particular, the Chandra PSF HEW at 4′ is close to 3″. The AXIS and Chandra samples differ significantly in size, redshift coverage, and X-ray counts. In comparison to the 10,000 X-ray AGNs detected by AXIS, the Chandra sample comprised 428 AGNs that spanned a shorter redshift range and had far fewer counts (with the majority of X-ray AGNs at z < 2.5 and with n < 200).

2.1. Quantifying the Rate of Dual AGNs to High-z

AXIS will observationally constrain the frequency of X-ray dual AGNs to within 3%, up to z = 4, quantifying how (or whether) mergers affect SMBH growth and galaxy evolution. If mergers play no role in enhancing SMBH growth, the expected frequency of dual AGNs is predicted to be below 3% at all redshifts [115,116]; however, large-scale cosmological hydrodynamical simulations that model the physics associated with SMBH accretion and mergers predict that the frequency of dual AGNs should be a factor of two higher [117,118], while nearby observations of dual AGNs suggest a fraction four times as high [71]. The AXIS deep and intermediate surveys will detect a sample of AGNs large enough to discern between a non-enhanced and a merger-enhanced fraction, down to angular separations of 1.5″.

There have been many optical searches for quasar pairs in the high-redshift universe, where tens of candidates have been identified (z > 1; e.g., [119-123]). Most recently, two of the highest-z dual AGN candidates (z > 5) were detected via optical spectroscopy and photometry [124,125], and new observational techniques that leverage the angular resolution of Gaia have been effective first steps toward detecting the dual AGN population at high-z (i.e., [126,127]). However, large surveys with wide-area coverage are necessary to find large samples of dual AGN candidates. A handful of large surveys in the optical regime have yielded constraints on the high-z dual AGN fraction. Ref. [128] analyzed double quasars resolved by the Hyper Suprime-Cam (HSC) on Subaru, where ∼100 dual AGN candidates were identified out to z = 4.5. Ref. [129] analyzed 60 Gaia-resolved double quasars to measure the quasar pair statistics at z > 1.5. Both studies found no evidence for evolution across redshift, but they measured significantly different dual AGN fractions (∼0.26 ± 0.18% vs. ∼6.2 ± 0.5 × 10^−4 %). On top of this, optical selection techniques for AGNs are affected by optical extinction and contamination from star formation, which is especially problematic when observing highly obscured mergers [3,52,53,65-69,130].
To date, most predictions of the dual AGN fraction at high-z, and as a function of z, have been carried out via cosmological simulations [117,118,131,132]. The assumed physics, the spatial and mass resolution, and the selection criteria for dual AGNs vary across simulations. In particular, the Magneticum Pathfinder simulations (Steinborn et al. [131]; box size = (182 cMpc)^3) resolve SMBH pairs down to 2-5 kpc; the Evolution and Assembly of GaLaxies and their Environment (EAGLE) simulations (Rosas-Guevara et al. [132]; box size = (100 cMpc)^3) resolve SMBH pairs down to 5 kpc; the Horizon-AGN simulations (Volonteri et al. [117]; box size = (142 cMpc)^3) resolve SMBH pairs down to 4 kpc; and the ASTRID simulations (Chen et al. [118]; box size = (369 cMpc)^3) resolve SMBH pairs with separations down to 4/(1 + z) kpc. Both Horizon-AGN and ASTRID include sub-grid dynamical friction modeling. A nearby observational constraint using nearby (z < 0.05) Chandra observations places the fraction of spectroscopically confirmed X-ray dual AGNs at 4.4^{+4.5}_{−2.2}% [71], and a high-z observational constraint analyzing Chandra survey data at 2.5 < z < 3.5 places an upper limit of 4.5% [75].

In Figure 3, we plot these two observational limits, as well as results from the Horizon-AGN and ASTRID simulations [117,118]. Both sets of simulation results have been derived specifically for AXIS observations, i.e., each AGN in a pair has L_bol > 10^43 erg s^−1 (Eddington ratios down to 0.1), and all dual AGNs have separations 1.5″ < r < 30 kpc (via private communication). We also show the expected fraction of X-ray dual AGNs assuming the observed X-ray incidence of single AGNs in galaxies [115]. We assumed that each dual AGN was undergoing a galaxy merger and weighted the X-ray incidence of a single AGN by the observed galaxy merger fraction [116] to derive the expected dual AGN fraction. Whereas the cosmological simulations include accretion physics introduced by galaxy mergers, this observed dual AGN fraction represents the statistical probability of detecting two X-ray AGNs in a galaxy merger, assuming that the probability of finding an X-ray AGN is not affected by the merger environment.

Using a subsample of 10,000 X-ray AGNs from the AXIS survey fields (see Section 2.1), binned into four redshift bins, we can statistically (at the 95% C.L.) discern between the predicted merger-driven and secular-dominated dual AGN fractions across 0 < z < 4 (see Figure 3). Error bars were calculated via a binomial error analysis and represent the 95% confidence interval. Interestingly, nearby observational constraints anchor the low-redshift X-ray dual AGN fraction at twice the value predicted by the merger-triggered accretion models in the cosmological simulations. Assuming that the X-ray dual AGN fraction scales similarly to those predictions, we may expect it to peak at values closer to 8% at z = 2. This would amount to detecting hundreds more dual AGNs than predicted by the cosmological simulations and would boost our population statistics.
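The binomial error analysis can be reproduced with an exact (Clopper-Pearson) interval; a minimal sketch follows, with made-up counts as placeholders.

```python
# Sketch: exact (Clopper-Pearson) binomial 95% interval for a dual-AGN
# fraction, k duals out of n AGNs in a redshift bin. The counts below are
# hypothetical placeholders, not numbers from the text.
from scipy.stats import beta

def clopper_pearson(k, n, cl=0.95):
    alpha = 1.0 - cl
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

k, n = 50, 2500   # e.g. 50 duals among 2500 AGNs in one bin (illustrative)
lo, hi = clopper_pearson(k, n)
print(f"fraction = {k/n:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```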
We emphasize that we will not be sensitive to the faintest and most closely separated dual AGNs, and thus our measurements will represent the dual AGN fraction for the most luminous and most widely separated systems. Simulations have found that the dual AGN fraction, both at a given redshift and as a function of redshift, depends significantly on the luminosity and separation limits of a given survey (see, e.g., [75,117]). However, quantifying the incompleteness of our expected measurements is complicated by the unknown underlying distributions of the flux ratios and separations of X-ray dual AGNs in our redshift bins. There has yet to be a large sample of detected dual AGNs beyond z > 2 for which population statistics can be measured.

A recent analysis using NIRSpec on JWST claimed a dual AGN fraction of ≈23% at 3.0 < z < 5.5 [133]. Taken at face value, this would imply detecting ≈7× more dual AGNs than expected from the predictions of cosmological simulations (see Figure 3). We caution that the dual AGN fraction presented in [133] likely represents a different population of AGNs than our sample, such that differences between the results may be expected. In particular, all four of the multiple-AGN candidates had angular separations of ∼1″ or less, corresponding to physical separations between 2.9 and 10.5 kpc. This physical separation regime is one that our analysis is insensitive to, and it may represent a different population of dual AGNs. Importantly, numerical analyses have found that dual AGN activity is enhanced in the last stages of galaxy mergers, when the two SMBHs are separated by less than 1-10 kpc [66,134,135], so the frequency of dual AGNs at low separations is likely to be enhanced with respect to that of their larger-separation counterparts.
Observations of Binary AGNs with AXIS

Progress in the past decade in numerical simulations of an SMBH binary embedded in a circumbinary disk has drastically advanced our knowledge of its configuration, accretion mechanisms, and expected EM output (see, e.g., [136,137] for reviews). These simulations have reached the general consensus that the binary torque carves out an empty "cavity" in the circumbinary disk, with a radius approximately twice the binary separation. Nevertheless, gas flows into the cavity through narrow streams and fuels the BHs via "minidisks", which should power the BHs sufficiently to radiate as luminously as regular AGNs. This distinctive configuration, which is coupled to the binary's orbital motion, produces a range of observational signatures that largely fall into one of two categories: variability and spectral features.

Periodicity, Chirping, and Merger Signatures

The orbital motion of the BHs can imprint periodic variations on the EM flux of the binary system via mechanisms such as relativistic Doppler effects [85], gravitational lensing [86], modulated accretion onto the binary (e.g., [81,83,138,139]), out-flung streams of gas hitting the cavity wall [95], and mass exchange between the minidisks [140]. In the late inspiral stage, where the binary orbit is rapidly shrinking, the periodicity may still be able to follow the increasing orbital frequency, producing an EM "chirp" (e.g., [95]). In order to distinguish these binary variability signatures from the much more common variability of regular (single) AGNs, at least two observational requirements should be met: (1) because of the rarity of binary AGNs, a survey should probe a sufficiently large volume (i.e., sky area and depth), sampling a large number of AGNs. Current observational and theoretical work puts the occurrence rate of periodically varying binaries at ≲10^−4 per AGN [91,141]; detecting binary AGNs would thus require a sample of at least ∼10^4 AGNs. (2) Because "normal" AGN variability is known to be stochastic and aperiodic, at least a few cycles should be sampled in order to distinguish true periodicity from a stochastic process [142] (see the periodogram sketch below). Within each cycle, the periodic variation should be tracked with a high sampling rate and high precision in order to faithfully characterize the variation and to distinguish it from stochastic variability. Depending on the SMBHB parameters, this corresponds to a sampling cadence of hours to days sustained over weeks up to decades.
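A minimal sketch of such a periodicity search on a synthetic light curve, using astropy's Lomb-Scargle implementation, follows. The period, amplitude, noise level, and cadence are all invented for illustration, and a real search would also compare against a stochastic (e.g., damped-random-walk) null model, which this toy skips.

```python
# Sketch: recover a periodic signal from an irregularly sampled light curve
# with a Lomb-Scargle periodogram. All light-curve parameters are synthetic.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0.0, 5.0 * 365.0, 200))   # ~5 yr baseline, 200 epochs [d]
period = 300.0                                    # assumed binary period [d]
flux = 1.0 + 0.2 * np.sin(2 * np.pi * t / period) \
         + 0.1 * rng.standard_normal(t.size)      # white noise stands in for errors

frequency, power = LombScargle(t, flux).autopower()
best_period = 1.0 / frequency[np.argmax(power)]
print(f"recovered period ~ {best_period:.0f} d (input {period:.0f} d)")
# Sampling a few full cycles is what makes the peak significant; a baseline
# shorter than ~2-3 periods would not separate this from stochastic noise.
```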
There have been searches in the Swift BAT dataset for X-ray periodicities, with a few possible candidates [143,144]. However, past work has shown that it is challenging to distinguish true periodicity from stochastic AGN variability, especially when the measurement errors are large, and the relatively small sample of BAT AGNs (∼10^3) may not be sufficient for discovering rare binary AGNs. There may be opportunities for discovery with eROSITA, which surveys a much larger sample of AGNs; however, its sampling rate may not be well matched to the periodic timescales of the majority of SMBHBs [94]. AXIS will remedy both issues, through a blind search among a large number of AGNs (see the next section) and by targeting individual candidates to sample any periodicities with high sensitivity. Observations of these periodicities (or EM chirps) will not only identify SMBHBs but will also enable us to study gas dynamics in extreme, time-variable spacetimes and the accretion disk structure of a binary BH system, by testing the predictions of (magneto-)hydrodynamic simulations. In certain binary models, intensive monitoring over a short time period can extract even more science: for example, binary self-lensing flares encode exquisite information about the binary disk structure and even BH shadow sizes that cannot be resolved by very long baseline interferometry [145]; this can be measured by sampling ∼10 percent of the orbit at a cadence equivalent to ∼1 percent of the orbital period.

Transient signals are also expected just prior to or shortly after the merger. For instance, in the final days before coalescence, the X-ray-bright minidisks shrink as the binary separation shrinks and are eventually disrupted, causing a sudden drop in X-ray flux around a few orbits before the merger, while the optical flux, which is dominated by the outer circumbinary disk, remains steady; the system then gradually re-brightens post-merger [97]. Other simulations also show that the rapidly inspiraling binary can decouple from the circumbinary disk, which may likewise cause a sharp decline in flux at short wavelengths [146]. The synergy here with an optical survey like LSST is clear: a candidate can be identified by the sudden disappearance of its X-ray flux accompanied by steady optical emission. More importantly, this signature can, in principle, be identified with as few as two observations [97] and could be the "smoking gun" signature of an MBHB approaching merger.

Spectral Hardening and Double Broad Fe Lines

As the accretion streams strike the minidisks, shock heating produces bright X-ray emission in excess of the conventional power-law X-ray spectrum of an AGN [98,99]. This spectral hardening signature may be identified by searching for excess luminosity in the X-ray energy range or by modeling the AGN X-ray spectrum, as in previous studies of individual sources with Chandra and NuSTAR [147-150]. Ref. [151] predicts that the all-sky number of sources exhibiting this signature is ∼10^2 at the 10^−13 erg cm^−2 s^−1 flux level, or ∼10^4 for sources that are ∼100 times fainter. Thus, identifying this type of binary signature in blind searches is feasible with the large number of AGNs obtained by combining the dedicated AXIS surveys and serendipitous observations. The latter, serendipitous field will reach a depth of ∼10^−16 erg s^−1 cm^−2 over an ∼50 deg^2 sky area [106], thus potentially yielding a large number of SMBHBs that display this signature.
Additionally, X-ray signatures of a binary may also originate from the minidisks themselves, which produce fluorescent Fe Kα lines. The line energies are expected to be Doppler shifted in opposite directions as a result of the changing radial velocities, producing a double broad Fe line feature that oscillates periodically with time [100,101]. A past study with Swift XRT revealed tantalizing evidence for such a signature [143]; however, since the spectrum was effectively integrated over a significant fraction of the putative binary period, the temporal information was lost. With AXIS's large effective area at 6 keV (830 cm^2), a double Fe line feature in a similar source could be distinguished from noise fluctuations in a "snapshot" observation (and, ideally, its oscillation would be captured over several visits per orbit). Meanwhile, resolving this broad line feature only requires a moderate energy resolution at 6 keV (∼0.1 keV), which is easily met by AXIS. Figure 4 shows an example where distinguishing between double- and single-line models at the same statistical significance level requires a ∼60 ks exposure with Chandra, but only ∼20 ks with AXIS. Similarly, AXIS is able to recover the energy of the second line with high precision, thereby constraining the physical parameters of the system; by contrast, Chandra would require three times the exposure time to achieve a comparable precision level.

Synergies with Other EM Observatories

These X-ray emissions from binaries are usually accompanied by signatures at UV, optical, and infrared wavelengths, offering opportunities to probe the same SMBHB source across the EM spectrum. For instance, theory predicts that excess X-ray emission can be produced by streams crossing the cavity and striking the minidisks; the same cavity is expected to cause a deficit at UV wavelengths due to the missing gas (e.g., [98]). Other examples can be found in the wavelength-dependent variability amplitudes or patterns (or achromaticity) predicted by binary models (e.g., [85,86,152]). In fact, not only are multi-wavelength observations beneficial for the study of SMBHBs, they are necessary in order to robustly distinguish binaries, because of the high occurrence rates of interlopers (namely, regular AGNs). Therefore, an X-ray telescope will be a powerful arbitrator of SMBHB candidates discovered by other facilities in other wavebands, in addition to being a potentially powerful discovery engine on its own. For example, a number of studies have been carried out with Chandra, XMM, or NuSTAR to obtain X-ray spectra and search for the predicted X-ray excess, or other peculiar features, in samples of SMBHB candidates displaying possible optical periodicity selected from ground-based time-domain surveys (e.g., [147,149,150]). Around 2032, the Rubin Observatory LSST will be well into its ten-year operation and will potentially have discovered around a dozen to a hundred periodically varying SMBHBs [141,153]. A similar follow-up study with an X-ray telescope such as AXIS will examine the nature of these periodic sources and place stringent tests on their binary hypothesis.
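Returning to the double Fe Kα signature: a quick order-of-magnitude sketch shows that a 0.4 keV line splitting, as simulated in Figure 4, is indeed what a ∼10^9 M⊙ binary at sub-pc separation would produce. The equal-mass, circular, roughly edge-on geometry assumed here (observed near maximum line-of-sight velocity) is our illustrative choice, not a statement from the text.

```python
# Sketch: infer the orbital velocity and Keplerian separation implied by a
# 0.4 keV splitting of the 6.4 keV Fe K-alpha line for a 1e9 Msun binary.
# Assumes equal masses, a circular orbit, and near-edge-on viewing.

G = 6.674e-11          # m^3 kg^-1 s^-2
MSUN = 1.989e30        # kg
C = 2.998e8            # m/s
PC = 3.086e16          # m

e_rest, de = 6.4, 0.4                  # keV: rest-frame energy, total splitting
v_los = 0.5 * (de / e_rest) * C        # each minidisk line shifted by +/- de/2
v_rel = 2.0 * v_los                    # equal masses -> relative orbital velocity
m_tot = 1e9 * MSUN
a = G * m_tot / v_rel**2               # Kepler: v_rel^2 = G M_tot / a
print(f"v_los ~ {v_los/1e3:.0f} km/s, separation a ~ {a/PC:.3f} pc")
# ~9,400 km/s per component and a ~ 0.01 pc: comfortably sub-pc, as quoted.
```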
Population Statistics with AGN Pairs

The majority of dual AGNs detected by AXIS will span physical separations below 20 kpc, a physical regime where merger-induced effects are believed to be important for SMBH growth [154]. In the nearby universe (z < 0.1), AXIS is capable of detecting dual AGNs at L_X > 10^41 erg s^−1 at r < 5 kpc; at z = 2, AXIS is capable of detecting dual AGNs at L_X > 10^42 erg s^−1 down to r = 12 kpc; and at z > 5, AXIS is sensitive to dual AGNs with physical separations r > 10 kpc. Assuming the dual AGN fraction follows the predictions of cosmological simulations, we expect the deep and intermediate AXIS surveys to detect ∼200 dual AGNs over 0 < z < 4. This detection sample is over an order of magnitude larger than the expected dual AGN yield from publicly available Chandra fields (∼10), assuming similar luminosity and X-ray count thresholds.

In Figure 5, we show distributions of the redshift and physical separation associated with a mock sample of dual AGNs from an AXIS deep (5 Ms observation of a single AXIS pointing) and intermediate (300 ks exposure per pointing) survey. We include redshifts and physical separations for a mock sample of dual AGNs detected via publicly available wide and deep Chandra fields (see Figure 2). Assuming the dual AGN fraction follows the predictions of cosmological simulations, we create a mock subsample of dual AGNs in each redshift bin with L_X > 10^40 erg s^−1. We assign a physical separation to each dual AGN, sampling from a distribution of physical separations measured for X-ray dual AGNs in the nearby universe [71]. We note that our ability to detect dual AGNs in a given AXIS observation can be amplified using available statistical tools. In particular, tools such as BAYMAX [72,73,155] can push analyses to angular separations of ∼0.8″ across a wide range of flux ratios, corresponding to a physical separation r < 7 kpc at z = 1.6 (where the angular diameter distance peaks).
Dual AGNs

Most recently, results from PTAs such as NANOGrav have found evidence for a GW background (GW frequencies between ∼1 nHz and 100 nHz, i.e., oscillation periods of months to a decade) [15]. The GW signal has been compared to simulations of various SMBH binary populations, and based on current measurements, the amplitude of the signal suggests that SMBHs may be (1) more common or (2) more massive than previously thought. An important component in breaking this degeneracy is a strong constraint on the overall SMBH hardening timescale. In particular, the final signal of binaries detected by PTAs is driven by mergers occurring at z = 0.3-0.8, which correlate with progenitor dual AGNs at <30 kpc scale separations at z = 1-3 (see figure 12 in [102]). Thus, constraining the frequency of dual AGN detections within z = 1-3, as a function of separation, will strongly inform future binary SMBH model inferences. In Figure 5, we show current dual AGN detections from Chandra, which are mostly confined to the nearby universe, and the expected dual AGN detections by AXIS. In particular, AXIS will detect some of the highest-redshift dual AGNs to date, over a large range of physical separations.

Binary AGNs

The discovery of approximately a few dozen SMBHBs at different orbital periods would provide an indirect test of their hardening timescales [156]. In the GW-driven regime, the residence timescale t_res = t_GW = −R/(dR/dt) is the time a binary spends at a given separation R or, equivalently, at the corresponding orbital period t_orb, and it scales with the period as t_GW ∝ t_orb^{8/3}. Since the number of binaries observed at a given orbital period is determined by the probability of catching them at that stage, N ∝ t_res/t_Q, where t_Q ∼ 10^7 yr is the typical quasar lifetime, this yields a simple scaling relation between the fraction of sources and their periods in the GW regime: f ∝ t_orb^{8/3}. Hence, with a sample of SMBHBs whose orbital periods are measured from, e.g., EM periodicity, one can test the steep t_orb^{8/3} scaling relation due to GW inspiral; by contrast, a shallower scaling relation would probe hardening driven by gas interactions (see the sketch below).
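A minimal numerical sketch of this scaling follows. The normalization and the reference period are illustrative choices, not values from the text; only the 8/3 power law and the t_Q ∼ 10^7 yr quasar lifetime are taken from the argument above.

```python
# Sketch: expected fraction of EM-selected binaries at a given orbital
# period under GW-driven hardening, f = t_res / t_Q with t_res ∝ t_orb^(8/3).
# The reference residence time (1e5 yr at t_orb = 1 yr) is invented purely
# to illustrate the scaling.

T_Q_YR = 1e7  # typical quasar lifetime from the text

def residence_fraction(t_orb_yr, t_res_at_ref_yr=1e5, t_ref_yr=1.0):
    t_res = t_res_at_ref_yr * (t_orb_yr / t_ref_yr) ** (8.0 / 3.0)
    return t_res / T_Q_YR

for t_orb in (0.5, 1.0, 2.0, 4.0):   # orbital periods in years
    print(f"t_orb = {t_orb:3.1f} yr -> f = {residence_fraction(t_orb):.2e}")
# Halving the period cuts the expected fraction by 2^(8/3) ~ 6.3, so
# short-period (PTA-band) binaries should be correspondingly rare in any
# EM-selected sample; a gas-driven regime would flatten this dependence.
```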
Conclusions

We presented an analysis showcasing how AXIS, a proposed NASA Probe-class mission, will significantly strengthen our understanding of SMBH evolution via mergers, from kpc to sub-pc scales. AXIS is set to play a significant role in astrophysics research in the 2030s. It will provide images with 1″-2″ resolution across a 24′ diameter field of view, and sensitivity ten times greater than that of the Chandra X-ray Observatory. These advanced X-ray capabilities will complement the James Webb Space Telescope (JWST) and upcoming ground- and space-based observatories, positioning AXIS as a key instrument for future X-ray studies. The analysis and results of our study are summarized as follows:

1. The AXIS AGN surveys (following a "wedding cake" strategy) will result in the first X-ray study that quantifies the frequency of dual AGNs as a function of redshift up to z = 4. Using mock catalogs of the AXIS deep and intermediate AGN survey fields, we found that a sample of 10,000 X-ray AGNs could be searched for duals, yielding hundreds of expected detections; including data from a serendipitous wide-area survey built from Guest Observer observations could increase the number of detections to the thousands.

2. With complementary redshift measurements for each source, we showed that AXIS will observationally constrain the frequency of X-ray dual AGNs to within 3%, up to z = 4, quantifying how (or whether) mergers affect SMBH growth and galaxy evolution. If mergers play no role in enhancing SMBH growth, we may expect the frequency of dual AGNs to be under 3% at all redshifts; however, large-scale cosmological simulations predict a dual AGN fraction twice as high. AXIS observations will allow us to statistically differentiate between the low- and high-end predictions.

3. Through a blind search among a large number of AGNs and by targeting individual candidates with high sensitivity, AXIS will be sensitive to the signatures of binary AGNs. These include X-ray periodicities and transient signals in the light curves.

4. AXIS's large effective area at 6 keV makes it sensitive to Doppler-shifted fluorescent Fe Kα lines in binary AGN candidates. In particular, we simulated a mock binary AGN (∼10^9 M⊙) at sub-pc separation with two broad iron lines (corresponding to an energy separation of 0.4 keV). We found that AXIS could constrain the energies of each emission line, confirming the binary, with a relatively shallow exposure (20 ks), which Chandra was unable to do with an exposure ∼3× as long.

5. The AGN pairs detected by AXIS will allow for statistical population analyses, as the detection sample of dual AGNs is expected to be over an order of magnitude larger than currently possible with Chandra. Assigning physical separations to our mock sample of dual AGNs, we expect to find mergers at a range of physical separations (4 kpc ≤ r ≤ 30 kpc) and redshifts (z ≤ 5). AXIS will detect some of the highest-redshift dual AGNs to date, over a large range of physical separations.

Figure 1. Dual AGNs as viewed by Chandra and AXIS. A dual AGN with L_X(0.5-8 keV) = 5 × 10^43 erg s^−1 at z = 3, with r = 1.5″ (11.8 kpc) and a flux ratio of 0.5 (the secondary AGN has a luminosity of L_X(0.5-8 keV) = 2.5 × 10^43 erg s^−1). We simulate a 300 ks observation with both Chandra and AXIS as a function of increasing off-axis angle (OAA), from on-axis (0′) to highly off-axis (7.5′). On-axis, Chandra observes 18 counts associated with the dual AGN, while AXIS observes 150. We show the size of the point spread function as a black dashed line. Given the stability of the shape and size of the AXIS point spread function, together with the enhanced effective area and field of view, a single 300 ks AXIS pointing has the sensitivity to detect over 20× more dual AGNs than possible with a similar Chandra pointing.

Figure 2. Properties of dual AGNs detected by AXIS. We show the distributions of redshift and the number of 0.5-8 keV counts (n) associated with a sample of AGNs from an AXIS deep (5 Ms observation of a single AXIS pointing) and intermediate (300 ks exposure per pointing) survey from which we could analyze the presence of a dual AGN (see Section 2 for more details). We included X-ray AGNs that met the following criteria: L_X(0.5-8 keV) > 10^42 erg s^−1; n, the number of 0.5-8 keV counts, >50; and OAA < 10′. We included z and n information for publicly available wide and deep Chandra fields: X-UDS (Chandra imaging of the Subaru-XMM Deep/UKIDSS Ultra Deep Survey field; [111]), AEGIS-XD (Chandra imaging of the central region of the Extended Groth Strip; [112]), CDF-S (Chandra Deep Field-South; [113]), and the COSMOS-Legacy survey [114].
Figure 3. The frequency of dual AGNs across redshift (dual AGN fraction versus redshift). If mergers play no role in enhancing SMBH growth, we may expect the frequency of dual AGNs to be under 3% at all redshifts (as estimated by the X-ray active fraction of galaxies and the observed galaxy merger rate; [115,116]). However, predictions from large-scale cosmological simulations (green squares from Horizon-AGN [117]; purple circles from ASTRID [118]) that model the physics associated with mergers and SMBH accretion predict a dual AGN fraction twice as high (between <1% and up to 4%), and nearby observational constraints anchor the low-redshift dual AGN fraction a factor of 4 higher (∼4% in the local universe). Previous X-ray analyses quantifying the dual AGN fraction at both low and high redshift (shown in brown) have resulted in non-detections and large error bars, due to the sample size of AGNs observed by Chandra [53,75]. We show the constraints that AXIS can place on the dual AGN fraction in black (error bars calculated via a binomial error analysis and representing the 95% confidence interval). With AXIS, we can statistically differentiate between the low- and high-end predictions and constrain the dual AGN fraction up to z = 4, measuring whether galaxy mergers enhance SMBH growth over cosmic time.

Figure 4. Binary AGN detections with AXIS. We simulated a mock binary AGN with two broad iron lines separated by 0.4 keV, which corresponds to a ∼10^9 M⊙ SMBHB at a sub-pc separation. The same spectrum is observed with Chandra and AXIS at 20 ks, 40 ks, and 60 ks exposures. Left panel: a more positive ΔC_stat indicates that the double-line model is statistically favored. The colored bands correspond to the respective 1σ distributions. For comparison purposes, the dotted line marks a nominal detection threshold corresponding to the approximate ΔC_stat achieved with a 60 ks Chandra observation, below which the double iron line model is not favored at high confidence. Right panel: the energy of the second line recovered by spectral fitting as a function of exposure time (the dashed line marks the true value). Chandra is unable to constrain its energy with 20 or 40 ks observations (consistent with the marginal detections shown on the left); however, AXIS can constrain the parameter to high precision with only 20 ks.

Figure 5. Redshift versus physical separation for dual AGN detections. Distributions of redshift and physical separation associated with a mock sample of dual AGNs from an AXIS deep (5 Ms observation of a single AXIS pointing) and intermediate (300 ks exposure per pointing) survey. We include redshifts and physical separations for a mock sample of dual AGNs detected via publicly available wide and deep Chandra fields (see Figure 2). Assuming the dual AGN fraction follows the predictions of cosmological simulations, we create a mock subsample of dual AGNs in each redshift bin. We assign a physical separation to each dual AGN, sampling from a distribution of physical separations measured for X-ray dual AGNs in the nearby universe [71]. For AXIS, a dual AGN is detected if the angular separation is larger than 1.5″, while for Chandra, a dual AGN is detected if the angular separation is larger than 0.8″.
iPSC-derived hepatocytes generated from NASH donors provide a valuable platform for disease modeling and drug discovery

ABSTRACT

Non-alcoholic fatty liver disease (NAFLD) affects 30-40% of adults and 10% of children in the US. About 20% of people with NAFLD develop non-alcoholic steatohepatitis (NASH), which may lead to cirrhosis and liver cancer, and is projected to be a leading cause of liver transplantation in the near future. Human induced pluripotent stem cells (iPSC) from NASH patients are useful for generating a large number of hepatocytes for NASH modeling applications and identification of potential drug targets. We developed a novel defined in vitro differentiation process to generate cryopreservable hepatocytes using an iPSC panel of NASH donors and apparently healthy normal (AHN) controls. iPSC-derived hepatocytes displayed stage-specific phenotypic markers and hepatocyte morphology, with bile canaliculi. Importantly, both fresh and cryopreserved definitive endoderm and hepatoblasts successfully differentiated to pure and functional hepatocytes with increased CYP3A4 activity in response to rifampicin and lipid accumulation upon fatty acid (FA) treatment. End-stage hepatocytes integrated into three-dimensional (3D) liver organoids and demonstrated increased levels of albumin secretion compared to aggregates consisting of hepatocytes alone. End-stage hepatocytes derived from NASH donors demonstrated spontaneous lipidosis without FA supplementation, recapitulating a feature of NASH hepatocytes in vivo. Cryopreserved hepatocytes generated by this protocol across multiple donors will provide a critical cell source to facilitate the fundamental understanding of NAFLD/NASH biology and potential high-throughput screening applications for preclinical evaluation of therapeutic targets.

INTRODUCTION

Non-alcoholic fatty liver disease (NAFLD) is a multisystem disease, associated with chronic liver disease as well as affecting extrahepatic organs and regulatory pathways. About one-fifth of NAFLD patients go on to develop non-alcoholic steatohepatitis (NASH), the most severe form of NAFLD (Spengler and Loomba, 2015). NASH is characterized by increased lipid accumulation in hepatocytes, coupled with liver fibrosis and hepatocyte ballooning (Takahashi and Fukusato, 2014). While often asymptomatic, NASH can progress to cirrhosis of the liver and liver cancer and is projected to overtake hepatitis C as the leading cause of liver transplantation in the near future (Parikh et al., 2019). NAFLD and NASH are associated with environmental factors such as diet and level of physical activity, and metabolic disorders such as type 2 diabetes are often comorbid with NASH. Genetic risk factors, such as the I148M polymorphism in the PNPLA3 gene (Romeo et al., 2008), are also associated with increased susceptibility. It is clear that the etiology of NAFLD and NASH is complex and involves various factors whose interplay is still poorly understood. Significant weight loss through lifestyle modification (Vilar-Gomez et al., 2015) or bariatric surgery (Talavera-Urquijo et al., 2020) has been shown to be helpful in the resolution of NASH. There has also been considerable research activity aimed at developing a pharmacologic intervention against NASH, with several compounds currently in clinical trials. These compounds have diverse mechanisms of action that generally focus on metabolic pathways disrupted in the disease state (Esler and Bence, 2019).
A strong interest in developing therapies for NAFLD and NASH has created an impetus for generating in vitro models to study NASH development and to evaluate prospective drugs. Human induced pluripotent stem cells (hiPSC), with their unlimited proliferative capacity and ability to differentiate into different cell types, provide a potential source for generating large batches of cryopreserved end-stage lineages for in vitro disease modeling applications. Indeed, hiPSC-derived hepatocytes have been generated to mimic different aspects of fatty liver disease (Parafati et al., 2018). In addition to hepatocytes, the liver also contains Kupffer cells, hepatic stellate cells, and sinusoidal endothelial cells. An optimal in vitro model should include multiple cell types to recapitulate the liver's complexity for disease modeling (Underhill and Khetani, 2019). NASH patient-derived cells can serve as a valuable tool in understanding disease progression and in drug development. This study included iPSC lines derived from donors with NASH along with apparently healthy normal (AHN) controls to develop a novel hepatocyte differentiation protocol. This protocol is robust, i.e., it performed consistently well across iPSCs from multiple donor backgrounds and yielded cryopreservable hepatocytes with a high purity of hepatic markers that recapitulated other features of hepatocyte functionality, including drug metabolism and formation of bile canaliculi. Hepatocytes produced by this protocol were amenable to co-culture with other liver-relevant cell types: macrophages, mesenchymal stem cells, and endothelial cells. When exposed to fatty acids (FA), hepatocytes produced by this protocol demonstrated dose-dependent intracellular lipid accumulation. While no difference in hepatic differentiation capacity and functional assays between cells from AHN and NASH iPSC lines was observed, end-stage hepatocytes from NASH donors revealed higher levels of lipid accumulation than those from AHN controls even in the absence of added FA, thus displaying a hallmark of NASH hepatocytes in vivo.

Development of hepatocyte differentiation protocol

Episomally reprogrammed iPSCs generated from healthy (AHN) donors and NASH patients were used to develop the differentiation protocol described here. The protocol evolved from several published hepatocyte differentiation protocols (Mallanna and Duncan, 2013; Peters et al., 2016; Siller et al., 2015; Takayama et al., 2012) (Fig. 1A) by examining and modifying media compositions and culture methods at each stage of differentiation. Throughout the development of the protocol, improvements were adopted to increase the consistency of the process utilizing iPSCs from different donor backgrounds, healthy or diseased. The different stages of the finalized hepatocyte differentiation process are captured in Fig. 1B.

Preconditioning with CHIR enhanced the generation of definitive endoderm (DE) cells across iPSC lines

The first phase of the differentiation process involved the generation of DE. iPSCs derived from AHN and NASH donors consistently yielded a pure population of DE cells, defined by the co-expression of CXCR4 and CD117 (Fig. 2A). Efficient DE induction was coupled with the decline of the pluripotency markers OCT4, NANOG, and TRA1-81 (Fig. 2A,B). Preconditioning of iPSCs with CHIR99021, a GSK3 inhibitor, either for 2 or 4 days prior to initiating DE differentiation, enhanced the efficiency of conversion of iPSCs to DE cells (Fig. 3A),
which further resulted in a high level of expression of both alpha-1 antitrypsin (AAT) and albumin in the end-stage hepatocyte cultures (Fig. 3B,C). There were no significant differences in outcomes between the cells preconditioned with CHIR99021 for 2 versus 4 days, and thus a 2-day preconditioning step was adopted as a routine step in the protocol.

CHIR supplementation improved the efficiency of hepatocyte generation

A beneficial effect of CHIR99021 supplementation was noted during the conversion of hepatoblasts to hepatocytes during Stage 2 of the differentiation process. Incorporation of CHIR99021 during Stage 2 resulted in a pronounced increase in the overall cell number, resulting in higher hepatocyte yields (Fig. 3D) and improving the overall process efficiency (the ratio of AAT+ cells to the number of cells at the end of DE induction) across multiple donor lines (Fig. 3E).

End-stage cells exhibited hepatic phenotypic characteristics

As the iPSCs progressed through the different stages of the hepatocyte differentiation process, an increase in the expression levels of the hepatic markers SERPINA1, ASGR1, and ALB (the genes encoding AAT, asialoglycoprotein receptor 1, and albumin) was quantified. The level of expression approached levels detected in adult human liver (Fig. 4A). End-stage hepatocyte cultures revealed a high purity (nearly 100%) of AAT-positive cells, with half or more of the cells co-expressing albumin (Fig. 4B). When placed on Collagen I coated plates at the end of Stage 2 and cultured in Stage 3 maturation media, the cultures exhibited cobblestone morphology with the presence of bile canaliculi.

[Figure 1 caption fragment: ... (Furue et al., 2008); hESF-DIF, human ESC differentiation media (Cell Science & Technology Institute, Inc.); Ad-FOXA2 and Ad-HNF1α, adenoviral vectors for transduction of transcription factors FOXA2 and HNF1α. (B) Schematic of the hepatocyte differentiation process. iPSCs from AHN and NASH donors were maintained in E8/Matrigel and acclimatized to hypoxic conditions. To initiate differentiation, iPSCs were expanded and preconditioned prior to starting DE differentiation for 10 days. The purity of DE cultures was assessed, and DE cells were transitioned to hepatoblasts (Stage 1). At the end of this stage of differentiation, the cells were detached to form aggregates and differentiated further to generate mature hepatocytes. Cells can be cryopreserved at indicated points during the differentiation process and successfully differentiated to live end-stage hepatocytes.]

In order to further assess the level of hepatic maturity, the expression profile of the nuclear receptor HNF4α was quantified. This receptor is a key regulator of numerous hepatic processes, and its expression is necessary for liver development. The gene encoding HNF4α, HNF4A, is under the transcriptional control of two distinct promoters, P1 and P2. P1 transcripts are characteristic of more mature hepatocytes, while P2 transcripts are characteristic of fetal hepatocytes (Babeu and Boudreau, 2014; Chavalit et al., 2013). P1 transcripts were predominantly detected in adult liver RNA samples and in the end-stage hepatocytes generated by the current differentiation protocol (Fig. 5A). Xenobiotic metabolism is an important hallmark of hepatocytes, and the profile of the enzymes responsible for xenobiotic metabolism changes between the fetal, neonatal, and adult stages. A well-known indicator of hepatocyte maturation is the switch in the dominant isoform of the CYP3A enzyme from CYP3A7 in fetal and neonatal hepatocytes to CYP3A4 shortly after birth (Lacroix et al., 1997).
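For clarity, the two quantitative process metrics used above (yield and efficiency) can be written out explicitly; the sketch below uses invented numbers purely as placeholders.

```python
# Sketch: the process metrics defined in the text. "Yield" is AAT+
# hepatocytes per ml of culture (volumes kept equal between conditions);
# "efficiency" is the ratio of AAT+ cells at the end of Stage 2 to the
# number of cells at the end of DE induction. All values are illustrative.
def yield_per_ml(total_cells, aat_pos_fraction, culture_volume_ml):
    return total_cells * aat_pos_fraction / culture_volume_ml

def process_efficiency(aat_pos_stage2, cells_end_of_de):
    return aat_pos_stage2 / cells_end_of_de

cells_stage2, aat_frac, volume_ml = 5.0e7, 0.95, 25.0   # hypothetical run
aat_pos = cells_stage2 * aat_frac
print(f"yield = {yield_per_ml(cells_stage2, aat_frac, volume_ml):.2e} AAT+ cells/ml")
print(f"efficiency = {process_efficiency(aat_pos, 2.0e7):.2f} AAT+ cells per DE cell")
```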
End-stage iPSC-derived hepatocytes exhibited a higher level of CYP3A7 expression than of CYP3A4 (Fig. 5B). Although the level of CYP3A4 expression was tenfold less than that in adult human liver, the cells from both AHN and NASH donors demonstrated a ∼twofold rifampicin-mediated induction of CYP3A4 activity (Fig. 5C). Taken together, the expression profiles of HNF4A and CYP3A indicate that the end-stage hepatocytes generated by this protocol are at an intermediate level of maturity between fetal and adult hepatocytes.

[Figure 3 caption fragment: (A) ... for 2 (2-day CHIR) or 4 (4-day CHIR) days, by flow cytometry; the data include an average of duplicate samples. (B) Quantification of AAT expression at the end of the process in lines from AHN and NASH donors without preconditioning (-CHIR) or with CHIR99021 preconditioning for 2 (2-day CHIR) or 4 (4-day CHIR) days, by flow cytometry analysis; the data include an average of duplicate samples. (C) Quantification of albumin expression at the end of the process in lines from AHN and NASH donors without preconditioning (-CHIR) or with CHIR99021 preconditioning for 2 (2-day CHIR) or 4 (4-day CHIR) days, by flow cytometry; the data include an average of duplicate samples. (D) Yields of AAT+ hepatocytes/ml of cell culture (culture volumes were kept equal between conditions), and (E) efficiency, defined as the ratio of AAT+ cells at the end of Stage 2 to the number of cells at the end of DE induction, in lines from AHN and NASH donors cultured in the absence (-CHIR) or presence (+CHIR) of CHIR99021 during Stage 2 of differentiation. AAT purity was quantified by flow cytometry. The graphs denote average values ± s.e. from three independent experiments.]

Hepatocyte recovery from cryopreservation

The ability to cryopreserve iPSC-derived hepatocytes greatly increases their experimental utility. To this end, cryopreservation of the cells was attempted at various time points during the differentiation process. Although cells cryopreserved at the very last step of differentiation exhibited poor recovery, cells were amenable to cryopreservation at earlier stages of the process. Cells frozen at the end of DE or Stage 1 recovered well after cryopreservation and successfully differentiated to end-stage hepatocytes. The cells typically exhibited >80% viability at thaw and typical hepatocyte morphology when plated onto Collagen I coated vessels (Fig. 6A). Moreover, they routinely progressed to end-stage pure hepatocytes with high AAT and albumin levels, similar to non-cryopreserved or fresh end-stage cultures (Fig. 6B). The cells generated at the end of Stage 2 of the differentiation were only moderately amenable to cryopreservation across different donor iPSCs. Hence, intermediate cell populations cryopreserved at the end of DE or Stage 1 differentiation offered a more consistent option for cryopreservation with AHN and disease-specific iPSC lines.

Formation of liver organoids

The liver is composed of epithelial cells (hepatocytes and cholangiocytes) that work together with stromal cells, endothelial cells, mesenchymal cells, and Kupffer cells to perform crucial metabolic functions (Cotovio and Fernandes, 2020; Lee et al., 2020). Organoid cultures recapitulating this complexity have emerged as a useful in vitro system to model tissue behavior in a dish.
The ability of the hepatocytes to survive and function in a co-culture model was evaluated in the presence of isogenic iPSC derived mesenchymal stem cells (MSCs, precursors of hepatic stellate cells), macrophages (Kupffer cell analogues), and endothelial cells from normal and NASH specific iPSCs. The isogenic cell types were derived using protocols used to generate highly pure populations of mesenchymal stem cells, macrophages, and endothelial cells (Fig. 7). Hepatocytes used in the co-culture studies were recovered from end of Stage 1 cryopreservation, and the non-parenchymal cell types used for the co-culture studies were recovered from cryopreservation and adapted to Stage 3 media. At the end of Stage 2, hepatocytes were placed in a three-dimensional (3D) co-culture with isogenic stellate-like and Kupffer-like cells at physiologically relevant ratios. The organoid cultures generated from AHN and NASH specific iPSCs remained intact and alive for 10 days (Fig. 8A) and maintained hepatic functionality by secreting albumin at higher levels compared to hepatocyte monoculture aggregates (Fig. 8B). End-stage hepatocytes manifest a NASH phenotype in vitro Excessive lipid accumulation is a hallmark of NAFLD and NASH. Several cell culture based models of NAFLD/NASH have been described recently, including those using iPSC derived hepatocytes, where a NASH-like phenotype was induced by exposure of the cells to increased lipid levels (Parafati et al., 2018). Evaluation of iPSC derived end stage hepatocytes as a model of NAFLD/NASH was performed by quantifying lipidosis after FA supplementation. End stage hepatocytes derived from both AHN and NASH donors displayed a dose-dependent increase in intracellular lipid accumulation when the cells were exposed to a combination of oleic and linoleic acids. Interestingly, hepatocytes derived from NASH donors displayed spontaneous accumulation of intracellular lipids in the absence of exogenous FA supplementation, while hepatocytes from the AHN donor did not (Fig. 9). Thus, hepatocytes differentiated from NASH donors successfully preserved and recapitulated steatosis, one of the key features of fatty liver disease, under in vitro conditions. DISCUSSION We have developed and validated a novel differentiation protocol for deriving hepatocyte-like cells from iPSCs from both AHN donors and NASH donors. The hepatocytes produced by this protocol displayed the hallmark phenotypic features of hepatocytes: high purity based on the expression of hepatic markers such as HNF4α, AAT, ASGR1 and albumin, CYP3A4 activity induction in response to rifampicin, typical hepatic morphology when plated onto collagen, and formation of bile canaliculi (Figs 4 and 5). Based on the ratios of CYP3A4 to CYP3A7 and of P1 to P2 transcripts of HNF4A (Fig. 5), the cells produced by this protocol are intermediate in their maturation level, falling between fetal and adult hepatocytes. GSK3 inhibitor CHIR99021 supplementation enhanced the purity of DE cultures and enhanced the expansion/yield of cultures during Stage 2 of differentiation (Fig. 3), thus contributing to an enhanced conversion efficiency of iPSCs to end stage hepatocytes. GSK3 inhibition potentiates canonical Wnt signaling to generate DE from iPSCs (Loh et al., 2014). GSK3 inhibition by CHIR99021 has been shown to promote exit from pluripotency (Teo et al., 2014), which, in turn, has been shown to improve the quality of hepatocyte differentiation of hiPSCs (Czysz et al., 2015). 
CHIR99021 supplementation has also been shown to promote cell proliferation during differentiation of cardiac and neuronal cell types from pluripotent stem cells (Fan et al., 2018; Pachenari et al., 2017), as well as to promote expansion of primary human hepatocytes in 3D cultures (Peng et al., 2018). The hepatocytes produced by this protocol were readily cryopreservable at intermediate stages of differentiation, and displayed properties similar to non-cryopreserved or fresh hepatocytes (Fig. 6). There are numerous published protocols for hepatic differentiation of human embryonic and pluripotent stem cells (Fig. 1A and Toba et al., 2020), but these lack robustness of performance across donor lines and for the most part yield boutique quantities of cells at the end of the process. The protocol described here routinely produces tens of millions of cryopreserved cells across multiple lines from AHN and NASH donors. Hepatocytes produced by this protocol, from both AHN and NASH donors, displayed increased lipid accumulation in response to FA exposure (Fig. 9). Interestingly, the hepatocytes from NASH donors exhibited spontaneous lipid accumulation in the absence of FA supplementation, mimicking a feature of in vivo NASH hepatocytes. This study describes the first in vitro differentiation protocol for generating hepatocyte-like cells from NASH iPSCs while preserving the NASH phenotype. The hepatocytes produced by this protocol were able to successfully integrate into 3D liver organoids with macrophages, MSCs and endothelial cells, and these aggregates maintained their hepatic functionality for at least 10 days (Fig. 8). This is of particular advantage for modeling NAFLD and NASH. Working with animal models of the disease still poses challenges in identifying those best mirroring human pathology (Lau et al., 2017). While a NASH-like phenotype can be induced in monolayer cultures of primary human hepatocytes and iPSC derived hepatocytes (Parafati et al., 2018), such cultures rapidly decline in their performance owing to spontaneous loss of xenobiotic metabolism capacity and hormone responsiveness (Berger et al., 2015; Mazza et al., 2015). 3D hepatocyte monocultures recapitulate in vivo biology more faithfully than monolayer cultures (Sengupta et al., 2014). Hepatic co-cultures have been shown to model NAFLD and NASH more accurately still, and are emerging as the system that most closely mimics the disease in vitro (Berger et al., 2015). Importantly, the NASH patient iPSC lines used in this study can be differentiated into analogues of Kupffer cells and hepatic stellate cell precursors (Fig. 7), which together with the hepatocytes described here will provide a useful source of isogenic cells for organoid formation. Lot-to-lot variability of primary human hepatocytes is a recognized issue in their use as a model, and to this end commercial sources of 3D liver organoids typically combine cells from multiple donors (Kaserman and Wilson, 2017; Kermanizadeh, 2019; Kermanizadeh et al., 2019). While such models are very useful in disease modeling and toxicology testing, donor-to-donor variability in the sources of non-parenchymal cells, especially Kupffer cells, negatively affects their performance, and donor matched hepatocyte-Kupffer cell co-cultures display a higher sensitivity in hepatotoxicity studies than donor mismatched ones (Tasnim et al., 2019). 
In summary, we developed and tested a novel defined process for producing pure and cryopreservable hepatocytes, along with the accessory non-parenchymal cell lineages, from episomally reprogrammed iPSCs derived from healthy and NASH donors. These end stage cryopreserved cell types, alone or in combination and generated in large quantities, will be an ideal tool set for preclinical evaluation of therapeutic targets for NAFLD/NASH. Cell lines AHN iPSCs from donor line 01279 were developed by FUJIFILM Cellular Dynamics, Inc. (https://hpscreg.eu/cell-line/CDIi001-A). Several NASH and AHN iPSCs were purchased from the California Institute for Regenerative Medicine (CIRM) iPSC repository. CIRM donor identifications are listed in Table S1. Cell culture iPSCs from AHN and NASH donors were maintained in Essential 8 (E8) media (Thermo Fisher Scientific, cat. #A1517001) on Matrigel (Corning, cat. #354230). Cells were maintained under hypoxic conditions for at least ten passages and confirmed to have a normal karyotype prior to initiation of hepatocyte differentiation. To initiate hepatic differentiation, iPSCs were plated at 1.7×10⁴ cells/cm² in E8 with 1 μM H1152 onto Matrigel coated vessels. After 2 days of culture in E8 with daily media exchanges, media was changed to preconditioning media containing 3 μM CHIR99021 and the cells were cultured for 2 days with daily media exchanges. Definitive endoderm differentiation was then induced with T0 media for 1 day, followed by T1-2 media for 2 days, and then T3-6 media for a further 6 days. Hepatic differentiation was induced in three stages: Stage 1 for 6 days, Stage 2 for 8 days, and Stage 3 for 7-14 days. Media was exchanged daily during DE induction and then every other day for the remainder of the process (Fig. 1B). Media compositions are given in Table 1. At the end of Stage 1, the cells were detached from the vessel surface with Accumax (Innovative Cell Technologies, Inc., cat. #AM105) and seeded at 0.5×10⁶ cells/ml in Stage 2 media +1 μM H1152 to form aggregates. Differentiation was carried out under hypoxic conditions until the middle of Stage 2, when the cells were moved to a normoxic incubator. Flow cytometry Cells at different stages of the differentiation process were individualized using TrypLE Select (Thermo Fisher Scientific, cat. #12563) for definitive endoderm analysis or 0.5% Trypsin-EDTA (Thermo Fisher Scientific, cat. #1540054) for later stage staining. For evaluation of definitive endoderm markers, cells were stained live immediately after collection for 30 min at 4°C; whereas for AAT, ASGPR1, or albumin, the cells were fixed with 4% PFA and stained overnight at 4°C in staining buffer (1 mg/ml saponin, 2% FBS, in Dulbecco's PBS, Thermo Fisher Scientific, cat. #14190144), followed by a 1-h staining with the appropriate AlexaFluor-647 secondary antibody. Cells were analyzed on a BD Accuri C6 Plus flow cytometer (BD Biosciences). RNA isolation and qPCR RNA was isolated using the RNeasy kit (Qiagen, cat. #74106) according to the manufacturer's instructions. Human liver total RNA was purchased from Thermo Fisher Scientific (cat. #AM7960). cDNA was synthesized using the Applied Biosystems High Capacity RNA-to-cDNA kit (Thermo Fisher Scientific, cat. #4387406). qPCR was performed using Taqman probes and ABI Taqman Gene Expression Master Mix (Thermo Fisher Scientific) on a Roche Light Cycler 480 and analyzed using the Roche Light Cycler 480 software v. 1.5.1. 
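Taqman qPCR data of this kind are commonly expressed as fold change relative to a reference gene and a calibrator sample (here plausibly the adult human liver RNA). Since the paper does not spell out its quantification model, the following 2^-ΔΔCt sketch in Python is illustrative only, and all Ct values in it are hypothetical.

# Minimal 2^-ddCt relative-expression sketch; Ct values are made up,
# not taken from the study.
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of a target gene vs a calibrator sample, normalized to a reference gene."""
    d_ct_sample = ct_target - ct_ref              # sample, normalized to reference gene
    d_ct_calibrator = ct_target_cal - ct_ref_cal  # calibrator, normalized to reference gene
    return 2 ** -(d_ct_sample - d_ct_calibrator)

# Hypothetical example: ALB in end-stage hepatocytes vs adult liver RNA.
print(relative_expression(ct_target=22.0, ct_ref=18.0,
                          ct_target_cal=20.5, ct_ref_cal=18.5))  # 0.25, i.e. 25% of adult liver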
Cryopreservation and post-cryopreservation cell recovery Cells were detached from vessel surfaces using TrypLE Select (Thermo Fisher Scientific, cat. #12563) for end of DE cryopreservation or Accumax (Innovative Cell Technologies, Inc., cat. #AM105) for end of Stage 1 cryopreservation, gently dissociated by pipetting and filtered through a 100 μm cell strainer. Cells were then resuspended in Bambanker (Wako, cat. #302-14681) at 5×10⁶ cells/ml for end of DE cryopreservation or 10×10⁶ cells/ml for end of Stage 1 cryopreservation. 1 ml of cell suspension was distributed per cryovial, and the cells were frozen in a controlled-rate freezer and stored in liquid nitrogen. For recovery of cryopreserved cells, vials were thawed in a 37°C water bath for 2-3 min, and the cell suspension was transferred to a conical tube containing EIM T3-6 (Table 1, for end of DE cells) or Stage 2 media (Table 1, for end of Stage 1 cells) pre-warmed to 37°C. For end of DE cells, the cells were pelleted, suspended in EIM T3-6 media +1 μM H1152 and plated at 1×10⁵ cells/cm² onto Matrigel coated vessels. The cells were cultured in EIM T3-6 media for 2 days with daily media exchanges. After 2 days, the media was changed to Stage 1 media and the differentiation proceeded as described above. For end of Stage 1 cells, the cells were pelleted and resuspended in Stage 2 media +1 μM H1152 at 0.5×10⁶ cells/ml to form aggregates. The differentiation then proceeded as described above. CYP3A4 activity assay On days 5-7 of Stage 3, hepatocyte aggregates were transitioned to William's E media with Hepatocyte Maintenance Supplement Cocktail B (Thermo Fisher Scientific, cat. #CM4000, without dexamethasone) and either vehicle (0.1% DMSO) or 50 µM rifampicin (Sigma-Aldrich, cat. #R7382) for 3 days with daily media exchanges. At the end of 3 days, the cells were dissociated and distributed into 96-well plates (2.5×10⁴ cells/well, four to six wells per condition) and subjected to CYP3A4 activity measurement using a luminescent P450-Glo CYP3A4 Assay System (Promega, cat. #V9001) according to the manufacturer's instructions. Lipidosis assay At the end of Stage 2, cells were plated onto Collagen I coated plates (Greiner Bio-One, cat. #655956) and maintained in Stage 3 medium for 4-5 days with media exchanges every other day. Cells were then treated with 0-300 µM FAs (oleic acid-linoleic acid mixture, Sigma-Aldrich, cat. #L9656) diluted in Stage 3 media for 24 h. Cells were washed with DPBS twice and fixed with 4% PFA for 20 min at room temperature (RT). After three washes with DPBS, cells were stained with a solution containing 1 µg/ml BODIPY 493/503 (Thermo Fisher Scientific, cat. #D3922), Actin-555 (Molecular Probes, cat. #R37112) and DAPI (Molecular Probes, cat. #R37606) in DPBS with 0.1% Triton-X for 20 min at RT in the dark. Cells were imaged using an ImageXpress Micro Confocal high content imager (Molecular Devices). Mesenchymal stem cells, macrophages, and endothelial cells iCell Mesenchymal Stem Cells (cat. #R1098) and iCell Macrophages (cat. #R1114) were from AHN donor 01279 (FUJIFILM Cellular Dynamics, Inc.). iPSCs from NASH donor CW10202 (CIRM iPSC repository) were differentiated using the proprietary differentiation protocols used for the manufacture of iCell Mesenchymal Stem Cells and iCell Macrophages. Endothelial cells from line 01279 were from FUJIFILM Cellular Dynamics. Cells were thawed according to the respective cell type iCell User's Guide (https://fujifilmcdi.com/) 
and adapted to hepatocyte Stage 3 media for 1 week prior to initiation of co-culture experiments. Liver organoid formation Hepatocyte aggregates were dissociated with 0.5% Trypsin-EDTA for 7 min at 37°C. At the same time, macrophages, MSCs, and endothelial cells were dissociated with TrypLE Select for 5-7 min at 37°C. All cells were then suspended to a density of 1×10⁶ cells/ml in hepatocyte Stage 3 media and plated in ultra-low attachment (ULA) plates (Corning, cat. #3471) at the physiologically relevant (Ware et al., 2018; Leite et al., 2016; Tasnim et al., 2019) ratio of 1:0.5:2:0.2 hepatocyte:macrophage:MSC:endothelial cell. Aggregates were maintained for 10 days with media exchanges every other day. Media from the last exchange (days 8-10) was collected and secreted albumin was measured using a human albumin ELISA (Thermo Fisher Scientific, cat. #EHALB) according to the manufacturer's instructions. Statistical analysis Differentiation data are presented as mean±s.e.m. from three independent experiments. Results in Fig. 3A-C are from a single pivotal experiment aimed at determining the effects of CHIR99021. In all subsequent differentiation runs, CHIR99021 was used during preconditioning (the 48 h period prior to the start of DE induction) and no conditions without CHIR99021 were included.
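As a worked example of the seeding arithmetic implied by the 1:0.5:2:0.2 ratio in the Liver organoid formation step above, a short Python sketch follows; the function name and the target hepatocyte number are illustrative, not from the paper. Because all suspensions are at 1×10⁶ cells/ml, the volume to pipette (in ml) equals the required cell count in millions.

# Cell numbers for the 1:0.5:2:0.2 hepatocyte:macrophage:MSC:endothelial ratio.
RATIO = {"hepatocyte": 1.0, "macrophage": 0.5, "MSC": 2.0, "endothelial": 0.2}

def seeding_plan(n_hepatocytes):
    """Cells of each type (and ml of a 1e6 cells/ml suspension) for n hepatocytes."""
    return {cell: n_hepatocytes * r for cell, r in RATIO.items()}

for cell, n in seeding_plan(1e6).items():
    print(f"{cell}: {n:.1e} cells = {n / 1e6:.2f} ml")
# hepatocyte: 1.0e+06 cells = 1.00 ml; macrophage: 5.0e+05 = 0.50 ml;
# MSC: 2.0e+06 = 2.00 ml; endothelial: 2.0e+05 = 0.20 ml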
5,996.2
2020-12-02T00:00:00.000
[ "Medicine", "Biology" ]
X-RAY SCANNING APPARATUS FOR MAPPING TEXTURE AND ELEMENT DISTRIBUTIONS The past decade has seen some remarkable progress in spatially resolved texture analysis due to new computer-aided techniques of electron diffraction with the SEM and TEM. To avoid, however, some limitations specific to electron microscopy, an x-ray scanning apparatus has been developed for the mapping of texture and element distributions on bulk samples. The set-up consists of a "white" x-ray source, a collimator system to produce a fine primary beam spot, an x-y sample stage operated by stepper motors, and an EDX detecting system for peak separation. Energy dispersive x-ray diffraction and x-ray fluorescence analysis are used for data acquisition. The density distributions of selected crystallographic directions or of element concentrations in the sample surface are acquired spot by spot, and represented by pseudocolour or grey shade maps. Several texture distribution as well as element composition maps can be obtained simultaneously. Spatial resolution is presently limited to 50 μm by the low level of primary beam intensity. INTRODUCTION The past decade has seen some remarkable progress in spatially resolved texture analysis. The modern techniques of microscale texture measurement are based on electron diffraction in the scanning (SEM) and transmission (TEM) electron microscope (Schwarzer, 1991a; Weiland, 1994). The benefits of analytical electron microscopy are the correlation of morphology (from the micrograph) and element composition (from EDX analysis) with crystal lattice orientation on a submicron scale. As an extension of x-ray texture analysis, quantitative pole figures can be measured by TEM in selected area diffraction mode to finally obtain the ODF for cubic crystals (Xia, Schwarzer, 1994) or texture estimates for hexagonal crystals (Helming, Schwarzer, 1994) from sample regions down to about 1 μm wide. If the spatial arrangement of grains or misorientations across grain boundaries are important, individual lattice orientations have to be analysed grain by grain. They are most conveniently determined by interpreting Kikuchi patterns in reflection from bulk samples (SEM) or in transmission from transparent samples (TEM). Measurement of single grain orientations is straightforward using a computer-aided technique: the diffraction pattern on a luminescent screen in the microscope is transferred to a computer via a low light level video camera, and interpreted interactively or automatically within seconds. The majority of individual grain orientation measurements have so far been performed by the backscatter Kikuchi pattern technique (Dingley, Randle, 1992) with the SEM (also known as the electron back-scatter diffraction technique; "BKP", "EBSD", "EBSP" and similar acronyms are used in the literature). This is due to specific advantages of SEM over TEM. First of all, SEMs are widely used in many laboratories, and in addition several commercial systems and computer programs for backscatter Kikuchi pattern analysis are available. The operation of an SEM is less sophisticated. The preparation of samples is extremely simple. Finally, dynamic heating or tensile stage experiments can be performed conveniently in the SEM. A fully automated orientation mapping appliance for the SEM has been developed recently which involves interfacing control of the specimen stage, pattern interpretation routines and image mapping software ("orientation image microscopy", OIM™) (Mason, Adams, 1994). 
The piezoelectric driven stage has the capability of taking 0.1 μm steps over an approximately 27 mm × 26 mm area. The reproducibility over this range of travel is 0.25 μm. As an alternative (Schwarzer, Kunze, 1994), the primary beam rather than the sample position is controlled by the computer, and the beam spot is positioned on the sample surface in incremental steps on a square grid pattern. In the latter version, lateral resolution and reproducibility are only limited by the spot size and the digitisation depth of the digital-to-analog converter of the beam control. The first choice in materials science, however, might be the TEM. It enables the deepest insight into the microstructure: in addition to precise crystal lattice orientations and misorientations, the density, arrangement and type of dislocations, deformation systems, as well as the complete orientation of grain boundaries can be determined on-line (Schwarzer, Zaefferer, 1994). The spot size of a modern TEM in microbeam diffraction mode can be reduced to some 10 nm diameter while the Kikuchi diffraction pattern is still sufficiently bright to enable pattern recognition. Hence spatial resolution and the degree of plastic sample deformation impose virtually no limitation. If an appropriate sample preparation is applied and a medium voltage microscope is available, contiguous sample areas more than 0.1 mm × 0.1 mm wide are often electron transparent and can be studied. This means that with fine grained materials, which are the real domain of electron microscopy investigations, a large number of single grain orientations can indeed be measured per sample, so that they are statistically sufficient for texture analysis. When studying technical materials, however, electron diffraction suffers from several deficiencies. Rather diffuse Kikuchi patterns are formed unless the volume of the grain illuminated by the primary beam is a fairly perfect crystal. In the SEM, sharp backscatter Kikuchi patterns from bulks can hardly be generated on either submicron grained or extensively cold deformed materials, since spatial resolution is in the range of 0.2 to 1 μm, depending on the beam voltage and the mean atomic number. Low electric conductivity may give rise to excessive sample charging. A thin conductive coating, however, is often prohibitive, as any foreign surface layer tends to obscure the backscatter Kikuchi pattern. Furthermore the sample must be steeply tilted to the primary beam. Hence the surface image is foreshortened and distorted, and standard electron detectors often give poor image quality only. The application of the TEM, on the other hand, is handicapped by the tedious sample preparation. TEM is not adequate for coarse grain sizes. Finally, there are only a few TEMs to date equipped with a facility for the on-line measurement of SAD pole figures or individual grain orientations. EXPERIMENTAL SET-UP OF THE X-RAY SCANNING APPARATUS Electron microscopy cannot be used to perform all local texture measurements equally well. In spite of the latest instrumental developments, there are still some applications where individual grain orientations cannot be measured by electron microscopy, such as in the case of excessive deformation, small grain size, or incompatibility of the sample with electron microscopy. Some other applications only require a knowledge of the spatial distribution of certain crystal lattice directions averaged over small areas (i.e. pole densities) in the sample surface rather than the full crystal lattice orientation of each grain. 
In order to avoid some limitations of electron microscopy, an x-ray scanning technique has been developed which is based on energy dispersive x-ray spectroscopy (Schwarzer, 1989). The spatial distributions of pole densities or elements in the sample surface are illustrated graphically by the construction of maps. Pseudocolours specific for the density of a crystallographic direction or the concentration of an element under consideration are assigned to the image points (x, y). Inhomogeneities in texture or composition are thus easily recognised. The application of the x-ray scanning apparatus is not limited to metal samples. Insulating polycrystals can be studied as well, such as ceramics, minerals or bones. Sample preparation is simple. In most cases the same or similar methods as for reflected light microscopy can be applied. X-ray mapping is a very gentle method, since the specimen is only exposed to x-rays and stays in the atmospheric environment. The preliminary set-up (Schwarzer, 1991b; Schwarzer, 1993) has been further improved. The present apparatus (Figure 1) consists of a PHILIPS generator PW 1830/25 equipped with a fine-focus tube and a goniometer PW 1835. A circular diaphragm collimator is attached to the tube in order to confine the beam to a fine spot. The specimen is mounted in a vertical position on an x-y stage with stepper motor drives (HUBER stage 5102 and control unit 9012). The stage is capable of taking 2.5 μm steps over an approximately 24 mm × 24 mm sample area. The diffracted intensity values are detected with an EDX system (SPECTRACE 6100 and TRACOR NORAN Si(Li) detector). The angle of incidence, θ1, and the take-off angle, θ2, can be varied independently between 20° and 65° by rotating the sample stage and the detector arm. The apparatus is controlled by a personal computer. PRINCIPLES OF ENERGY DISPERSIVE X-RAY DIFFRACTION The ordinary tube has a tungsten target. The high density material gives rise to an intense "white" radiation primary beam consisting of continuous Bremsstrahlung and characteristic tungsten lines. From the illuminated spot on the sample surface a wide spectrum of secondary x-rays is emitted which is composed of broad diffraction peaks, sharp characteristic fluorescence lines, and a low background of scatter radiation. A crystallite diffracts those partial rays from the primary beam which match Bragg's law; in energy dispersive form the diffraction peak of the lattice planes (hkl) appears at the energy E(hkl) = hc/(2 d(hkl) sinθ) ≈ 6.199 keV·Å/(d(hkl)·sinθ) (2). Standard EDX systems such as those used for materials analysis in the SEM are well suited. The energy resolution is in the range from 130 eV to 160 eV and is sufficient for texture measurement, since diffraction peaks of low indexed planes of cubic materials are separated by several hundred electron volts in the spectrum, if the goniometer angles θ1 and θ2 have been set appropriately. In order to increase sensitivity by enhancing intensity, diffraction peaks may be positioned on the energy values of characteristic lines in the primary beam by varying θ1 and θ2 simultaneously. The sample is moved step by step in a regular grid pattern with the x-y stage, and a spectrum is acquired at each grid position. Information can be simple integrated peak intensities or processed data, such as intensities corrected for background or overlap. 25 peaks of a spectrum can be analysed simultaneously and composed into texture maps of the same sample area. The computer notes the position of the sample stage, and stores the intensity values along with the corresponding x-y coordinates in memory. 
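The peak-placement rule and the acquisition cycle just described can be condensed into a short sketch. This is an illustrative Python rendering only: move_stage, acquire_spectrum and integrate_peak are stand-ins for the stage drive, the EDX readout and the peak-integration routine, not actual driver APIs of the instruments named above.

import math

HC_OVER_2 = 6.199  # keV*Angstrom, i.e. hc/2 in x-ray units

def peak_energy_keV(d_hkl, theta_deg):
    """Energy-dispersive Bragg condition: E(hkl) = hc / (2 d(hkl) sin(theta))."""
    return HC_OVER_2 / (d_hkl * math.sin(math.radians(theta_deg)))

# Example: Cu 220 with d = 1.278 Angstrom at theta = 57 deg gives ~5.8 keV,
# consistent with the E220 = 5.7 keV setting quoted for the copper maps below.
print(round(peak_energy_keV(1.278, 57.0), 2))

def move_stage(x_um, y_um):           # stand-in for the stepper-motor stage driver
    pass

def acquire_spectrum(dwell_s):        # stand-in for the EDX detector readout
    return {}

def integrate_peak(spectrum, e_keV):  # background-corrected peak integral
    return 0.0

def scan(nx, ny, step_um, dwell_s, peaks):
    """One spectrum per grid point; one intensity map per selected (hkl) peak."""
    maps = {hkl: [[0.0] * nx for _ in range(ny)] for hkl in peaks}
    for iy in range(ny):
        for ix in range(nx):
            move_stage(ix * step_um, iy * step_um)
            spectrum = acquire_spectrum(dwell_s)
            for hkl, e in peaks.items():
                maps[hkl][iy][ix] = integrate_peak(spectrum, e)
    return maps

Since each stored spectrum contains all peaks at once, several maps (the text mentions up to 25 peaks) can be built in a single pass over the grid.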
Once this cycle is completed, the computer increments the stage to its next measurement location, and a new spectrum is collected. For quantitative measurements, a dwell time of typically several seconds is required for the acquisition of an image point, while a qualitative texture analysis is possible at scan rates of a second per point. The long time of data collection restricts the total number of points in a map. As an alternative to a diffraction peak, an x-ray fluorescence peak can be processed in the same way, which is then characteristic of the concentration of the corresponding element. An element distribution map is thus obtained. The beam apertures and angular settings have no effect on x-ray fluorescence analysis. Absorption in air must be considered for quantum energies < 3 keV, which hinders elements lighter than potassium from being detected. Dedicated imaging micro-fluorescence instruments have been developed (Pella, Feng, 1992; Larsson, Engström, 1992). They are, however, not capable of texture mapping, because the settings of the angles of incidence, θ1, and acceptance, θ2, are fixed. It should be noted that the texture maps of one run have the same sample direction in common. Energy dispersive diffraction thus corresponds to a θ/2θ scan in a conventional Bragg-Brentano geometry combined with a sequential selection of the peaks of interest at each sample point of the map. An x-ray scanning apparatus based on the Bragg-Brentano geometry, however, is difficult to realise. A sophisticated mechanical design is required, because the θ/2θ scan has to be performed eucentrically such that the beams pivot about the measured point without translation on the sample. As an alternative, the texture maps for different {hkl}, i.e. for different θ/2θ settings, might be measured one after the other. In any case a sequential procedure would drastically increase the total time required for measurement. A further disadvantage of a Bragg-Brentano geometry is the steep increase of spot size with shallow incidence of the primary beam. APPLICATIONS In the examples shown in this study, the intensities were calculated from the peak integrals with correction for background. The texture maps were acquired at symmetrical diffraction settings (θ1 = θ2), i.e. the pole density distributions were probed in the sample normal direction. If the pole density distributions are important in another direction, the sample has to be rotated or tilted from the symmetrical to an oblique diffraction position. The spatial resolution of the x-ray scanning apparatus is primarily dependent on the x-ray spot width on the sample. Since no conventional lenses are available for white x-rays, a beam collimator is used which consists of two small circular diaphragms. The spot size cannot be reduced indefinitely by stopping down the beam, since the intensity falls off proportionally to the spot area. A marked improvement in resolution and spot brightness is expected from the replacement of the diaphragm collimator by a glass capillary collimator (Carpenter et al., 1989; Rindby et al., 1989). Resolution further decreases due to the oblique incidence of the primary beam, which results in an elliptical foreshortening of the spot on the sample surface by 1/sinθ. 4.1 Element mapping by x-ray micro-fluorescence analysis Spatial resolution was tested by collecting an x-ray fluorescence map (Au Lα lines) of a regular gold pattern (Figure 3). 
The gold patches (420 μm × 420 μm wide) were deposited on a glass substrate by sputter-coating through a TEM specimen grid. The free bars (<60 μm wide gaps) between the patches are clearly resolved in two perpendicular directions, by which a spatial resolution of approximately 50 μm is demonstrated. Texture mapping by energy dispersive x-ray diffraction The type of information displayed in diffraction maps depends on the ratio of x-ray spot over grain size. If grain size exceeds spot size, diffraction takes place in individual grains and orientation contrast maps are obtained. Otherwise diffracted intensity is integrated over several grains and texture maps are formed. The apertures, Δθ, of the primary and secondary beams are not critical, since texture width is in the range of several degrees in most materials. As an example of texture mapping, a longitudinal section through a rolled copper profile is shown in Figure 4. The profile was formed by pass rolling from a square to an elliptical cross-section. A section was cut through its mid-plane and a polished flat of 14 mm × 19 mm in size was prepared for this measurement. The 220 map (Figure 4.a) was taken at slightly asymmetrical angular settings (θ1 = 52°, θ2 = 60°, θ = 57°, 3.0 mm/6.0 mm, and E220 = 5.7 keV). In order to obtain the highest possible intensity in the weaker 331 map (Figure 4.b), the angles were changed to symmetrical diffraction settings (θ1 = θ2 = 60°) such that the diffraction peak was positioned on the characteristic W Lα line (E331 = 8.4 keV) of the primary beam. The 220 texture map reveals a wide and rather diffuse strip of high intensity extending from the centre to the right side of the sample. This may be a texture component left over from the initial square profile. The 331 map, however, shows a narrow strip of high intensity on the right and a smaller and less pronounced one on the left hand side. These regions have undergone the highest deformation by the rolling process, and here the texture has changed more inhomogeneously. Figure 5 represents the mid-plane of a partially recrystallised iron profile which had been reduced by 70% by rolling to mm thickness. For maximum intensity the Bragg angle was set to θ = θ1 = θ2 = 31°, in order to enhance intensity by positioning the diffracted Fe 200 peak on the W Lα line (E200 = E(Lα) = 8.4 keV). Texture is distributed very inhomogeneously in this sample. The bulk forming process is mainly characterised by the macroscopic parameters distribution of stress, friction between workpiece and forming tool, elastic deformation of workpiece and forming tool, local flux of material, speed and temperature. Finite element methods are now capable of modelling rather complicated forming processes, and spike forming (Figure 6) is a critical benchmark test for numerical modelling (Schonauer et al., 1993). The integral texture was measured by x-ray diffraction over a cross-section through an aluminium bar in the initial state and after spike forming. Despite the heavy and inhomogeneous deformation of the sample (Figure 6.b), there is no drastic change in microstructure indicated in the integral textures. The recalculated 100, 110 and 111 pole figures as well as the ODF of the initial state (Figure 7) mainly differ in the strength of texture from the pole figures and ODF of the sample after the spike forming test (Figure 8) rather than in the appearance of new texture components. The deformation by spike forming seems to merely reduce the initial cube texture to about one third in the ODF. 
However, the texture maps reveal pronounced texture inhomogeneities in the cross section (Figure 9.a) and in the longitudinal section (Figure 9.b) after the spike forming test. These inhomogeneities have presumably been produced by considerably different local deformations. (See Color Plate 8). CONCLUSION Scanning x-ray mapping based on energy dispersive x-ray diffraction offers a very powerful technique for texture imaging. Several texture distribution maps can be acquired simultaneously for different hkl reflections, but for a common sample direction. All reflections are measured at the same Bragg angle setting. It is not necessary to adjust the sample on a Rowland circle. Measurement is not affected by the degree of plastic deformation. In addition to texture mapping, element distributions can be imaged in the same way by analysing characteristic x-ray fluorescence lines. X-ray mapping is a very gentle method, since the specimen is only exposed to x-rays and stays in the atmospheric environment. Bulks can be studied irrespective of their conductivity. Due to the large depth of focus with a collimated primary beam, rough surfaces can be imaged. Sample preparation is simple, and often a thorough cleaning may just be adequate.
4,081
1997-01-01T00:00:00.000
[ "Materials Science", "Physics" ]
“Effect of work motivation and job satisfaction on employee performance: Mediating role of employee engagement” Companies must keep pace with technological developments to achieve a competitive advantage and improve performance. To achieve and improve performance, companies need active employee engagement, fostered by encouraging motivation and fulfilling job satisfaction. This study aims to analyze the effect of motivation and job satisfaction on performance with employee engagement as a mediating variable. The research sample is drawn from Information Technology (IT) companies located in the cities of Jakarta and Bandung, Indonesia. Research respondents are system developers who handle system development activities for a project or part of an ongoing project. Using the convenience sampling technique, 103 responses were obtained from IT developers. The research model was analyzed with Partial Least Squares (PLS) using SMART PLS Ver 3.0 software. Empirical findings prove that motivation has a positive effect on the performance of IT employees, while job satisfaction has no direct effect. Employee engagement does not directly affect employee performance, but as a mediator of motivation and job satisfaction it can have a significant effect on employee performance. The research findings have managerial implications: to build high employee involvement, motivation needs to be encouraged so that employees are more active and innovative, and the achievement of desired results is facilitated. INTRODUCTION The rapidly changing business environment, including increasingly fierce competition, requires companies to carry out various strategies to survive. One of these strategies concerns active employee engagement to achieve the best performance. Today organizations face the formidable challenge of managing the labor turnover that will be caused by the migration of many industrial workers; meanwhile, the Information Technology (IT) industry relies heavily on the inherent competence of humans, namely system developers. Several previous studies have shown that massive competitive pressure, high company demands, and difficult work situations have decreased system developers' willingness to stay in the company; this is due to a lack of motivation and employee commitment to the organization (Varma, 2017). Human resource management (HRM) policies and practices should be aligned with the organization's strategy and employee expectations. Therefore, it is important to study and understand the factors that can motivate and build job satisfaction among employees, because employees are a very valuable asset for the company. Work motivation Work motivation is an impulse that appears in a person, consciously or unconsciously, prompting action toward a specific goal. It is an important component in work, education, and lifestyle alike. Motivational energy makes any job go easier and faster. Work motivation theory is usually associated with justification, not ability; that is, some people, when compared to others, can get the job done well (Dal Forno & Merlone, 2010). Motivation is generally a function of equity in social exchange, based on equity theory. With an understanding of organizational fairness, employees can be actively involved in the organization; when they perceive injustice, employees reduce their involvement. As a result, employees want the organization to restore the balance between their contribution and the work situation (Giauque et al., 2012). 
Measurement of work motivation captures the goals, behavioral persistence, and work-related intensity desired by the organization (Virgiawan et al., 2021; Arshadia, 2010). Situational stimuli, personal preferences, and their interactions can determine a person's motivation in pursuing a desired goal. The resulting tendency can be a combination of several incentives based on internal (self-evaluation) and external activities, outcomes, and consequences, each of which is weighted according to personal motives (Barbuto & Story, 2011). Sometimes there can be a conflict between the original intention and the action taken; therefore, the right balance between intrinsic and extrinsic motivators helps (Farrell & Finkelstein, 2011). Workers are proud enough of their work that every business can reach a certain level, and increasing utility is an implied motivation. This assumption can be tested considering that the utility of choice can be a quadratic function of working hours (Kattenbach et al., 2010). There is a difference between the terms motive and motivation: the term motive is used in certain contexts in everyday language, whereas psychologists use the term generally when describing people who are thought to have a motive for everything they do. The emergence of worker motivation can be observed in some of the new task-oriented approaches to goals, whereas others perform tasks in any way possible to get good grades or avoid bad prejudice from others (Reio & Ghosh, 2009; Ryan, 2010). Job satisfaction Job satisfaction has a very broad definition, so it cannot be captured by just one definition. The happy or positive emotions that result from assessing a person's work and work experience are also called job satisfaction (Permana et al., 2021; Valentine et al., 2011). The definition given by Tnay et al. (2013) states that job satisfaction is seen as a combination of environmental styles and psychological conditions that can make someone honestly admit satisfaction with the work done. To support this definition, the amount of job satisfaction is represented by what causes the sensation of satisfaction (Darmon, 2011). The essence of job satisfaction is a feeling of comfort. During work, job satisfaction can be unstable, influenced by mood and emotions: moods usually last longer, while emotions have a causal object and are short-lived. Events at work that trigger emotions are easier to remember than bad moods (Tabarsa & Nazari, 2016). Job satisfaction consists of intrinsic and extrinsic job satisfaction. Extrinsic job satisfaction includes traits outside of the job itself, for example, pay and the way the company is managed, while intrinsic job satisfaction includes reactions that affect people's feelings and emotions towards job features related to the job itself, for example, expertise, autonomy, and variety (Spies, 2006). Internal job satisfaction can also be said to take the form of employee fulfillment and job descriptions. Job satisfaction influences organizational citizenship behavior, in line with the hypothesis that the higher the job satisfaction of employees, the better the behavior of the workers. Satisfied employees are more likely to speak well of the organization; they are more inclined to help coworkers, and satisfied employees are more in line with task decisions (Vizano et al., 2021; Husin & Nurwati, 2014). However, dissatisfied workers are reluctant to accept the goals and values of the organization (Wu et al., 2019). 
It is important to do research related to organizational behavior and work psychology. Psychological research can be evaluated from two different perspectives (Albrech, 2011). First, from a utilitarian perspective, satisfaction should lead to employee behavior that supports organizational functioning (Spies, 2006). Second, from a humanitarian point of view, fair treatment and respect are evidence of the excellent treatment of employees. Job satisfaction can also arise from the need to remain in the organization, through career choice, learning, and development opportunities (Tnay et al., 2013). Workers develop a high commitment to their organization before they develop attitudes such as job satisfaction (Neininger et al., 2010). Employee job satisfaction significantly determines both forms of perceived fairness. It is the attitude of employees towards various aspects of their work (Choudhary et al., 2013), and it is usually studied in a comprehensive manner, such as when examining the work as a whole or when exploring aspects of a particular task. There are many benefits of job satisfaction: the benefits for the organization are maximum work productivity and high profitability, while the benefits for workers are enjoyable work, worker participation, control of the work environment, and feeling part of the company's work environment (Earle, 2003). It cannot be denied that certain characteristics associated with work addiction can have positive implications for workers and organizations, such as job satisfaction, career success, and high labor productivity. In addition, satisfaction is also a reflection of employees' perceptions of the work done and of the roles assigned to valued employees. From an organizational perspective, this reflects good job satisfaction and a very supportive organizational climate that aids employee recruitment and retention. Job satisfaction can even predict the distribution of outcomes at the organizational level, including productivity, turnover and absenteeism rates, service quality, customer satisfaction, and financial performance (Holland et al., 2011). Factors related to supportive personnel management are indirectly related to intention through the mediating effect of job satisfaction (Chang et al., 2013). Performance and job satisfaction, which are interrelated, are goals that are highly desired by managers. Their relationship is a major focus of diverse studies in organizational behavior and sales management. Understanding the two main constructs (i.e., suitability and workplace aggression) is of interest because of the previously established relationship between these variables. Conflict between work roles is thought to have an impact on job satisfaction: increased role conflict usually leads to decreased job satisfaction (Love et al., 2010). Therefore, it is important to identify new predictors of job satisfaction while controlling for individual differences and other conceptually relevant variables such as communication and team satisfaction (Rogelberg et al., 2010). Employee engagement In the terminology of "employee engagement" introduced by the Gallup Institute, attachment is defined as the status (in a positive sense) of an employee regarding the work environment or the company where he works. The definition of employee engagement varies widely across organizations. 
Among them is the Caterpillar company, which holds that engagement is the commitment, morale, and participation of employees who remain in the organization. Employee engagement is the antithesis of job fatigue. Engagement may be an employee's status that stems from the social exchange at work and ends with higher organizational performance. An employee shows higher performance when he finds meaning in the work, company culture, and policies. Employee involvement is also caused by self-association with job roles, which includes persistence in the workplace, strong involvement in work, and deepening in work activities (Srivastava & Madan, 2016). This is supported by the concept that the psychological experience of the workforce drives individual attitudes and behavior, and therefore levels of engagement and withdrawal from work. Margaretha et al. (2021) believe that psychological meaning is an important driver of work engagement. This suggests that the main drivers of total employee engagement are "individual goals and focused energy, adaptability, effects, and persistence directed toward organizational goals" (Albrech, 2011). Engagement goes above and beyond simple gratification with the employment arrangement or basic loyalty to the employer, a characteristic nearly all firms have measured over the years. Engagement, rather, is about desire and commitment: the willingness to take one's place and exert one's discretionary efforts to help the employer succeed (Rai, 2012). In all areas, employee engagement is linked to long-term work and to transformations in the way people work, where they work, what they expect from work, and in the workplace. In addition, providing clear communication is important, not only so that employees know and process information but also so that they believe the company is committed to their involvement. Within every engagement there is an identical pattern: executives, managers, and employees know exactly what is going on. They know what the problem is, what is not working, and where communication, trust, cohesion, and harmony are weak. To engage employees, it may be decided that a talent management and employee engagement program is needed to help attract, retain, and develop the best staff; in one example, consultants were obliged to keep detailed written records of how they spent their time during client engagements, alongside new record-keeping procedures for client collections introduced by finance. The impact of technology on job creation and destruction is also a very relevant issue in workforce motivation and engagement. In general, HR managers use social media because of its convenience and also because of the competitive scenario. The use of social media is believed to provide more benefits in the recruitment process, such as broadening the candidate base, both active and passive. Besides that, it also encourages employee interaction and effective collaborative activities, so that mutually beneficial two-way communication is established (Nagendra, 2014). Organizations using strengths-based interventions have seen significantly higher growth in employee engagement compared to comparison groups. Employee involvement shows positive and proactive behavior in the workplace, combining motivational drive with emotional attachment, while managers show a high concern for work that is communicated in order to achieve company goals. 
Employee performance Performance is an important assessment for companies, so that the company's sustainability can be guaranteed (Zhang, 2010). Employee performance includes behavior that is under control but sets limits that exclude irrelevant behavior (Dewettinck & van Ameijde, 2011). Meanwhile, performance appraisal also assesses the active role of employees in carrying out the obligations of the formal contract given to them by the company (Biswas, 2009). Employee performance is divided into task performance and performance behavior. This behavior involves factors related to work. In the workplace, employee behavior is reflected in in-role and extra-role behavior; behavior also consists of positive and negative behavior. Employee performance appraisals can increase motivation, encourage employees to be actively involved in innovative programs, and make it easier to reach desired goals (Minavand & Lorkojouri, 2013). Employee performance appraisal provides feedback, and programs are prepared to improve performance that can help employees develop skills to maximize their potential (Cascio, 2014; Susanto et al., 2020). Employees with high perceived organizational support (POS) indicate that they have a greater responsibility, which collectively helps the organization achieve its goals, increases rewards for key performance, and such employees are highly committed to the organization (Neves & Eisenberger, 2012; Silitonga et al., 2020). Managers and employees may also view performance differently based on cultural and cross-cultural diversity in the definition and interpretation of performance. Thus, in an individualistic culture, the stress is on individual effort and outcomes, demanding objective and measurable performance criteria. Managers expect much higher performance in both quality and quantity, longer hours, greater responsibility, and fewer demands for various types of rewards. The company's business strategy for getting the best performance recognizes the need for talented managers who are ready to see opportunities. Therefore, organizations currently concentrate on implementing HR practices and methods that can create good performance through improving the quality of employees, such as formal and informal training, compensation, teamwork, career development, and others (Hapsari et al., 2021; Mangaleswaran & Thevanes, 2018). The concept of numerical performance has not been able to explain performance systems; it faces obstacles when used for direct qualitative evaluation and requires resources to handle challenging tasks and situations (Huo, 2012). Managers perform their functions to support people development and employee performance, as well as to enable a positive work context and co-worker relationships. Therefore, a study is needed to ascertain whether having such enthusiasm can also be beneficial for employee performance and what mechanisms are related to passion (Ho et al., 2011). The broader concept encompassing the various activities in which organizations seek to assess employees and develop their competencies, improve performance and distribute rewards is the concept of employee performance management (Decramer et al., 2012). Inherent knowledge capacity can be poured into the work so that it affects employee performance (Smith-Crowe et al., 2003). Such employees are generally anxious about their work, performance, and relationships with coworkers; in addition, some of them perform poorly. 
Poor performance conditions also weaken resistance to various changes (Liu et al., 2012). Performance management is consistently rated among the lowest areas. However, performance management is the main process for getting the job done. It is how organizations communicate expectations and encourage the behavior needed to achieve important goals for development programs or other personnel actions. One example of a management program for work alignment is to implement a work-from-home system for workers, including adjustable working hours, work-rest balance, and better performance appraisals. Employee performance can be assessed on two scales: in-role performance and assistance, as the main dimensions of OCB. Employee performance is assessed by participants and colleagues at work (Kattenbach et al., 2010). Companies with high commitment and high performance are ready to deliver sustainable performance because they develop the following organizational pillars: 1. Performance alignment; 2. Psychological harmony; and 3. Capacity to learn and change. HYPOTHESES DEVELOPMENT Based on an in-depth literature review, empirical evidence shows contradictory findings on the impact of motivation and job satisfaction on employee engagement and their implications for employee performance. Therefore, the hypotheses to be tested in the study are as follows: H1: Work motivation affects employee performance. H2: Job satisfaction affects employee performance. H3: Motivation and job satisfaction, through employee engagement, affect employee performance. AIMS AND METHODOLOGY The objective of this study is to prove empirically that motivation and job satisfaction can increase employee engagement and have implications for achieving optimal employee performance. The study process began with collecting data on the tested variables through surveys. This study uses qualitative data quantified with a Likert scale of 1-5, with research variables consisting of motivation, job satisfaction, employee engagement, and employee performance. The following is an explanation of these variables. This study was conducted on IT companies in Indonesia, with respondents who are system developers handling system development activities for a project or part of an ongoing project. The study was conducted from January 2020 to May 2020. The locations of the companies studied were Jakarta and Bandung. The number of respondents in this study was 103 IT developers, recruited using the convenience sampling method. The analysis used is Partial Least Squares (PLS) with SMART PLS Ver 3.0 software, with motivation and job satisfaction as independent variables. RESULTS AND DISCUSSION The outer model test results are discussed in the following section, which shows the outer loading values obtained with the SmartPLS analysis tool. Test validation If the loading factor has a value above 0.5 for the construct in question, then the indicator is said to be valid. Table 2 shows the Smart PLS output for the loading factors. First, the impact of the indicators on each variable is discussed. For the motivation variable, it can be seen that establishing good socialization with colleagues (MOT.7) has the greatest influence on motivation, with a loading of 0.765, while MOT.3 (presence of sick leave) has a small effect, namely 0.515. Reliability test The reliability test calculates the composite reliability value of the indicator block that measures the construct. 
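Composite reliability is conventionally computed from the standardized outer loadings λi as CR = (Σλi)² / [(Σλi)² + Σ(1 − λi²)]. A minimal Python check follows; only the 0.765 and 0.515 loadings are quoted in the text above, while the other two values are hypothetical.

# Composite reliability from standardized indicator loadings.
def composite_reliability(loadings):
    s = sum(loadings)                           # sum of loadings
    error = sum(1 - l ** 2 for l in loadings)   # sum of error variances
    return s ** 2 / (s ** 2 + error)

# 0.765 and 0.515 are the quoted MOT.7/MOT.3 loadings; 0.70 and 0.65 are made up.
print(round(composite_reliability([0.765, 0.70, 0.65, 0.515]), 3))  # ~0.756, above the 0.7 threshold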
The results of the composite reliability calculation are satisfactory if the value exceeds 0.7. The composite reliability values for the output are shown in Table 4. The calculation results show that all variables meet the desired composite reliability value, i.e. above 0.7, which means that all variables are reliable. Table 5 shows Cronbach's alpha values; the recommended value is above 0.6, and the lowest observed value is 0.850, meaning that the desired criteria are met. Structural model testing (inner model) Once the estimated model meets the requirements of the outer model, the next step is to test the structural model (inner model). The R-Square values of the constructs are shown in Table 6. The results show that the variables of motivation and job satisfaction explain 63.60% of the variance in employee performance, while employee engagement is not very influential. Hypotheses are accepted or rejected based on the significance value between the constructs, t-statistics, and p-values. With these test results, standard errors and measurement estimates are no longer calculated based on statistical assumptions but depend on empirical observations. In the bootstrap resampling method, a hypothesis is accepted if the t-value is greater than 1.96 and the p-value is less than 0.05, and rejected otherwise. The results of testing the hypotheses of the influence of motivation (H1) and job satisfaction (H2) on performance are shown in Table 8. Table 8 shows that job satisfaction does not affect employee performance: the t-statistic value is 1.109 (< 1.96). The estimated value of the original sample is 0.030, indicating that the relationship between job satisfaction and employee performance is positive but weak. The probability value obtained is 0.268 (> 0.05). Thus, H2 is rejected: there is no effect of job satisfaction on employee performance. Table 8 also shows that motivation affects employee performance: the t-statistic value is 2.173 (> 1.96). The estimated value of the original sample is 0.070, indicating that the relationship between motivation and employee performance is positive. The probability value obtained is 0.030 (< 0.05). So H1 is accepted: there is an influence of motivation on employee performance. Testing of the mediating effect of employee engagement on performance (H3) is shown by the structural model in Figure 3, where the t-statistics are based on the Smart PLS output. Based on Table 2, the results of the analysis show that all indicator values are above 0.5, so the indicators can be taken to measure their variables. First, for the motivation variable, the highest indicator is MOT.7 and the lowest is MOT.3. Second, regarding job satisfaction, the highest indicator is JS.13 and the lowest is JS.6. Third, for employee engagement, the highest indicator is EE.9 and the lowest is EE.4. Fourth, regarding employee performance, the highest indicator is EP.5 and the lowest is EP.11. So it can be said that motivation has a more significant influence on employee engagement than job satisfaction. Employee performance is most significantly and positively influenced by motivation, with a value of 6.972, then by job satisfaction, with a value of 3.619, and least by employee engagement, with a value of 2.274. CONCLUSION Empirical findings proved that the motivation variable has a positive effect on employee performance; job satisfaction, on the other hand, has no direct impact. 
Motivation and job satisfaction have a positive and significant effect on employee engagement. Employee engagement by itself does not affect employee performance, but when it mediates the effects of motivation and job satisfaction, it can significantly affect employee performance. The results of this study provide recommendations for company management: to build high employee engagement, employee motivation needs to be encouraged so that employees become more active and innovative; facilitating the achievement of desired results, reviews that generate feedback, and performance improvement plans all help employees develop skills that maximize their potential. The organization should communicate expectations and encourage personnel behavior that achieves the important goals of the development program, so that personnel who have this passion can translate it into employee performance. Active employee involvement needs to be encouraged by providing job satisfaction and motivation in line with employee expectations, so that passion for work is high and performance achievement can be optimal.
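For readers who want to retrace the measurement-model checks described above, here is a minimal Python sketch (our own illustration, not code from the study; the function names are ours, and only the thresholds and the t/p values reused below come from the text) of the three decision rules applied in the analysis: indicator validity (loading > 0.5), construct reliability (composite reliability > 0.7, Cronbach's alpha > 0.6), and bootstrap hypothesis acceptance (t > 1.96, p < 0.05).

```python
import numpy as np

def composite_reliability(loadings):
    """Composite reliability of one construct from its standardized
    outer loadings: (sum l)^2 / ((sum l)^2 + sum(1 - l^2))."""
    l = np.asarray(loadings, dtype=float)
    s = l.sum() ** 2
    return s / (s + np.sum(1.0 - l ** 2))

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

def accept_hypothesis(t_stat, p_value):
    """Bootstrap decision rule used in the paper: |t| > 1.96 and p < 0.05."""
    return abs(t_stat) > 1.96 and p_value < 0.05

# Illustrative loadings (the MOT.7 and MOT.3 values quoted in the text):
print(composite_reliability([0.765, 0.70, 0.515]))   # keep construct if > 0.7
print(accept_hypothesis(2.173, 0.030))   # H1, motivation -> performance: True
print(accept_hypothesis(1.109, 0.268))   # H2, satisfaction -> performance: False
```

With the t-statistics and p-values reported in Table 8, the rule reproduces the paper's accept/reject decisions for H1 and H2.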
5,258.4
2021-08-20T00:00:00.000
[ "Business", "Computer Science" ]
New Criteria for Selecting Reliable Thellier-Thellier type Paleointensity Results from the 1960 Kilauea Lava Flow, Hawaii Thellier-Thellier type paleointensity experiments with partial thermoremanent magnetization (pTRM) checks have been widely used to determine paleointensity values from volcanic and archaeological media. However, previous studies have revealed that a substantial portion of paleointensity results with positive checks for historical lavas fails to predict the known intensity of the Earth's field. To assess the fidelity of paleointensity values, conventional Thellier-Thellier type paleointensity experiments were performed on the Kilauea lava flows erupted in 1960. Our results show that paleointensity estimates with positive pTRM checks range from 30.34±1.39 to 53.04±1.80 μT. This strongly indicates that positive pTRM checks cannot guarantee the fidelity of paleointensity results, especially when the unblocking temperatures of the newly formed magnetic particles are higher than the treatment temperature. Thus, in this study, to check for thermal alteration during heating, room-temperature hysteresis parameters of the thermally treated samples were also measured as a function of treatment temperature. Our new results show that almost all biased paleointensity values correspond to B_cr/B_c > 3 and CI > ~10%, which indicates strong effects of domain state and thermal alteration on the fidelity of paleointensity results. Our study provides a feasible set of criteria to further improve the fidelity of paleointensity estimations. Introduction Variations in the Earth's magnetic field intensity at different time scales carry important information on the growth of the Earth's deep interior (Macouin et al. 2004; Tarduno et al. 2007; Biggin et al. 2015) and the evolution of the geodynamo (Larson and Olson, 1991; Glatzmaier et al. 1999; Olson et al. 2013). There are two different types of approaches to trace the temporal variation of the geomagnetic field intensity: relative paleointensity determination (RPI, Tauxe 1993) and absolute paleointensity determination (Thellier 1938; Thellier and Thellier 1959). Sediments carry depositional or post-depositional remanent magnetization (DRM or pDRM) and are excellent media for recording semi-continuous RPI (Valet et al.). Modern absolute paleointensity determinations require multiple step heatings with systematic alteration (or consistency) checks. The Thellier protocol (Thellier 1938; Thellier and Thellier 1959) was initially proposed to compare the destruction of thermoremanent magnetization (TRM) and the acquisition of laboratory-induced partial thermoremanent magnetization (pTRM) at equal temperatures. Thellier-type double-heating techniques have been slightly modified over time, with each method having pros and cons (Thellier 1938; Thellier and Thellier 1959; Coe 1967; Aitken et al. 1988; Yu et al. 2004). The most commonly used technique is the so-called "Coe" protocol (Coe 1967), in which the specimen is first heated to T_i in zero field to determine the NRM lost, and then heated a second time to T_i in an applied field to determine the pTRM gained. Aitken et al. (1988) modified the Coe (1967) method by reversing the order of the double heating. The IZZI protocol alternates the Aitken method in odd steps and the Coe method in even steps (Yu et al. 2004). Detailed reviews of the various Thellier-type techniques were provided by Valet (2003) and Biggin (2010). Once the paleointensity determination is carried out, the results are displayed in an Arai plot (Nagata et al.
1963), where the slope of NRM remaining versus pTRM gained gives the ratio of the ancient to the laboratory magnetic field intensity. Only stable single-domain (SD) particles follow a linear Arai plot, reflecting identical spectra of unblocking temperature (T_ub) and blocking temperature (T_b). Multi-domain (MD) and pseudo-single-domain (PSD) grains tend to produce non-linear (or sometimes zig-zagging) Arai plots. The most convenient (and easiest) way to ensure high fidelity of paleointensity determinations is to check their quality using historic rocks whose geomagnetic field intensity is readily known (e.g., IGRF: International Geomagnetic Reference Field). To this end, the reliability of absolute paleointensity determinations was tested using historic lavas from Hawaii, US. The present study was designed to incorporate the temperature dependence of magnetic hysteresis as a potential alteration check. We aim to determine the exact mechanisms behind biased paleointensity results with positive pTRM checks. This is extremely important for paleointensity determinations on older rocks. Samples Kilauea, one of the most active volcanoes in the world, is located at 19° 3' N, 204° 3' E (Fig. 1). On January 13, 1960, an eruption began along the east rift zone of Kilauea. The 1960 eruption lasted for about a month, and the lava covered an area of 10 km² (Richter et al. 1970; Rowland and Walker 1987). According to the international geomagnetic reference field (IGRF-1965) model, the geomagnetic field intensity at the eruption site would be 36.2 µT. Paleointensity determinations have yielded results ranging from 33.55 to 53.5 µT (Tanaka and Kono 1991; Tsunakawa and Shaw 1994; Tanaka et al. 1995; and Yamamoto et al. 2003). For instance, Tanaka and Kono (1991) documented the presence of two segments in an Arai plot. They picked the lower-temperature segment because pTRM checks failed at the higher temperature ranges. As a consequence, the paleointensity determinations were overestimated (Tanaka and Kono 1991). First-order reversal curve (FORC) diagrams were also obtained (SF = 8) using an alternating gradient force magnetometer (MicroMag 2900). Results from a chip of sample A1 display closed and elongated contours, reflecting a fine-grained nature (Fig. 3). Vertical spread on the closed contours is commonly observed, indicating the presence of PSD grains (Fig. 3). Paleointensity determination Paleointensity determinations were accepted when they passed the following selection criteria: (1) demagnetization of the NRM must be univectorial, with maximum angular deviation (MAD) angles < 4° (Kirschvink 1980); (2) pTRM checks must agree with the original pTRM within 5%; (3) the ratio of the maximum difference produced by a pTRM check to the length of the best-fit line (DRAT) should be less than 10% (Selkin and Tauxe 2000); (4) the angle between the principal components anchored to and free from the origin (DANG; Tauxe and Staudigel 2004) should be < 4°; (5) at least 6 data points covering more than 48% of the extrapolated NRM fraction (Coe et al. 1978) must be included in the regression analysis; (6) the degree of scatter about the best-fit line normalized by the estimated slope (York 1969) should be < 5%. Paleointensity estimates were calculated from the slope of the Arai plot (Fig. 4), which is the ratio between the ancient geomagnetic field and the applied laboratory field. With these selection criteria, a total of 9 samples yielded successful paleointensity estimates (Table 1).
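To make the slope calculation and the bookkeeping behind criteria (5) and (6) concrete, here is a minimal Python sketch; the function name, the simplified f and g computations, and the interface are our own assumptions, not code from the study.

```python
import numpy as np

def arai_statistics(ptrm, nrm, b_lab_uT):
    """Paleointensity and basic quality statistics from an Arai plot:
    NRM remaining (y) versus pTRM gained (x) at common temperature steps.
    Returns the estimate in the same units as b_lab_uT."""
    x = np.asarray(ptrm, dtype=float)
    y = np.asarray(nrm, dtype=float)
    b, a = np.polyfit(x, y, 1)                 # slope b (< 0) and intercept a
    y_fit = a + b * x
    n = len(x)
    # Standard error of the slope
    sigma_b = np.sqrt(np.sum((y - y_fit) ** 2) / (n - 2)
                      / np.sum((x - x.mean()) ** 2))
    # Criterion (5): NRM fraction f and gap factor g (after Coe et al. 1978),
    # here in a simplified single-segment form
    dy = np.abs(b) * (x.max() - x.min())       # NRM length spanned by the fit
    f = dy / np.abs(a)                         # fraction of extrapolated NRM
    gaps = np.abs(np.diff(np.sort(y_fit)))
    g = 1.0 - np.sum(gaps ** 2) / dy ** 2      # evenness of the steps
    q = np.abs(b) * f * g / sigma_b            # quality factor
    # Criterion (6): scatter about the best-fit line, normalized by the slope
    beta = sigma_b / np.abs(b)
    return {"paleointensity": np.abs(b) * b_lab_uT,
            "n": n, "f": f, "g": g, "q": q, "beta": beta}
```

A specimen would then be kept only if, for example, n >= 6, f > 0.48, and beta < 0.05, alongside the directional criteria (1)-(4), which additionally require the vector demagnetization data.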
Most of the magnetization was removed by 560 °C (< 10% remaining), with a few exceptions. The paleointensity results range from 30.34±1.39 to 53.04±1.80 µT. Although some values are consistent with the IGRF model within errors (36.1±3.61 µT, i.e., ±10%), a large portion differed by more than 10% from the expected value. Such erroneous results, e.g., overestimated paleointensities, are caused by the production of thermochemical remanent magnetization (TCRM) during the heating process (Yamamoto et al. 2003). In Table 1, ΔT is the temperature interval used in the paleointensity estimation; n is the number of points used in the paleointensity estimation; f, g, and q are the NRM fraction, gap factor, and quality factor of Coe et al. (1978); * indicates that the value meets the condition (B_cr/B_c < 3, CI < 10%). Temperature-dependent rock magnetic properties The paleointensity results vary considerably owing to several factors. Non-linear features in Arai plots can be caused either by the MD effect or by thermal alteration during heating. In order to identify the exact reason, sister samples were heated through the same heating steps as the paleointensity measurements (200, 300, 350, 400, 450, 480, 500, 520, 540, 560, 580, 600 ºC). Thereafter, hysteresis loops were measured at room temperature after each step to detect thermal effects (Henry et al. 2005). The room-temperature saturation remanence, normalized and plotted versus treatment temperature, is shown in Figure 5. The behavior of Mrs_T/Mrs_room can be divided into two groups between room temperature and 500 °C. Excluding the low-temperature points, the first group is relatively stable from 300 to 500 °C (Fig. 5a, e, f and g). The other group shows a dome-like feature (Fig. 5b, c, d, h and i). This feature means that superparamagnetic particles grew into single-domain particles, or that the surfaces of pseudo-single-domain grains acquired a remanent magnetization, in the 300-500 °C interval. Points below 300 °C were considered to be influenced by the VSM. This effect is reflected in the TRM-NRM trend. On this basis, a new interval from 300 °C to 500 °C was set up for the paleointensity estimation. Discussion The studied lava flows erupted in 1960 from the Hawaiian Kilauea volcano. Altogether, 9 of 30 samples yielded paleointensity results with positive pTRM checks using the IZZI protocol. The mean paleointensity value was ~43.6 µT, which is higher than the expected value for the study area (~36.1 µT). In addition, the paleointensity results range widely, between 30.34±1.39 and 53.04±1.80 µT (Fig. 6). Previous studies have already shown that positive pTRM checks do not really guarantee the fidelity of paleointensity results for Thellier-Thellier type experiments, because the unblocking temperatures of the newly formed magnetic particles can be higher than the thermal treatment temperature (Zhao et al. 2014). However, such a flaw can be overcome by measuring room-temperature magnetic parameters as a function of treatment temperature (e.g., ARM-T, SIRM-T). Upon thermal treatment, two kinds of mineral transformation occur. The first type of alteration produces newly formed, strongly magnetic minerals. Regardless of the distribution of the unblocking temperatures, the enhanced concentration-dependent magnetic parameters can sensitively detect the neoformation of magnetic minerals. The second type of thermal alteration does not produce new minerals but can oxidize the primary magnetic minerals to a less magnetic state, e.g., from magnetite to maghemite, which significantly decreases the saturation magnetization (and also the ARM and SIRM values).
Then, the first and second types of thermal alteration will decrease and enhance the estimated paleointensity values, respectively (Zhao et al. 2014). Our study provides a practical new set of criteria to determine the fidelity of paleointensity results, especially those with conventional positive checks. This raises a serious question for existing paleointensity results throughout geological history. We strongly suggest that the domain state and thermal stability of samples should be systematically incorporated into paleointensity studies in addition to all the conventional criteria. Conclusions We present paleointensity determinations that use temperature-dependent hysteresis parameters together with modified Thellier experiments (IZZI protocol) on the 1960 Kilauea lava. Only 9 of the 30 samples succeeded on the basis of positive pTRM checks. However, a large proportion of the "successful" paleointensity results fail to predict the expected value. This strongly indicates that positive pTRM checks cannot completely exclude failed results, because the conventional pTRM check cannot detect thermal alteration when the unblocking temperatures of the newly formed magnetic particles are higher than the treatment temperature. Therefore, we propose that a combination of the chemical index and domain-state parameters should be used in future studies, especially for older samples, to improve the fidelity of paleointensity results.
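As a compact illustration of the screen proposed in this paper (our own sketch, not the authors' code; the function name is ours, the thresholds are from the text):

```python
def passes_new_criteria(bcr_over_bc, ci_percent):
    """Keep a paleointensity result only if Bcr/Bc < 3 (domain state)
    and the chemical index CI < 10% (thermal alteration), the region
    in which almost no biased values were observed in this study."""
    return bcr_over_bc < 3.0 and ci_percent < 10.0

# Example: a specimen with Bcr/Bc = 2.4 and CI = 6% would be retained.
print(passes_new_criteria(2.4, 6.0))  # True
```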
2,508.8
2020-11-06T00:00:00.000
[ "Geology" ]
Characterization of Two Cryptic Plasmids Isolated in Haiti from Clinical Vibrio cholerae Non-O1/Non-O139 We report the complete sequence of two novel plasmids, pSDH-1 and pSDH-2, isolated from clinical Vibrio cholerae non-O1/non-O139 during the early phase of the 2010 Haitian cholera epidemic. The plasmids were revealed by employing single-cell genomics; their genome content suggests self-mobilization, and, for pSDH-2, a toxin-antitoxin (TA) system for plasmid stabilization was identified. The putative origin of replication of pSDH-2 was mapped, suggesting that it replicates following the ColE1 model of plasmid replication. pSDH-1 and pSDH-2 were widespread among environmental V. cholerae non-O1/non-O139, with variable prevalence in four Haitian Departments. pSDH-2 was the most common element, either alone or with pSDH-1. The detection of the two plasmids adds to the composite scenario of mobile genetic elements (MGEs) observed in V. cholerae in Haiti. The role these small cryptic plasmids circulating in Vibrio spp. play in bacterial fitness or pathogenicity merits further investigation. INTRODUCTION In January 2010, Haiti was struck by a devastating earthquake responsible for a humanitarian crisis that is still ongoing. Since the first case was diagnosed in October 2010 (World Health Organization, 2010), cholera remains a serious health threat in Haiti. 41,421 new cases were reported by the WHO for Haiti alone in 2016 (World Health Organization, 2017), a reminder of how significant the epidemic still is. The agent of epidemic cholera was recognized as Vibrio cholerae toxigenic serogroup O1 (Ceccarelli et al., 2011; Chin et al., 2011), although a population of V. cholerae non-O1/non-O139 was shown to coexist with clinical V. cholerae O1 in Haiti early in the epidemic (Hasan et al., 2012). V. cholerae non-O1/non-O139 have been isolated in Haiti, mostly from the aquatic environment (Baron et al., 2016), but proof of their presence in stools collected from asymptomatic Haitian infants two years prior to the earthquake has been published (Liu et al., 2014). V. cholerae non-O1/non-O139 are natural inhabitants of estuarine and coastal waters and, as opportunistic pathogens, can be responsible for infections other than cholera, generally through the consumption of raw or undercooked seafood. Non-O1/non-O139 V. cholerae infections are continuously reported worldwide (Chomvarin et al., 2014; Crowe et al., 2016; Hirk et al., 2016), emphasizing their growing clinical significance. Mobile genetic elements (MGEs) play an essential role in gene transfer. The high incidence of plasmids in marine Vibrios (Hazen et al., 2007; Zhang et al., 2012; Aedo et al., 2014; Wang et al., 2016) suggests that the marine environment is an important source of genome plasticity mediated by the acquisition of MGEs and recombination. The Haitian environment is no exception, and V. cholerae non-O1/non-O139 has shown higher genomic variability compared to clinical V. cholerae O1. We previously reported the presence of the SXT/R391-related integrating conjugative element ICEVchHai2, carrying new genes involved in recombination, circulating among closely related V. cholerae non-O1/non-O139 (Ceccarelli et al., 2013). More recently, we identified MGIVchHai6, a novel mobilizable genomic island (MGI) containing a mercury resistance transposon and an integron conferring resistance to chloramphenicol, trimethoprim and streptomycin/spectinomycin (Carraro et al., 2016).
MGIVchHai6, whose sibling MGIs have also been detected in silico in the Indian subcontinent and North and South America, can be mobilized by IncA/C plasmids, highly conjugative elements also known to circulate in Haitian V. cholerae (Folster et al., 2014). While performing genomic heterogeneity tests on the Haitian clinical strain V. cholerae HC-1A2 using single-cell genomics techniques (Stepanauskas, 2012), we unexpectedly recovered two novel plasmids. The complete sequence and the distribution of these plasmids in clinical and environmental V. cholerae non-O1/non-O139 and O1 isolated during the early days of the 2010 epidemic in Haiti are reported here, adding to the composite scenario of MGEs circulating in Haitian V. cholerae. Plasmid Sequencing and Annotation V. cholerae non-O1/non-O139 strain HC-1A2 was isolated from the stool sample of a cholera patient in Saint-Marc (Artibonite) in 2010, with traditional methods of isolation and identification as previously described (Hasan et al., 2012). 317 single amplified genomes (SAGs) of V. cholerae HC-1A2 were generated at the Bigelow Laboratory Single Cell Genomics Center (scgc.bigelow.org), as previously described (Swan et al., 2013). Briefly, single-cell genome sequencing involves isolating a single cell and performing whole-genome amplification; this step is then followed by construction of sequencing libraries and DNA sequencing on a next-generation sequencer. One of the SAGs of V. cholerae HC-1A2, AD-538-E13, was sequenced as follows. Single-cell multiple displacement amplification products were sheared with an M220 Focused Ultrasonicator (Covaris, Woburn, MA, USA), and 450-550 bp fragments were size-selected with BluePippin (Sage Science, Beverly, MA, USA). Paired-end sequencing libraries were generated using the NEBNext Ultra DNA Library Prep kit (New England Biolabs, Ipswich, MA, USA), and 14 million 2 × 250 bp reads were produced using a MiSeq (Illumina, San Diego, CA). The obtained sequence reads were quality-trimmed with Trimmomatic v0.32 using the following settings: -phred33 LEADING:0 TRAILING:5 SLIDINGWINDOW:4:15 MINLEN:36. Human DNA (≥95% identity to the H. sapiens reference assembly GRCh38) and low-complexity reads (containing <5% of any nucleotide) were removed. The quality-filtered reads were digitally normalized with kmernorm 1.05 (http://sourceforge.net/projects/kmernorm) using settings -k 18 -t 80 -c 2 and de novo assembled with Geneious 7.1.2 (Biomatters, Auckland, New Zealand) using medium-low sensitivity and allowing for circularization. Circular contigs with no homology to the previously sequenced V. cholerae HC-1A2 genome (Hasan et al., 2012) were selected for further analysis. The annotation of these putative plasmids was performed using RAST and GLIMMER. The RAST subsystem for functional annotation was used to determine position 1 of the assembled plasmids. Database searches were carried out through NCBI using ORF Finder, BLASTn, BLASTp, and SMART (Schultz et al., 1998; Letunic et al., 2015). Bacterial Isolates By the end of October 2010, cholera had been confirmed in four of Haiti's 10 departments (administrative regions): Artibonite, Centre, Nord and Ouest, including the capital Port-au-Prince and the capital's Cité Soleil district. In November 2010, sampling in these four Haitian departments took place; sampling details and isolation procedures are described elsewhere (Hasan et al., 2012).
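As an aside, the low-complexity read filter quoted in the sequencing methods above ("containing <5% of any nucleotide") is simple enough to restate in code; the following is our illustrative Python sketch, not part of the published pipeline:

```python
def is_low_complexity(read: str, min_frac: float = 0.05) -> bool:
    """Flag a read in which any of the four nucleotides makes up less
    than min_frac (5%) of the sequence, mirroring the filter above."""
    n = len(read)
    return n > 0 and any(read.upper().count(base) / n < min_frac
                         for base in "ACGT")

reads = ["ACGTACGTACGTACGTACGT", "AAAAAAAAAAAAAAAAAAAC"]
kept = [r for r in reads if not is_low_complexity(r)]
print(kept)  # the near-homopolymer read is removed
```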
175 Haitian bacterial strains isolated during this sampling campaign were screened for the presence of plasmids pSDH-1 and pSDH-2: 70 clinical V. cholerae O1, 50 clinical V. cholerae non-O1/non-O139, 22 environmental V. cholerae non-O1/non-O139, and 33 isolates of Vibrio spp. and Aeromonas spp. Antiserum kits for V. cholerae O1 (V. cholerae Antiserum Poly; Difco, USA) and V. cholerae O139 (O139 Bengal; Hardy Diagnostics, USA) were used to determine serotype by slide agglutination, according to the manufacturers' instructions. Serotyping was confirmed by multiplex PCR (Hoshino et al., 1998). Bacterial isolates were stored at −80 °C in LB broth containing 50% (vol/vol) glycerol. RESULTS AND DISCUSSION Single-cell genomics performed on the pure culture of strain V. cholerae HC-1A2, with the aim of performing genomic heterogeneity tests, revealed the presence of two previously unknown circular plasmids, pSDH-1 (4,985 bp) and pSDH-2 (5,580 bp) (Figure 1). We hypothesize that these two plasmids were likely missed during the original sequencing of V. cholerae isolates (Hasan et al., 2012) due to the shotgun library construction step (with ∼3 kb inserts). Identification and Genomic Structure of Plasmid pSDH-1 We found that plasmid pSDH-1 encodes 10 putative open reading frames (ORFs) (Figure 1 and Table 1). orf1 to orf4 and orf6 encode putative proteins of unknown function for which no known homologs could be found. A BLASTp search revealed that the predicted protein encoded by orf5 shows 45% identity to a replicase family protein from plasmid pRGRH0347, and contains replicase (RepA) and priCT_1 (primase C-terminal) domains. It is possible that this protein is involved in the replication of pSDH-1. Accordingly, orf5 was renamed repA. orf7 and orf8 were renamed mobA and mobC, respectively. The protein encoded by mobA shares homology with a relaxase protein from Vibrio parahaemolyticus, although similarity was observed for only part of the protein sequence. The product of mobC shares low similarity (33%) with MobC from pMOMA1, a small cryptic plasmid recently described in Moraxella macacae (Whitehouse et al., 2015), but has no conserved domain hits. In various instances (Francia et al., 2004), MobA/MobC proteins have been shown to be part of a relaxosome, a protein complex required for the initiation of transfer of conjugative and mobilizable elements. Conjugative elements encode all the functions required for their own transfer from a donor to a recipient cell, such as the proteins encoding the conjugative machinery and the components of the relaxosome. On the other hand, mobilizable elements rely on the conjugative machinery provided by a co-residing conjugative element. In most cases mobilizable elements encode their own relaxase, but some cases of trans-acting relaxases have been reported (Daccord et al., 2010; Carraro et al., 2017). The putative MobA protein encoded by pSDH-1 seems to be related to the MOB_HEN family of relaxases (Francia et al., 2004), to which the ColE1 relaxase belongs. Although it shares only low overall similarity with the relaxases of this family, the previously described conserved motifs are perfectly conserved in pSDH-1 MobA, with the exception of a proline in motif III. The MOB_HEN family relaxases are typically associated with accessory proteins, namely MobC, normally encoded by a gene located directly upstream of mobA, and one or two other proteins encoded by genes located within the mobA coding sequence (Francia et al., 2004).
In the case of pSDH-1, we were able to identify a MobC-like protein, encoded by the ORF located directly upstream of mobA. However, the two predicted proteins encoded by the ORFs embedded in the mobA coding sequence (orf9 and orf10) do not contain any mobilization domains or features. Further analysis of the nucleotide sequence of pSDH-1 revealed a region that is similar to the origin of transfer (oriT) found in plasmids carrying a MOB_HEN family relaxase (Figure 2). This region, located 52 bp upstream of the MobC translation start site, contains a sequence (CTGGCTTA) that is identical to the one found in the ColE1 oriT and which contains the cleavage site (or nic site) of the ColE1 relaxase. [FIGURE 2 | Putative oriT region of pSDH-1. Near-perfect inverted repeats are indicated by arrows. Nucleotide mismatches are underlined. Nucleotides identical to the putative oriT region of ColE1 are boxed, and the specific dinucleotide where the relaxase-mediated cleavage occurs (nic) in ColE1 is indicated by the arrow.] Additionally, the putative oriT of pSDH-1 comprises two nearly perfect inverted repeats. Although the sequence of these inverted repeats differs between pSDH-1 and ColE1, they are located at a similar relative distance from the putative nic sites. Identification and Genomic Structure of Plasmid pSDH-2 Plasmid pSDH-2 encodes 11 putative ORFs (Figure 1 and Table 1). orf1, orf6, orf8, orf9, and orf11 encode putative proteins of unknown function and do not have any identifiable homologs. orf2 encodes a putative protein that shares 57% identity over 45% of its sequence with a hypothetical protein from a Vibrio tasmaniensis plasmid. orf3 encodes a protein sharing 99% identity with a membrane protein from V. cholerae strain VCC19. Further analysis shows that it contains MipA/OmpV domains as well as a predicted signal peptide (BLASTp and SMART). MipA (MltA-interacting protein) is believed to be a scaffolding protein for murein synthesis that mediates assembly of the lytic transglycosylase MltA with the bifunctional transglycosylase/transpeptidase PBP1B, which respectively play roles in the synthesis and degradation of peptidoglycan (Vollmer et al., 1999). Furthermore, UVC- and γ-irradiation, as well as starvation, were shown to induce expression of MipA in some strains of V. alginolyticus (Ben Abdallah et al., 2010; Abdallah et al., 2012), suggesting a possible role in host cell persistence in the environment. pSDH-2 orf3 was renamed mipA. orf4 and orf5 encode a putative ParD-ParE type II toxin-antitoxin (TA) system. Homologs of both these proteins were found as part of the same TA system on an extrachromosomal element, likely a plasmid, from Vibrio sp. FF_304. Type II TA systems act as plasmid stabilization systems. Loss of the plasmid by segregation results in the death of the plasmid-free cell, since the toxin is more stable than its cognate antitoxin, a mechanism termed post-segregational killing. BLASTp and SMART analysis revealed that the protein encoded by orf4 contains a parE_toxin domain. A BLASTp search showed that the product of orf5 contains a predicted transcriptional regulator COG3905 domain, and SMART analyses further revealed that the N-terminal portion of the orf5 product carries an RHH_1 motif. This ribbon-helix-helix motif is found in the ParD antitoxin of the RP4 ParD-ParE TA system, as well as in many transcriptional repressors, and is believed to mediate the DNA-binding function of ParD (Oberer et al., 2007).
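As an illustration of the kind of sequence analysis behind the oriT mapping in Figure 2 (our own sketch; the arm length, loop-size window, and function names are assumptions, not values from the paper), a nic-site motif and flanking near-perfect inverted repeats can be located as follows:

```python
from typing import List, Tuple

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(s: str) -> str:
    """Reverse complement of a DNA string."""
    return s.translate(COMP)[::-1]

def find_nic(seq: str, motif: str = "CTGGCTTA") -> List[int]:
    """Positions of the putative nic-site motif reported for pSDH-1/ColE1."""
    seq = seq.upper()
    return [i for i in range(len(seq)) if seq.startswith(motif, i)]

def find_inverted_repeats(seq: str, arm: int = 8, max_mismatch: int = 1,
                          min_loop: int = 3, max_loop: int = 12) -> List[Tuple[int, int]]:
    """Return (left_arm_start, right_arm_start) pairs of near-perfect
    inverted repeats: an `arm`-bp stem whose downstream arm matches the
    reverse complement of the upstream arm with <= max_mismatch mismatches."""
    seq = seq.upper()
    hits = []
    for i in range(len(seq) - 2 * arm - min_loop):
        left = seq[i:i + arm]
        for loop in range(min_loop, max_loop + 1):
            j = i + arm + loop
            right = seq[j:j + arm]
            if len(right) < arm:
                break
            mismatches = sum(a != b for a, b in zip(revcomp(left), right))
            if mismatches <= max_mismatch:
                hits.append((i, j))
    return hits
```

Running both functions over a candidate region and checking that the repeats sit at a ColE1-like distance from the nic site reproduces, in spirit, the comparison drawn in Figure 2.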
The unstructured C-terminal region is believed to be involved in the interaction with its cognate toxin, ParE. orf4 and orf5 were renamed parE and parD, respectively. orf10, renamed mobM, encodes a protein that shares 98% identity with a plasmid recombination protein from V. cholerae HE-09 and carries a SMC_prok_B domain (structural maintenance of chromosomes) at its C-terminal end. The N-terminal region of the pSDH-2 MobM contains most of the conserved residues found in motifs I, II and III of the Mob and Pre proteins of the pMV158 family (Francia et al., 2004). Plasmid recombination proteins such as Pre from plasmid pMV158 (also termed MobM) are relaxases involved in conjugative transfer (Priebe and Lacks, 1989; Francia et al., 2004). BLASTp analysis of the pMV158 MobM reveals that it also carries a C-terminal SMC_prok_B domain. Plasmids carrying a relaxase belonging to the pMV158 superfamily have been shown to be mobilizable by a large number of conjugative plasmids and integrating conjugative elements (also known as conjugative transposons) (Francia et al., 2004, and references therein). Identification of the Putative Origin of Replication of pSDH-2 Comparison of the nucleotide sequence of pSDH-2 with publicly available sequences reveals that it shares no similarity with known plasmids, with the exception of two ca. 500 bp stretches, the first located between orf2 and mipA and the second encompassing orf7 and half of orf8. The first of these regions shares 88% similarity with an intergenic region of a mobilizable plasmid identified by in silico analysis in Vibrio tasmaniensis ZF-76 clone 102 (KP795524.1) (Xue et al., 2015). The second region shares 97% homology with the replication region of small Vibrionaceae plasmids (Figure 3) shown to utilize a ColE1-like replication mechanism (Pan et al., 2010). Like ColE1, these CLV (ColE1-like Vibrionaceae) plasmids encode two constitutively expressed and convergently transcribed RNAs (RNA I and RNA II), which respectively specify their incompatibility and replication initiation determinants (Pan et al., 2010). Studies of the replication of plasmid ColE1 (Pan et al., 2010; Brantl, 2014; Lilly and Camps, 2015, and references therein) have shown that RNA II is the only plasmid-encoded feature required for the initiation of replication. The ∼550-bp pre-primer form of RNA II forms a persistent hybrid (R-loop) with the lagging strand at the site of the origin of replication. Once the R-loop is formed, RNA II is processed by RNase H, producing a 3′-OH end that serves as a primer for leading-strand synthesis. RNA I (108 bp) specifies incompatibility and controls the copy number of ColE1 by binding to the RNA II pre-primer and inhibiting R-loop formation. The corresponding region of pSDH-2 contains all the features typically found in CLV plasmids, suggesting it replicates following the ColE1 model of plasmid replication (Figure 3). Furthermore, the regions of pSDH-2 corresponding to the two ColE1 RNAs are predicted to fold into stem-loop structures (Figure S1). pSDH-1 and pSDH-2 Distribution in Haitian Isolates Two sets of primers specific for each plasmid (Figure 1) were designed to allow screening of a collection of Haitian strains, including V. cholerae O1 and V. cholerae non-O1/non-O139 isolated from clinical and environmental samples, and Vibrio spp. and Aeromonas spp. isolated from the same biological samples. pSDH-1 and/or pSDH-2 were predominantly detected in V. cholerae non-O1/non-O139 of clinical origin (34 out of 120). However, both plasmids were detected in a V.
cholerae non-O1/non-O139 environmental isolate from the Cange district (Table 2). Two clinical isolates of V. cholerae O1 isolated in Cité Soleil also carried pSDH-1 and/or pSDH-2. 107 isolates did not carry either of the two plasmids. Other Vibrio spp. and Aeromonas spp. were all negative (data not shown). Both plasmids were geographically widespread, found in 8 of the 13 analyzed districts with variable prevalence in each Department (Figure 4): 33.7% (n = 30) in Ouest, 14.3% (n = 4) in Artibonite, 4.7% (n = 1) in Centre, and none in Nord-Ouest (Table 2). pSDH-2 was the most common element, either alone (n = 14) or in association with pSDH-1 (n = 21). The latter was never detected alone. The presence of the ParD-ParE TA system on pSDH-2 may explain why this plasmid was found more frequently, since the TA system would ensure its maintenance. Alternatively, pSDH-2 could be mobilized at a higher frequency, or by a broader range of helper conjugative elements, than pSDH-1. [FIGURE 4 | Distribution of pSDH-1 and pSDH-2 in Haiti. Purple and green circles indicate sites where clinical and environmental isolates were collected, respectively, in the Departments of Nord-Ouest, Artibonite, Ouest, and Centre. Red indicates districts where pSDH-1 and/or pSDH-2 were detected (see Table 2). The original figure was downloaded from http://www.d-maps.com/carte.php?num_car=5262&lang=en, according to the website terms and conditions of use.] CONCLUSION In this study, we report two previously unknown plasmids prevalent in V. cholerae in the early phase of the 2010 Haitian cholera outbreak. The genome content of these plasmids suggests self-mobilization and, at least in the case of one of them, a TA system for plasmid stabilization through post-segregational killing. pSDH-1 and pSDH-2 enrich the list of small cryptic plasmids circulating in Vibrio spp. in the aquatic (Bidinost et al., 1999; Powers et al., 2000; Zhang et al., 2007) and clinical environments, whose role in bacterial fitness or pathogenicity merits further investigation. AUTHOR CONTRIBUTIONS Data analysis: DC, GG, and NH. Strain screening: DC. Single-cell genomics: RS. Plasmid assembly and annotation: GG, SC, and RS. Contributed materials and reviewed the paper: MP, AH, and RC. Manuscript writing: DC and GG. All authors discussed, read, contributed to, and approved the final manuscript. FUNDING This research was supported by the National Institute of Allergy and Infectious Diseases (NIAID) grant no. 2RO1A1039129-11A2 from the National Institutes of Health (NIH) to RC and by the National Science Foundation grant no. OCE-1335810 to RS. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
4,221
2017-11-23T00:00:00.000
[ "Biology" ]
Low Power Resonant Optical Excitation of an Optomechanical Cavity We demonstrate the actuation of a double-beam optomechanical cavity with a sinusoidally varying optical input power. We observe the driven mechanical motion with only 200 nW coupled to the optical cavity mode. We also investigate the pump power dependence of the radio-frequency response for both the driving power and the probe power. Finally, we investigate the dependence of the amplitude of the mechanical motion on the mechanical cavity quality factor. Optomechanics, the study of the interaction between light and mechanical motion, has recently captured the imagination of photonics researchers [1,2]. For example, researchers have probed radio frequency (RF) mechanical motion of nanometer-sized objects [3,4]. In addition, proposals for using the optical gradient force to induce mechanical motion [6,7,5] with [8,3,9,10,11] and without [12,13,14] the use of an optical cavity have been experimentally demonstrated. In fact, at very high optical and mechanical confinement [16,3,15], the amplitude of a mechanical mode can be greatly increased. In addition, at high input powers, regenerative mechanical oscillations occur, where the linewidth of the mechanical mode greatly decreases, while the amplitude of the mechanical oscillation greatly increases. The experiments above have been done with continuous-wave (CW) excitation of an optical cavity or modulated excitation of a waveguide. However, the CW excitation mechanism requires the mechanical motion to induce an out-of-phase modulation of the laser input, as only those forces in quadrature with the mechanical motion perform mechanical work on the structure. Such effects are generally small, as the thermal motion of the structure only weakly perturbs the optical transmission properties of a waveguide or cavity. In the CW case, the amount of work done on the mechanical cavity is proportional to κ⁻², where κ is the optical field decay rate in the cavity. However, an alternative to increase the transduction between optical power and mechanical motion is to use modulated pumping [12,13]. Such a scheme can do work that is proportional to κ⁻¹, greatly reducing the amount of power needed to excite the mechanical mode (see Appendix). In this work, we demonstrate the use of optical pump modulation in conjunction with an optical cavity to reduce the amount of power needed to actuate the mechanical mode. Because of the optical confinement and recirculation of photons, we hope to obtain large mechanical oscillations without regenerative feedback. In particular, we choose to work with the double-beam one-dimensional photonic crystal (PC) cavity configuration in silicon. Due to its high optical quality factor (Q > 10⁴), which enhances the circulating optical power inside the cavity, and low mode volume (∼(λ/n)³), which also enhances the local field potential, the PC cavity can greatly enhance the optical gradient force. The optomechanical coupling rate is defined as g_OM = dω/dx, where ω is the optical cavity frequency and x is the mechanical displacement of the cavity. By using cavities where the E-field is increased near material boundaries (such as in a slotted design [4,17]), the frequency perturbation with mechanical motion and the optomechanical coupling can both be tailored. We fabricate devices on a silicon-on-insulator (SOI) wafer with a 150 nm thick Si layer and a 1 µm thick oxide layer, such as the cavity shown in Fig. 1(a).
The beam cavities have lengths of approximately 13 µm, single-beam widths of 550 nm, and a middle slot width of 100 nm. We use the design of Ref. [18], where the hole lattice constant is kept constant at a = 400 nm, and the radii of the circular holes are reduced as the distance from the center of the cavity increases. The larger holes at the center of the cavity create an optical potential well that lies in the optical bandgap of the array of outer holes, and such a design allows robust and high-efficiency coupling to the cavity region via a coupling waveguide. The hole at the center of the cavity has radius r = 0.28a, and the total cavity length is 34 holes. The cavity is fabricated with electron-beam lithography, and the pattern is transferred into the silicon layer by a Cl₂:HBr plasma dry etch. The oxide sacrificial layer is then etched away using a buffered oxide etch (BOE) to obtain the free-standing beams. In addition to the beam cavity, we also attach coupling waveguides on both sides of the cavity, and one of the waveguides is bent 90° to configure the device to be probed in a cross-polarization geometry [Fig. 1(a)] [19,20]. We first simulate the beam cavities in the optical regime using the three-dimensional finite-difference time-domain (3D-FDTD) method. Double-beam cavities support bonded (+) and anti-bonded (−) optical supermodes, formed from the transverse-electric (TE) modes of the individual beam cavities. In particular, the E_y field is symmetric or anti-symmetric about the xz-plane going through the slot for the bonded and anti-bonded modes, respectively. We find that the first (TE1,+) and second (TE2,+) order bonded modes (see Fig. 1(b)-(c)) have theoretical radiation-limited Qs of 30,000 and 1,500, respectively. We observe an enhanced electric field in the air slot region for the bonded modes because of the continuity conditions for the dominant E_y field at the slot boundaries (i.e., continuity of the displacement vector εE). Thus, we expect the bonded optical modes to have the highest optomechanical coupling to the in-plane mechanical modes, as the high electric field concentration in the middle of the cavity enhances the change in the optical cavity frequency with mechanical deformations. For this reason, we work with the first- and second-order bonded optical modes in our experiments. We experimentally analyze the optical properties of the cavity using the setup in Fig. 2(a). We pump the cavities with a broadband LED bank, which is coupled into a waveguide using a dielectric grating coupler. We align the cavity such that the input grating polarization is along |H⟩, while the output polarization is along |V⟩, to obtain the maximum signal-to-noise ratio. The transmission characteristics of the cavity are shown in Fig. 2(b), where we are able to observe the first two orders of the bonded and the anti-bonded modes. We are able to differentiate the bonded modes from the anti-bonded modes by moving the input beam on the grating coupler to change the input parity. The first-order modes have high Q-factors, and we use a tunable laser to fully characterize the cavity. A laser scan at low input power (1 nW) shows a Lorentzian spectrum with Q ≈ 15,000 for the first-order bonded mode (TE1,+) (inset, Fig. 2(b)). In addition, we observe that the higher-order bonded mode (TE2,+) has Q ≈ 2,000. Both Q values are comparable to the FDTD-simulated values.
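As an illustration of how such a Q value is typically extracted from a low-power laser scan (our own sketch, not the authors' analysis code; the synthetic data and starting guesses are made up), one can fit a Lorentzian line shape and take Q = λ₀/FWHM:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(lam, lam0, fwhm, amp, offset):
    """Lorentzian transmission line shape on a flat background."""
    return offset + amp / (1.0 + ((lam - lam0) / (fwhm / 2.0)) ** 2)

# Hypothetical scan: wavelength in nm, transmitted signal in arb. units
lam = np.linspace(1549.5, 1550.5, 200)
rng = np.random.default_rng(0)
signal = (lorentzian(lam, 1550.0, 0.103, 1.0, 0.02)
          + 0.01 * rng.standard_normal(lam.size))

p0 = [1550.0, 0.1, 1.0, 0.0]                  # initial guesses
popt, _ = curve_fit(lorentzian, lam, signal, p0=p0)
lam0, fwhm = popt[0], abs(popt[1])
print(f"Q = {lam0 / fwhm:.0f}")               # ~15,000 for a 0.103 nm linewidth
```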
We next use the COMSOL finite element solver to find the frequencies of the mechanical modes, using library parameters for silicon: Young's modulus of 131 GPa, Poisson's ratio of 0.27, and a density of 2.33 g/cm³. As described above and in previous work [4], mechanical modes with in-plane (in this case, referring to the xy-plane) motion will have significant optomechanical coupling to the bonded modes. In particular, we find the first-order common and differential modes for in-plane motion [4]. The common and differential modes have the beams moving in phase and out of phase, primarily in the y-direction, and have the displacement profiles shown in Fig. 1(d) and (e), respectively. By simulating the structure observed in the SEM image, we find that these two mechanical modes have mechanical frequencies of 25.72 MHz and 26.74 MHz. We find the optomechanical coupling strength similarly to previous work [3], with the optomechanical coupling length L_OM = ω(dω/dα)⁻¹ defined through the perturbation integral dω/dα = −(ω/2)[∮(q·n̂)(Δε|E∥|² − Δ(ε⁻¹)|D⊥|²)dA]/[∫ε|E|²dV]. Here, q is the mechanical displacement, α is the parameterized displacement of the mechanical mode, n̂ is the surface normal vector, E∥ is the electric field parallel to the surface, D⊥ is the displacement field normal to the surface, Δε = ε₁ − ε₂, and Δ(ε⁻¹) = ε₁⁻¹ − ε₂⁻¹, with ε₁ being the dielectric constant of silicon and ε₂ the dielectric constant of the surrounding medium. Because of the high E-field enhancement in the slot and the differential mechanical resonance having opposite parity to the E_y field, we find very strong optomechanical coupling lengths of L_OM = 1.3 µm and 1.8 µm for the coupling between the differential mechanical mode and the TE1,+ and TE2,+ optical modes, respectively. On the other hand, the coupling between the TE1,+ and TE2,+ optical modes and the common mechanical mode was calculated to be far weaker (L_OM > 40 µm), because this mechanical mode has the same parity as the optical field. In order to first characterize the mechanical modes of the system, we pump the second-order bonded mode with a red-detuned probe laser, at the cavity half-maximum, with low pump power (300 µW before the objective) to observe the mechanical modes in air. The transmission signal is fiber-coupled and sent to a photodiode detector with a transimpedance gain of 2.5 × 10⁴ V/A and a bandwidth of 125 MHz, and the electrical signal is then read by an RF spectrum analyzer. We estimate coupling efficiencies of 2% to the TE2,+ mode and 0.5% to the TE1,+ mode, assuming symmetric losses at the input and output gratings, and accounting for the transmission losses of the coupling waveguides using FDTD simulations. We observe the two mechanical modes in the RF spectrum, shown in Fig. 3(a); they correspond well to the simulated in-plane mechanical mode frequencies, and slight discrepancies can be attributed to minor differences in the clamping conditions of the fabricated device. Because of the low optical Q of the TE2,+ mode and the low optical power buildup, we do not observe the giant optical spring effect seen in previous works [4,11], as the mechanical modes do not change frequency with increasing pump power. We also do not observe significant changes in the optical cavity wavelength with pump power, suggesting minimal heating. Because of the mechanical damping of the ambient atmosphere, the mechanical Q-factors of these modes are limited to 50-100. When we test the same cavity in vacuum, we observe the two modes more clearly, as shown in Fig. 3(b).
In vacuum, the mechanical Qs are as high as 2,500, and are limited by the clamping geometry of our cavity. We choose to work with the higher-frequency mode (the differential mode), as it is the in-plane mechanical mode with the higher optomechanical coupling to the second-order optical mode. Next, we pump the TE1,+ mode with a second (pump) laser tuned to the optical cavity resonance wavelength and sinusoidally modulated near the RF frequency of the mechanical mode, while keeping the first CW laser tuned to the half-maximum of TE2,+. We observe the effect of the second, modulated laser on the RF modulation of the first laser. We scan through the first-order optical mode at various unmodulated powers, and observe that the first-order cavity resonance is not significantly changed, suggesting that the injected power on the first-order optical mode does not change the temperature of the beam, and thus does not modulate the beam transmission via the thermo-optic effect. Although both lasers pass through the cavity and are extracted with the same output grating coupler, the laser on TE1,+ is blocked by a band-pass filter centered at 1550 nm with a full width at half maximum of 12 nm. The power of the laser on TE1,+ is modulated by a Mach-Zehnder interferometer modulator with a bandwidth of 2.5 GHz and full modulation depth [Fig. 2(a)]. First, we fix the input power on the first-order optical mode at under 2 µW, and scan the modulation frequency through the mechanical resonance. When we tune the RF input frequency near the mechanical resonance frequency, we observe a narrow response in the RF spectrum (of the laser on TE2,+) [Fig. 3(c)]. In addition, as the RF input frequency is tuned around the mechanical resonance frequency, we observe that the integrated power within the narrow-bandwidth response matches exactly that of the mechanical cavity resonance [Fig. 3(d)], suggesting that the optical power in the first-order mode is modulating the transmission properties of the second-order mode through the mechanical resonance. In addition, we observe the Lorentzian mechanical mode with far better signal to noise, and can observe the tails of the mechanical mode even when detuned by more than three mechanical cavity linewidths. We also measure the RF response of the probe laser on TE2,+ as we change the power of the modulated pump laser on TE1,+. We first do so with the probe power for TE2,+ fixed at 2 µW coupled into the cavity, and observe the RF response with varying average power on TE1,+ for different RF detunings from the mechanical resonance [plotted on a log-log scale in Fig. 4(a)]. Similar to the data in Fig. 3(d), we observe that the RF response decreases as the modulation frequency is detuned from the mechanical resonance. We observe that the relationship between the integrated power in the RF response and the input laser power on TE1,+ is quadratic for all detunings. This is expected, as the RF spectrum analyzer measures the power of the voltage signal from the transimpedance amplifier of our detector, and that power has a quadratic relationship with the amplifier output voltage and thus with the output RF oscillation amplitude. This indicates a linear relationship between the displacement and the input pump power on the first-order mode. We also measure the RF power spectrum from TE2,+ when we fix the average laser power on TE1,+ and increase the power of the pump on the second-order mode, as shown in Fig. 4(b).
Again, we observe that the integrated RF response of the driven mechanical mode is quadratic with the input power, which is expected as the sideband amplitude is linearly related to the probe power. We also obtain the RF response as a function of the input power on TE1,+ for various probe powers on TE2,+, shown in Fig. 4(c). The RF response is reduced for lower input powers, as the sideband powers are proportional to the input probe power. However, we are able to observe an RF response with only 100 nW coupled to the TE1,+ mode to drive the mechanical oscillations, and only 200 nW coupled to the TE2,+ mode to sense the mechanical motion. Finally, we compare the efficiency of exciting the mechanical mode in vacuum and in ambient atmosphere. We fix the input power for the probe laser on the TE2,+ mode in both air and vacuum to 2 µW, and obtain the same output coupled power into our photodetector. We obtain the power series from the same cavity under both conditions, which is shown in Fig. 4(d). As expected, the amplitude of the mechanical oscillation is significantly higher in vacuum than in ambient atmosphere, due to the higher mechanical Q. In fact, the experimentally measured factor of 20 between the powers needed to generate the same RF response in air and in vacuum matches well with the ratio of the mechanical Qs for the two conditions (≈31). In conclusion, we have demonstrated resonant actuation of a mechanical mode with optical gradient forces. The input power needed to observe driven motion of the mechanical cavity is greatly decreased in the presence of an optical cavity, and hundreds of nanowatts can drive the mechanical motion via a modulated laser coupled to a second cavity mode. This type of excitation can be used to probe various mechanical modes, as the RF response can be increased relative to the thermally driven oscillations. Furthermore, optomechanical cavities can be used to mix RF signals, with the mechanical resonance enhancing the beat note of two RF signals. Similarly, the actuation of mechanical motion can also be used for a variety of applications, such as mechanical motors that do work on nanometer-sized objects. We acknowledge support from the Presidential Early Career Award for Science and Engineering (PECASE), administered by the Office of Naval Research (Dr. Chagaan Baatar). We also acknowledge support from the National Science Foundation graduate research fellowship (YG) and the Stanford Graduate Fellowship (AR, AM). Appendix: Theory of resonant excitation of a mechanical mode with the optical gradient force: We would like to solve for the mechanical amplitude as a function of the average input power of a modulated laser. We follow the derivation given in Ref. [3] and start with the cavity field equation, dc/dt = −[iω₀(1 + α(t)/L_OM) + κ/2]c(t) + √(κ_e/2)·s(t), where s(t) is the time-varying pump field, ω₀ is the cavity frequency, κ is the cavity field decay rate, κ_e is the external coupling rate, L_OM is the optomechanical coupling length, c(t) is the cavity field, and α(t) is the mechanical mode amplitude. In this case, we input a laser at ω, detuned from the optical cavity mode center frequency, and the input is modulated periodically at a frequency Ω, which is detuned from the mechanical mode center frequency Ω₀. We assume sinusoidal mechanical motion, α(t) = α₀ cos(Ωt), such that the beam also moves at the modulation frequency Ω. Note that Ω could be different from Ω₀, but since we are driving the motion, we can assume the mechanical mode responds at the same frequency.
Then the equation becomes: The homogeneous solution is: with β = ω₀α₀/(L_OM Ω), and the inhomogeneous solution is: Since our pump is modulated with frequency Ω, we express s(t) = Σ_k a_k e^{ikΩt} as a Fourier series, and find the full inhomogeneous solution: with Δ = ω − ω₀, and neglecting the κ_e/2 term as a normalization: Because the homogeneous solution decays at rate κ, which is fast, the particular solution is the steady-state solution. The optical force is: Taking only the zeroth order in J₀(β), as β ≪ 1 and J₁(β) ≈ β: The optical force, normalized to the average input power, is then: We will only consider the components at frequency Ω that are in quadrature with the beam motion, as they contribute to the work done on the mechanical mode, so we isolate the cos(Ωt) terms: Note that this force is maximized near Δ = 0 (as all four terms are near-Lorentzian functions of Δ), and we consider the force amplitude (dropping the harmonic variation): (14) or, with the normalization (such that the input power is proportional to |s|²): We note that in the case of pumping with a CW laser, the equivalent force is [3]: Thus, comparing an AC pump scheme (assuming Δ = 0, to maximize the force) to DC pumping (assuming Δ = κ/2, where the force is approximately maximized), we see that the transferred power should be approximately κ²/(βΩ²) times more efficient. In addition, if we assume that our in-coupling efficiency is sufficiently high, then we have κ_e ∼ κ. Using the above two equations, the optical force is ∝ κ⁻² for the CW case and ∝ κ⁻¹ for the modulated-laser case. Note that the thermal amplitude is ⟨x²⟩ = k_B T/(m_eff Ω²), with √⟨x²⟩ ≈ 10 pm in this case, which places us in the high-β regime (despite the sidebands being unresolved, we have β = 45). We can calculate the force as a function of β as well, plotted in Fig. 5, using the real parameters m_eff = 2 × 10⁻¹⁵ kg, Ω₀ = 2π · 22 × 10⁶ Hz, Q_m = 70, κ_e = κ/2, L_OM = 2 µm, optical wavelength λ = 1500 nm, and optical Q = 2 × 10⁴. We evaluate the kernel of the force term using Eqn. 9 for different β, fixing all other parameters, and plot the results in Fig. 5. We notice that for our parameters, the force on the beam is relatively unchanged even up to β ≈ 100. Thus, we use a sinusoidal pump to increase the force amplitude. Example 2: Square-wave input We can also explore input powers that are periodic with frequency Ω but not sinusoidal. One example is a square-wave input that is zero for part of the period and has a fixed amplitude A = T/(2T₁) (chosen to have fixed energy input) for a time 2T₁. The pulse train is input with period T = 2π/Ω: s(t) = A if |t| < T₁, and 0 otherwise. (17) [Figure 6: The scaled force for fixed input energy, as a function of the duty cycle of the pump and the optical detuning from the cavity.] The Fourier coefficients of this input are a_k = sin(kΩT₁)/(kΩT₁). By evaluating the sum numerically, we obtain the force as a function of the duty cycle (2T₁/T) in Fig. 6 for Δ = 0 (other detunings only decreased the force). We observe that the forcing term that does work is not drastically increased with pulsed (short-duty-cycle) pumping. Note that this assumes that only the zeroth-order correction to J_n(β) is necessary. It is possible that higher-order corrections, like those used in the derivation in [3], may be needed.
We observe that the maximum average force resulting from a square-wave input is lower than that from a sinusoidal input of the same average power, which is expected as the power of the square input is spread over more Fourier components.
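As a quick numerical cross-check of two quantities quoted in this appendix (our own sketch, not the authors' code; the temperature T = 300 K is an assumption not stated in the text), the thermal amplitude and modulation index β can be recomputed, and the square-wave Fourier coefficients a_k = sin(kΩT₁)/(kΩT₁) evaluated for any duty cycle:

```python
import numpy as np

kB = 1.380649e-23          # J/K

# Parameters quoted in the appendix
m_eff = 2e-15              # kg
Omega0 = 2 * np.pi * 22e6  # rad/s
L_OM = 2e-6                # m
omega0 = 2 * np.pi * 3e8 / 1500e-9  # optical angular frequency at 1500 nm

# Thermal amplitude <x^2> = kB*T / (m_eff * Omega0^2), assuming T = 300 K
x_rms = np.sqrt(kB * 300 / (m_eff * Omega0 ** 2))
beta = omega0 * x_rms / (L_OM * Omega0)
print(f"x_rms = {x_rms*1e12:.1f} pm, beta = {beta:.0f}")
# -> roughly 10 pm and beta ~ 47, close to the ~10 pm and beta = 45 quoted above

# Fourier coefficients of the fixed-energy square-wave pump:
# with duty = 2*T1/T, the argument k*Omega*T1 equals pi*k*duty
def square_wave_coeffs(duty, kmax=50):
    k = np.arange(1, kmax + 1)
    arg = np.pi * k * duty
    return np.sin(arg) / arg

print(square_wave_coeffs(0.5)[:3])  # first harmonics for a 50% duty cycle
```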
5,100.2
2010-11-16T00:00:00.000
[ "Physics" ]
Advances in Fractional-Order Neural Networks, Volume II Fractional-order neural network models have become an active research subject and have attracted increasing attention in many fields [...] Fractional-order neural network models have become an active research subject and have attracted increasing attention in many fields. For instance, fractional-order neural networks are recognized as effective tools for the modeling, validation and guaranteed learning of dynamical processes in biology, biochemistry, neurocomputing, engineering, physics, economics, etc. [1][2][3][4][5]. Advances in fractional calculus lead to the development of new fractional-order neural network models. Conversely, challenges and knowledge from research in science and engineering motivate new advancements in the area of fractional-order neural networks [6][7][8][9]. After the successful production of Volume I of this Special Issue, we invited investigators to contribute original research articles as well as review articles focused on the latest achievements in the modeling, control and applications of fractional-order neural networks. Volume II of the Special Issue was successful; twelve research papers were published that addressed advances in fractional-order neural networks. Four papers addressed interesting topics related to the synchronization of fractional-order neural networks, which is one of the significant research directions in such systems. For example, Wang et al. studied the finite-time synchronization of fractional-order complex-valued gene networks with time delays. They established several finite-time synchronization criteria based on feedback controllers and adaptive controllers. The settling time of the response is estimated using the theory of fractional calculus. He, Li and Liu investigated the asymptotic synchronization of fractional-order complex dynamical networks with different structures and parameter uncertainties. The proposed controller is more adaptable and effective and, as such, the derived results extend some of those found in the existing literature. The validity and feasibility of the theoretical outcomes are confirmed via two simulation instances. Song, Cao and Abdel-Aty considered the synchronization problem for a more general kind of master-slave memristor-based neural networks with fractional derivatives. By applying a continuous-frequency-distributed equivalent model tool, some new outcomes and sufficient conditions for the robust synchronization of the master-slave neural networks with uncertainty are proposed via linear matrix inequalities. Chen et al. studied the finite-time synchronization problem of fractional-order stochastic memristive bidirectional associative memory neural networks with discontinuous jumps. They proposed a novel criterion for finite-time synchronization by utilizing the properties of a quadratic fractional-order Gronwall inequality with time delay and the comparison principle. This criterion provides a new approach to analyzing the finite-time synchronization problem of neural networks with stochasticity. Variable-order fractional derivatives are a relatively recent development in the field of fractional neural networks. This extremely interesting topic has been investigated in two papers. Karoun et al. introduced a discrete-time Hopfield neural network with non-commensurate fractional variable orders for three neurons. Its chaotic behavior was studied via phase portraits, bifurcation diagrams and Lyapunov exponent diagrams. Alsaade et al.
proposed a model-free and finite-time super-twisting control technique for a variable-order fractional Hopfield-like neural network. The controller proposed in their research is able to regulate the system even when its complex variable-order fractional dynamics are completely unknown.

One of the most important concepts in neural network models in a periodic environment is that of periodicity. However, in real-world problems, the exact periodicity of the states is usually too strong a requirement and has limited applicability. Feckan and Danca considered some aspects related to the non-periodicity of a class of complex maps defined in the sense of Caputo-like fractional differences and related to the asymptotical stability of fixed points. The presented results are exemplified in the case of the Mandelbrot set of fractional order.

Stamov et al. studied the almost periodic behavior of fractional-order impulsive delayed reaction-diffusion gene regulatory networks with Caputo-type fractional-order derivatives and impulsive disturbances at unfixed instants of time. In addition, using Lyapunov-like impulsive functions, perfect Mittag-Leffler stability criteria were proposed.

Very interesting results related to fractional factorial split-plot designs, with replicated settings of the whole plot factors from the viewpoint of clear effects, are proposed by Zhao.

Xu and Li studied the problem of group consensus for a fractional-order multi-agent system without considering the intergroup balance condition. By adopting a dynamic event-triggered mechanism, the updating frequency of the control input is significantly reduced while the consensus performance is maintained.

The study by Wei et al. is devoted to one of the main challenges in using a fractional-order neural network modeling approach, namely the fact that a long memory property is necessary, whereas infinite memory is undesirable. Their study puts particular emphasis on this topic, developing some remarkable properties such as the equivalence relation, the nabla Taylor formula and the nabla Laplace transform of such nabla-tempered fractional calculus.

Finally, the paper authored by Wang, Wang and Chu investigated Hopfield-type neural networks with fractional derivatives of incommensurate orders. Numerous fundamental and qualitative properties were studied, including dissipative properties and stabilization.

The editors of this Special Issue, "Advances in Fractional-Order Neural Networks, Volume II", would like to express their sincere gratitude to all the authors who contributed their valuable works, and also to the reviewers for their exceptional efforts in reviewing the manuscripts. We believe that the selected papers will enrich the readers' knowledge and will stimulate continuing efforts to develop the theory and applications of fractional-order neural networks, which will continue to be one of the dominant themes in mathematics and its applications due to its theoretical and practical significance.
Robust 1-norm Periodograms for Analysis of Noisy Non-Gaussian Time Series with Irregular Cadences: Application to VLBI Astrometry of Quasars

Astronomical time series often have non-uniform sampling in time, or irregular cadences, with long gaps separating clusters of observations. Some of these data sets are also explicitly non-Gaussian with respect to the expected model fit, or the simple mean. The standard Lomb–Scargle periodogram is based on the least squares solution for a set of test periods and, therefore, is easily corrupted by a subset of statistical outliers or an intrinsically non-Gaussian population. It can produce completely misleading results for a heavy-tailed distribution of residuals. We propose a robust 1-norm periodogram technique, which is based on the principles of robust statistical estimation. This technique can be implemented in weighted or unweighted options. The method is described in detail and compared with the classical least squares periodogram on a set of astrometric VLBI measurements of the ICRF quasar IERS B0642+449. It is uniformly applied to a collection of 259 ICRF3 quasars, each with more than 200 epoch VLBI measurements, resulting in a list of 49 objects with quasi-periodic position changes above the 3σ level, which warrant further investigation.

INTRODUCTION

The Lomb-Scargle periodogram calculation is a powerful technique designed to reveal and characterize the periodic components in observational data sequences, which finds a wide scope of applications. For a review of its properties and underlying assumptions from the user's perspective, cf. VanderPlas (2018). The need for this technique arises from the character of astronomical data (observational measurements), which are practically never evenly sampled in time. This makes the standard Fourier power spectrum analysis inapplicable for astronomical time series. Detection of orbiting exoplanets from precision radial velocities of host stars is one of the well-known use cases for the Least-Squares (LS) periodogram method (Hara & Ford 2023). The periodic component of the measured radial velocity sequence is caused by the reflex orbital motion of the host star orbiting the system's barycentre. The period of the main sinusoidal mode in the computed periodogram in this case estimates the orbital period of the planet, which often cannot be directly observed.
The periodogram method finds a somewhat less known application in precision astrometry of celestial bodies' positions. Binary stars with unresolved or dim companions have periodic signals, which are the harmonics of the orbital frequency, in either of the sky coordinates referenced to a fixed celestial frame. Given a significantly long and precise cadence of position measurements covering at least one orbital period, the more general approach is to directly fit a set of Kepler elements of the emerging explicitly nonlinear 2D model, which proves a daunting and ambiguous task in the presence of even a small admixture of statistical outliers (Goldin & Makarov 2006). A robust and reliable periodogram decomposition is a welcome alternative when a large amount of observational data has to be processed with a low output of true positives. The need for a resilient periodogram algorithm, which can produce meaningful results outside of the normal distribution of data points, also emerges in the interpretation of high-quality photometric time series. Magnetically active stars, for example, often manifest complex structures of signals in their light curves with periodic modulation mixed with stochastic, unpredictable bursts of radiation (Makarov & Goldin 2017).

Our main goal for this study is to develop and test a modification of the classical 2-norm periodogram algorithm (also known as the Lomb-Scargle periodogram) based on the principles of robust statistical estimation. This algorithm is intended to be used for processing of a massive database that includes single-epoch astrometric measurements of thousands of radio-emitting quasars with the geodetic Very Long Baseline Interferometry (VLBI) world-wide facility. The system of accurate positions of these sources constitutes the fundamental International Celestial Reference Frame (ICRF3, Charlot et al. 2020), which underpins all other derivative celestial and geodetic reference frames. The astrometric stability of the most frequently observed quasars is of crucial importance for the overall accuracy and stability of ICRF3. We therefore develop a method to determine if some of the ICRF3 sources manifest periodic signals in their celestial positions, which could emerge from dual orbiting black holes in their centers, as well as a number of other effects in the extended structures and jets (Makarov et al. 2012).

The need for robust statistical estimation techniques generally arises in astronomical data processing when the available data are ridden with a large fraction of outliers outside of the commonly assumed Gaussian distribution of errors, representing a heavy-tailed sample distribution. Examples of critically important applications can be found in the mutual orientation alignment of different celestial reference frames (Malkin 2021; Lambert & Malkin 2023; Frouard 2023), where common objects show a high rate of position offsets with extremely low formal probabilities.

THE LEAST-SQUARES (2-NORM) PERIODOGRAM

In the most general setup of the problem, our task is to mathematically analyze a given time series (observations) d(t_i), where periodic sinusoidal signals may be hidden. The data are discretized on a sequence of specific times of measurement t_i, i = 1, 2, ..., N. When the measurements are taken on a regular, equally spaced grid with time step Δt, the problem is solved by the direct Fourier transform and subsequent computation of the Fourier power spectrum. The spectrum is quantified on a grid of angular frequencies f_k = 2π/(k Δt), k = 2, 3, ..., N, where the highest nondegenerate frequency f_2 = π/Δt is the angular Nyquist frequency. This set of frequencies is complete, because all other signals within the functional space spanned by the Fourier basis functions are not independent. In other words, the fitting function is the exact and unique representation of any sequence d_i with a zero mean. This is no longer true if the cadence {t_i} is irregular. The Fourier harmonics are not orthogonal if sampled on an irregular cadence. In principle, this difficulty can be bypassed by constructing an ad hoc orthogonal basis from the Fourier harmonics using, for example, the Gram-Schmidt process, but the practical value of such a representation is dubious, because the emerging fitting functions do not find a simple interpretation. However, we can disregard the issue of nonorthogonality and seek a solution to the model

d_i = c_k cos(f_k t_i) + s_k sin(f_k t_i)    (1)

for a chosen f_k. Under this generalization, the classical periodogram analysis is equivalent to the least-squares (LS) fitting of model (1) for a grid of trial periods p_k = 2π/f_k (Scargle 1982). The emerging LS problems, for a specific p_k, can be written as

A x ≅ d,    (2)

where the design matrix A has two columns with calculated sequences of cos(f_k t_i) and sin(f_k t_i) values, and the number of its rows is equal to the number of data points N. The right-hand part vector d is the vector of centralized observations, and the vector of unknowns x comprises the two coefficients c_k and s_k. Any standard LS algorithm can quickly solve this system to obtain the solution vector

x = (A^T A)^(-1) A^T d.    (3)

This has to be done for each trial period p_k ≡ 2π/f_k separately, including the setup of the design matrix A. In the example used in our paper, the number of trial periods N_p = 1000, but in other applications, it can be up to O(10^5). This is not a problem for modern computers and LS algorithms, but in the 1970s, when the periodogram method started to attract astronomers' attention, the speed of computation was a crucial consideration. This motivated Lomb (1976) to propose a modification where the design matrix A is orthogonalized by introducing a phase shift τ_k in each of the fitting functions in (1), to the effect that the normal matrix A^T A becomes diagonal. This modification has little practical advantage now, but it brings in additional restrictions precluding necessary extensions of the model, as we will now discuss. Lomb's modification is therefore not recommended. A more accommodating and rigorous way of orthogonalization, if such a technical action is deemed desirable, was proposed by Ferraz-Mello (1981). Scargle (1982) also mentions that the spectral power of pure noise data has a more predictable statistical distribution, but this argument is only valid for Gaussian noise with equal variances, which is never the case in practice.
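To make the procedure concrete, here is a minimal Python sketch of this per-period LS fit; the function and variable names are ours rather than the paper's, and no attempt is made to optimize the loop.

```python
import numpy as np

def ls_periodogram(t, d, periods):
    """Classical 2-norm (Lomb-Scargle-type) periodogram by direct LS fits.

    t, d    : times and centralized observations (d has zero mean)
    periods : grid of trial periods p_k
    Returns the fitted amplitude sqrt(c_k^2 + s_k^2) per trial period.
    """
    amp = np.empty(len(periods))
    for k, p in enumerate(periods):
        f = 2.0 * np.pi / p                        # angular frequency f_k
        A = np.column_stack([np.cos(f * t),        # design matrix of Eq. (2)
                             np.sin(f * t)])
        x, *_ = np.linalg.lstsq(A, d, rcond=None)  # LS solution, Eq. (3)
        amp[k] = np.hypot(x[0], x[1])              # amplitude of the sinusoid
    return amp
```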
EXTENDED PERIODOGRAM MODELS

Astronomical data series d often include non-periodic components. Using the archetypical example of detection of exoplanet signals in precision radial velocity measurements of host stars, the expected additional terms include a constant offset (the systemic radial velocity of the exoplanet system) and possibly a linear trend from perspective acceleration or a distant binary companion. It is not recommended to estimate these terms separately and prior to the periodogram solution and subtract them from the original data, which, unfortunately, is often done in practice. The reason why this pre-processing leads to an error in the periodogram is that these terms are not orthogonal to the fitted sine functions on a non-uniform cadence of data points. Unlike the regular Fourier transform, the trial frequencies are not integer multiples (harmonics) of the time interval. If a sinusoidal signal is present in the data, its estimated amplitude or power will be affected by the biased estimate of the constant term. The only correct and consistent way of dealing with additional terms is to include them in the fitting model (Ferraz-Mello 1981; Cumming et al. 1999). For example, the fitting model suitable for exoplanet detection can be

d_i = x_0 + x_1 t_i + x_2 cos(f_k t_i) + x_3 sin(f_k t_i)    (4)

for each trial period p_k. Note that a separate constant term x_0 and a linear slope x_1 are obtained for each trial period, and they are not equal for different trial periods. The variation of these terms with the trial period reflects the error introduced into the periodogram by subtracting the common terms a priori.

The periodogram estimate can be the amplitude of the fitted sinusoid,

a(p_k) = (x_2^2 + x_3^2)^(1/2).    (5)

As discussed above, we are using the amplitude periodogram in this paper, which has a more intuitive interpretation as the amplitude of the periodic signal in the same units as the measurements. The linear condition equations still take the form (2), but the design matrix A now has four columns, and vector x includes four unknowns x_0, x_1, x_2, and x_3. The solution vector x is obtained from the least-squares solution, Eq. 3. If a significant signal a(p_k) is detected, the corresponding values x_0(p_k), x_1(p_k) provide the best estimates of the constant term and the linear trend.
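A sketch of the extended model, under the same assumptions as the previous snippet: the design matrix simply gains the offset and trend columns, and the amplitude of Eq. (5) is read off the last two coefficients.

```python
import numpy as np

def extended_ls_periodogram(t, d, periods):
    """LS periodogram with the extended model of Eq. (4):
    offset + linear trend + sinusoid, all refitted for every trial period."""
    amp = np.empty(len(periods))
    for k, p in enumerate(periods):
        f = 2.0 * np.pi / p
        A = np.column_stack([np.ones_like(t),   # x0: constant offset
                             t,                 # x1: linear trend
                             np.cos(f * t),     # x2: cosine term
                             np.sin(f * t)])    # x3: sine term
        x, *_ = np.linalg.lstsq(A, d, rcond=None)
        amp[k] = np.hypot(x[2], x[3])           # amplitude a(p_k), Eq. (5)
    return amp
```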
STATISTICAL UNCERTAINTIES

What is the confidence level of a detected signal in the LS periodogram? This is an estimate of crucial importance, because the probability of the null hypothesis (that the detected feature is just a random fluke), also known as the false alarm probability (FAP), determines if we can believe the result. Traditionally, a high formal confidence is desired in astronomical applications such as detection of exoplanet signals, the recommended value being 0.997 (the 3σ level for a normal distribution). A robust method of estimating the FAP is the bootstrap simulation, which is also extendable to non-Gaussian distributions of measurement error. This is the method of "last resort" when the signal-to-noise ratio (SNR) of the detected signal leaves room for a catastrophic false positive. It is computationally expensive, however, and requires a sufficiently large number of data points. Monte Carlo methods, which are also computationally expensive, can be efficient when the statistical distribution of the observational noise is known. A random number generator is used to construct a sequence of synthetic measurements on the given sequence of times t_i, then a periodogram solution is obtained for each realization of noise, and the signal amplitude (c_k^2 + s_k^2)^(1/2) is computed. Repeating this process multiple times (O(10^3) realizations are usually required for an accurate estimation) allows us to estimate the CDF of the posterior distribution of the periodogram amplitude at any trial period, and hence, the p-value of the null hypothesis (or FAP).

Here we describe a computationally efficient and direct method of confidence estimation for LS periodograms in the extended form (Eq. 4). Noting that the standard periodogram solution obtained from the LS adjustment per Eq. 3 is already based on the assumption that the measurement error is normally distributed, a direct computation of the periodogram cumulative distribution function (CDF) can be performed. If the covariance matrix C_d of the observations is known or assumed, the corresponding covariance of the solution vector is

C_x = C A^T C_d A C,    (6)

where C = (A^T A)^(-1). It is often assumed that the measurements d are statistically independent, in which case the matrix C_d is diagonal. If the errors also have the same variance σ^2, this equation further simplifies to

C_x = σ^2 C.    (7)

The solution covariance C_x is a 4 × 4 symmetric matrix, which is computed for each trial period p_k. We are mostly interested in the a(p_k) statistics per Eq. 5. The two involved statistics x_2 and x_3, in accordance with the assumed normal distribution N(0, σ_i) for each data point, are binormal variates, whose covariance matrix C_a is the corresponding 2 × 2 block of C_x. The estimated vector y = [x_2, x_3]^T can be standardized to obtain

ȳ = C_a^(-1/2) y,    (8)

so that ȳ is a binormal uncorrelated variate of unit variance. This is equivalent to determining the error ellipse for binormal variates. The components of ȳ can be interpreted as the upper and lower signal-to-noise ratios of the given periodogram result. Consequently, the quadratic form

ψ = ȳ^T ȳ    (9)

is a χ²-distributed variate with 2 degrees of freedom. The corresponding confidence of rejecting the null hypothesis can be computed from the cumulative distribution function (CDF) of the χ²[2] distribution for each periodogram value.
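The computation of ψ condenses into a few lines; this sketch assumes the simplified covariance of Eq. (7) with a common σ, i.e. the unweighted case discussed further below.

```python
import numpy as np

def psi_statistic(A, x, sigma):
    """psi statistic of Eq. (9) for one trial period.

    A     : N x 4 design matrix of the extended model
    x     : fitted coefficient vector [x0, x1, x2, x3]
    sigma : common standard deviation of the (assumed Gaussian) errors
    """
    C = np.linalg.inv(A.T @ A)   # C = (A^T A)^-1
    Cx = sigma**2 * C            # solution covariance, Eq. (7)
    Ca = Cx[2:4, 2:4]            # 2x2 block for the sinusoid terms x2, x3
    y = x[2:4]
    # y^T Ca^-1 y equals ybar^T ybar with ybar = Ca^(-1/2) y, i.e. Eq. (9);
    # under the null hypothesis it is chi^2-distributed with 2 dof.
    return float(y @ np.linalg.solve(Ca, y))
```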
For a graphical representation of periodogram results, it is convenient to compare the confidence levels to specific points, which correspond to the ±1σ, ±2σ, and ±3σ intervals of the normal distribution, which have the cumulative probabilities of 0.683, 0.955, and 0.997, respectively. The corresponding levels of ψ (computed as the inverse CDF of the χ²[2] distribution) are 2.296, 6.180, and 11.829. Periodogram amplitudes with ψ-values above 11.829 can then be regarded as highly confident positive detections at a confidence level above 0.997.

In the exoplanet detection literature, an alternative method of FAP estimation is often used, developed by Baluev (2008). It is also based on the assumption that the signal contains only a finite set of model (base) functions, and the random component of the data vector is pure Gaussian noise with the known standard deviations σ_i. The statistical significance of a single periodogram value can then be naturally estimated from the properly normalized difference of the reduced χ² statistic of residuals with and without the corresponding harmonic terms (e.g., Eq. 6 in Cumming et al. 1999), which follows the F-distribution (or beta-distribution if two or more specific periodogram frequencies are considered). However, while we have in practice a large number of periodogram value realizations, only the maximum value and the corresponding trial period are of interest. Even in the absence of a detectable signal, given a large number of trials, it is probable that the highest significance value exceeds the threshold confidence level. This probability can be estimated within the extreme value statistics of an F-distributed homoscedastic random process assuming that the periodogram values are independent. This method is vulnerable to aliasing, which is caused by the limited spectral window of the given data series. The adjacent periodogram values are not independent, and periodogram features become increasingly wider "window functions" toward the longest trial periods. Non-uniform cadences with long gaps can also generate aliasing, spectral leakage, and spurious periodogram peaks. The extreme-value distribution fitting method (Süveges 2014) is more general for non-Gaussian processes, but it still refers to the null hypothesis of uncorrelated white noise in the data, which is inaccurate for the specific applications in this paper, or the spectroscopic detection of exoplanets (Makarov et al. 2009).

TO WEIGHT OR NOT TO WEIGHT?

Astronomical time series often have unequal formal errors of individual data points. The formal error represents the expected standard deviation of the measurement, which can vary in a wide range because of observational conditions, instrument setup, etc. The LS solution in Eq. 3, on the other hand, is unweighted, because it does not involve the estimated formal errors. The standard way of dealing with processing data of non-uniform precision is to use weighted LS fitting. It can be applied to LS periodogram analysis too, in the framework of a weighted periodogram solution. The basic equation replacing (2) becomes

(W A) x ≅ W d,

where the weight matrix is W = C_d^(-1/2), i.e., W = diag(1/σ_i) for independent measurements. The covariance of the right-hand part is now the identity. The formal covariance of the periodogram coefficients of interest transforms from Eq. 7 into

C_x = (A^T C_d^(-1) A)^(-1).

The subsequent analysis of periodogram uncertainties is the same as described in Sect. 4.
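The quoted ψ thresholds follow directly from the inverse χ²[2] CDF, and the weighted solution amounts to scaling the rows of A and d by 1/σ_i. A brief sketch (scipy is assumed to be available; names are ours):

```python
import numpy as np
from scipy.stats import chi2

# The quoted psi levels are the inverse chi^2[2] CDF at the 1, 2 and 3 sigma
# cumulative probabilities of the normal distribution:
print(chi2.ppf([0.6827, 0.9545, 0.9973], df=2))  # ~ [2.296, 6.180, 11.829]

def weighted_ls(A, d, sigma):
    """Weighted LS periodogram solution for one trial period.

    Scaling the rows of A and d by 1/sigma_i makes the error covariance of
    the right-hand side the identity; the coefficient covariance is then
    (A^T C_d^-1 A)^-1, replacing Eq. (7).
    """
    Aw = A / sigma[:, None]
    dw = d / sigma
    x, *_ = np.linalg.lstsq(Aw, dw, rcond=None)
    Cx = np.linalg.inv(Aw.T @ Aw)
    return x, Cx
```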
We performed limited experiments using the formal weights on the example described in Section 6 to estimate the impact of this additional modification. We found rather limited changes in the computed periodogram amplitudes between the unweighted and weighted LS options. The most prominent features indicating possible signals have approximately the same shape and location. The greatest difference is found in the estimation of the 1σ and 3σ confidence intervals. The covariance of the periodogram coefficients C_a is generally much smaller for the weighted solution than for the unweighted one. This is caused by a large spread of individual formal errors, and the fact that the weighted LS solution is optimal. If all the formal errors are equal, the periodogram covariances and the derived amplitudes become equivalent in the two solutions. Thus, the weighted covariance C_a is the global minimum of all possible unweighted counterparts. The lower covariances result in narrower confidence intervals, and the net result is that most of the periodogram solution becomes a highly confident positive detection. This result is completely misleading for the given example, because, as we will see in the next Section, the formal errors of the data points have little bearing on the actual dispersion and statistical distribution of the data.

WHY DO WE NEED SOMETHING ELSE?

Let us summarize the implicit assumptions involved in the LS periodogram method.

1. The data vector is a composition of random uncorrelated noise and a single monochromatic sinusoidal signal, whose amplitude and period are to be determined.
2. The measurement noise is Gaussian.
3. The data sequence is centralized, i.e., has a zero mean, unless the extended version of the method is employed.

An example when the second assumption is violated can be found in Makarov et al. (2010). The astrometric position (photocenter) of the Sun as measured by a distant observer is subject to stochastic variations caused by the presence of sunspot groups and bright plage areas on the rotating surface. Each photometric feature generates a time-variable shift of the unresolved disk on the time-scale of days, which is not periodic because of the phase scrambling. The composition of such stochastic signals is an unpredictable "jitter". The measured shifts from the mean photocenter show an utterly non-Gaussian distribution because the intrinsic distributions of the sunspot sizes, lifetimes, and positions within the disk are not normal. In this case, the nominal LS periodogram, as well as the traditional FAP estimation, are likely to produce misleading and inaccurate results.

Fig. 1 shows the observed time series used in this paper to illustrate the application of the proposed 1-norm periodogram analysis. It shows the high-accuracy astrometric data collected by geodetic VLBI for the ICRF3 source IERS B0642+449 over nearly 40 years of continuous observations. Each data point corresponds to a one-day "session" with multiple delay measurements of this source together with a number of other ICRF3 sources. The observational data are represented as coordinate offsets x = (α_obs − α_mean) cos δ_mean (left panel) and y = δ_obs − δ_mean (right panel) in mas, where {α_mean, δ_mean} are the weighted mean coordinates for this source in the equatorial coordinate system. The formal errors for each observation are shown as error bars. This enigmatic high-redshift (z = 3.41) gamma-ray blazar is obviously one of the astrometrically unstable ICRF3 sources, with a shifting position mostly in the R.A. component. The origin of the position variations is outside of the topic of this paper, but we briefly note the study by Xu et al. (2016), who detected a dual structure of IERS B0642+449 from a closed delay analysis of a high-intensity geodetic VLBI session. The detected separation of the dual components is approximately 0.46 mas and the position angle is 262.2°. One of the interesting applications of sub-mas astrometry with VLBI is the possibility of detection of orbiting dual AGNs. With an estimated scale of 7.6 pc/mas at this redshift, a binary black hole with a total mass of 10^10 M_⊙ and a period of 40 years may have an angular separation of about 10 µas, which may be within reach with this type of data. Central engine binarity may be one of the explanations for the observed quasi-periodic modulation of some gamma-ray blazars' light curves (Ackermann et al. 2015).

Both coordinate trajectories in Fig. 1 appear to include long-term variations and, possibly, periodic components on the timescale of a few hundred days. Are they statistically significant? We begin with the standard LS periodogram analysis using the extended model Eq. 4. The need to include the linear terms, in particular, comes from the possibility of a "secular" proper motion in the data, which is not part of the astrometric model used in the VLBI data reduction pipeline. We compute the periodogram fitting coefficients {x_0, ..., x_3} for an exponential grid of 1000 trial periods, p_k = p_0 dex(q k), with the exponent step q = log10(3652.5/p_0)/1000. The longest trial period is then 10 yr, which is practically limited by the time span of the available data. The results are shown in the upper row plots of Fig. 2 for the two coordinate components. The periodogram amplitude estimates are connected with a black line to aid the eye. The significance of each periodogram point is also computed for this unweighted solution according to Sect. 4. We color-coded the significance by the normalized confidence level, so that estimates below the 1σ level are marked with blue dots, and estimates above the 3σ level are marked with red dots. A large number of values appear to be highly significant, with periods across the entire range, including some short periods below 100 d, which obviously cannot be physical. This result, with a jungle of sharp peaks in the short-period domain and a few prominent features in the long-period domain, is typical of LS periodograms for "noisy" data. The inference is completely false, and we will now reveal why that happens.
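Before turning to that, the exponential trial-period grid described above can be generated as follows; the shortest period p_0 is not stated in this excerpt, so the value below is only a placeholder.

```python
import numpy as np

# Exponential grid of 1000 trial periods, p_k = p0 * 10**(q*k), reaching
# 10 yr (3652.5 d) at the last grid point; p0 is an assumed placeholder.
p0, n = 10.0, 1000
q = np.log10(3652.5 / p0) / n
periods = p0 * 10.0**(q * np.arange(1, n + 1))   # periods[-1] == 3652.5 d
```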
The single-epoch positions measured with VLBI are two-dimensional, and each position determination {x, y} comes with a formal covariance G, which is a 2 by 2 matrix. It is convenient to consider the normalized and centralized single-epoch position offset

D = ([x − x̄, y − ȳ] G^(-1) [x − x̄, y − ȳ]^T)^(1/2),

because it is a scalar variate, which is expected to follow a Rayleigh distribution with scale 1, reducing the dimensionality of the statistical analysis to 1. The true coordinates {x̄, ȳ} are not known, but they can be separately estimated as the weighted mean position. Note that even the well-known formula for the covariance G is based on the underlying assumption of normally distributed random errors. The variate D allows us to test this basic assumption. The histogram of D values computed for the example data set in Fig. 1 is shown in Fig. 3. For reference, the expected Rayleigh[1] distribution (normalized to the same area) is shown with the blue line. We can see that the actual distribution of astrometric offsets is very far from the expectation, and the difference cannot be fixed just by scaling the formal errors. Although the mode of the empirical distribution is approximately where it is expected to be (at 1), a long and powerful tail stretching far beyond the Rayleigh[1] curve indicates that nearly half of the available measurements have values associated with nil probabilities of occurrence within the assumed statistical model.

The heavy-tailed nature of the data distribution invalidates the LS periodogram method. The data points with large deviations from the mean should not be called outliers in this case, because they represent a large part, if not the majority, of the population. Simple fixes such as clipping the data outside the 3σ threshold are not justified. The numerous deviant data points corrupt any least-squares estimation and generate bogus signals in this periodogram analysis. Methods of robust estimation are designed to handle heavy-tailed data in a more consistent way. In particular, the 1-norm estimation seeks to minimize the sum of absolute values of residuals rather than the sum of their squares:

L1 = Σ_i |d_i − m_i|,

where m_i is the model value for data point i. This merit function diminishes the impact of large deviants and permits a meaningful solution for any intrinsically symmetric population. It is robust with respect to the subsample containing high normalized offsets, because each individual data point has an effectively lower weight in the solution, irrespective of its value.

IMPLEMENTATION OF 1-NORM PERIODOGRAMS

The same periodogram models (Eqs. 1 and 4) can be used as in the classical LS method. The main differences in implementation are of a technical character. The main optimization problem is no longer linear, and it cannot be formalized as Eq. 2. Consequently, there is no direct calculation of the associated covariance matrix of the periodogram coefficients. This can still be done numerically using Monte Carlo simulations. The solution itself is implemented with one of the existing global nonlinear optimization methods with vector-valued arguments, such as the Nelder-Mead (simplex downhill), differential evolution, or simulated annealing methods. These methods are computationally much more expensive than the regular LS periodogram. However, we achieved a computing time of about 1 min on a regular laptop for the given example with 1668 data points and 1000 trial periods for the two time series.

The resulting 1-norm periodograms for the given data sets are shown in Fig. 2, lower row. They are expressed in the same values (amplitudes, per Eq. 5) and units as the LS periodograms in the upper row, so that they can be directly compared. Formal confidence levels cannot be directly computed for the 1-norm solutions, because the population distribution is non-Gaussian. We reproduce, however, the re-normalized confidence intervals 1σ and 3σ from the LS solution to emphasize the significance of the results.

Quite clearly, the robust 1-norm periodograms paint a different picture of the temporal variations of the given data. The amplitude values dropped by half or more, and most of the estimated values are now below the 1σ interval. The largest reduction is seen in the high-frequency domain. Given the nature of the object under investigation, the low-frequency features are of special interest. We find the main features at different locations than with the LS method, and for the RA component, they clearly dominate the spectral power distribution. Intriguingly, there is a compact location around 1730 d with periodogram amplitudes above 3σ, which was completely insignificant in the LS solution. To test if this point is indeed associated with a high level of confidence, much more extensive bootstrapping or Monte Carlo simulations are required.

We performed a non-parametric bootstrap by producing 100 data samples from the original time series by randomly permuting its elements but keeping the fixed cadence of epochs, thus inheriting the same distribution of the uncorrelated noise component as the original data. Then, for each data sample, we computed the LS and 1-norm periodograms. The bootstrap distribution appears reasonably symmetric. For each period, the N% confidence interval (0 ≤ N ≤ 100) is given by the non-parametric percentile bootstrap interval contained between the ((100−N)/2)th and ((100+N)/2)th percentiles. For verification purposes, the periodogram analysis and the bootstrapping estimation were independently implemented by two authors using different computer languages, on the same data set. Figure 4 displays the obtained results, where the grey lines represent the difference between the upper and lower bounds of the confidence intervals, respectively, for N = 68.3% (dotted line), N = 95.5% (dashed line), and N = 99.7% (solid line). In the case of the 2-norm implementation, the bootstrap provides a confidence interval consistent with the results shown in Fig. 2. For the 1-norm spectrum, the confidence interval is lower than for the 2-norm, reflecting the robustness of the 1-norm estimation, which is less sensitive to the data points in the extended tail of the distribution. For the right ascension, the amplitudes reach their highest values, with a significance level well above 99.7%, indicating that the bump observed at a period of ∼1700 days is not due to chance; the secondary peak at ∼2800 days should also be considered for further investigation. We note, however, that the bootstrap-estimated confidence only refers to the random uncorrelated noise in the data (of arbitrary PDF). The data may include a time-correlated component of physical or instrumental origin. Broad periodogram features seen in Fig. 4 may require additional analysis using, e.g., structure functions of first or second order, which also incorporate periodic components with time-variable phase (Rutman 1978; Simonetti et al. 1985; Rutman & Walls 1991).
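A compact sketch of the 1-norm periodogram and the permutation bootstrap just described; the optimizer (Nelder-Mead) is one of those named in the text, while using the LS solution as the starting point and the function names are our choices. Note that the bootstrap re-runs the full periodogram nboot times and is therefore slow.

```python
import numpy as np
from scipy.optimize import minimize

def one_norm_periodogram(t, d, periods):
    """Robust 1-norm periodogram: minimizes sum |residuals| of the extended
    model (offset + trend + sinusoid) at each trial period."""
    amp = np.empty(len(periods))
    for k, p in enumerate(periods):
        f = 2.0 * np.pi / p
        A = np.column_stack([np.ones_like(t), t, np.cos(f*t), np.sin(f*t)])
        cost = lambda x: np.abs(d - A @ x).sum()    # 1-norm merit function
        x0, *_ = np.linalg.lstsq(A, d, rcond=None)  # LS fit as starting point
        res = minimize(cost, x0, method='Nelder-Mead')
        amp[k] = np.hypot(res.x[2], res.x[3])       # amplitude, Eq. (5)
    return amp

def permutation_bootstrap(t, d, periods, nboot=100, seed=0):
    """Non-parametric bootstrap: permute the data over the fixed cadence,
    recompute the periodogram, and take percentile confidence bands."""
    rng = np.random.default_rng(seed)
    boot = np.array([one_norm_periodogram(t, rng.permutation(d), periods)
                     for _ in range(nboot)])
    lo, hi = np.percentile(boot, [0.15, 99.85], axis=0)  # 99.7% band
    return lo, hi
```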
A SEARCH FOR PERIODIC MODULATION IN OBSERVED POSITIONS OF ICRF3 SOURCES

Diurnal geodetic VLBI sessions have been regularly scheduled over nearly 40 years, using networks of stations separated by baselines of hundreds to thousands of kilometers. Within each daily session, a number of widely separated radio sources are observed multiple times over the course of 24 hours. The resulting data are processed in a few data analysis centers, including the U.S. Naval Observatory. In this paper, we use a global solution for two-dimensional coordinates of epoch calculated at USNO (2022a) in the standard S/X band setup. This data product includes astrometric time series from more than 6000 diurnal sessions. The total number of sources is 5153 in this data set, but here we only consider 259 of them with more than 200 single-epoch measurements.

The 1-norm periodogram computation was uniformly applied to each of the frequently observed ICRF3 sources, separately for RA and Dec offsets from the weighted mean positions. These mean positions are specifically computed for the given data set and the solution version, so they may slightly differ from the published ICRF3 mean positions. The purpose of this numerical experiment was to identify sources with possible sinusoidal variations in the observed positions on the sky. The results are presented in a compact form in Table 1 for 49 quasars where the formal significance criterion ψ > 11.829, from Eq. 9, is triggered in either of the coordinates for at least one trial period. The table provides the IERS names of the sources (which should be prepended with the letter B to match the Simbad identification), our computed mean RA and Dec coordinates in degrees, the total number of diurnal sessions, and the significant trial periods.

The main result of this computation is that there seem to be no isolated single-frequency sinusoidal signals in the observational data similar to those that are found for astrometric binaries. Instead of well defined peaks in the periodograms, we find "packages" of trial periods with elevated amplitudes and significance levels. The emerging picture is more consistent with an ensemble of small vaguely periodic modes of non-commensurate frequencies. One-fifth of the detections have a rising periodogram amplitude toward the upper limit of this analysis (10 yr). The significant periods are mostly longer than 2 yr, although much shorter trial periods have been tested. Only 19% of the sample show periodic variations above the 3σ level. The typical peak amplitudes are in the range 100-200 µas.
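The selection rule used for Table 1 can be stated in one line; psi_ra and psi_dec are assumed to be precomputed arrays of ψ values over the trial-period grid for the two coordinates of one source.

```python
import numpy as np

def flag_source(psi_ra, psi_dec, threshold=11.829):
    """Screening rule from the text: a source is flagged when psi exceeds
    the 3-sigma level (11.829, per Eq. 9) in either coordinate for at
    least one trial period."""
    return bool(np.any(psi_ra > threshold) or np.any(psi_dec > threshold))
```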
We note that the robust periodograms were computed separately for the RA and Dec coordinates of epoch positions. In the case of Keplerian motion in a binary system, the detectable signal may be present in both coordinates with the same principal period (and its harmonics) but with different phase and amplitude. The probability density of the angle i between the line of sight and the vector of orbital angular momentum is proportional to |sin(i)|. Therefore, nearly face-on projected orbits are less likely than nearly edge-on orbits. For marginally detectable trajectories, the detectable signal is mostly present in one dimension, which is uniformly distributed with respect to the local north direction. The largest extent of the projected orbit can be aligned with one of the coordinate axes with the same probability as a tilt of 45° or 135°. In the latter case, the detectable signal is split between the coordinates, and it should be harder to find it with confidence. Possible ways to deal with this problem include rotating the RA-Dec measurements on a grid of position angles to find a preferred direction maximizing the signal amplitude from a 1D periodogram. Technically, if a significant single-period signal is detected in both RA and Dec coordinates, a 2D version of the robust periodogram can be implemented. The available coordinate measurements are combined in a single LS adjustment, but the number of unknown terms per trial period increases to a minimum of 8 because of the unknown phase. This may erode the confidence level of the signal if the projected orbit is strongly elongated due to the geometric orientation or large eccentricity. Quasars IERS B1451−375, 2234+282, and 2318+049 are attractive targets for further investigation, because they show coherent periodicities in both coordinates from our results in Table 1.

SUMMARY AND DISCUSSION

We have shown in this paper that the classical LS periodogram method is firmly based on strong and restrictive assumptions about the distribution of post-fit residuals (which is assumed to be Normal) and the character of physical signals in the data. It provides an optimal, unbiased, and unique solution for periodogram power or amplitude only under these conditions. Whenever the sample distribution shows significant departures from the Gaussian PDF, or more complex signals are present that are not captured in the model, the LS method becomes corrupted and can produce absolutely misleading results.
We have considered a specific observational data set for a moderately variable ICRF3 source collected over >30 yr by the global geodetic VLBI system. The distribution of astrometric positional offsets with respect to the mean position on the sky is explicitly non-Gaussian, whether scaled with the given 2D formal covariances or taken in absolute values. The normalized offsets are well represented by a log-normal distribution with a tighter mode and a heavy tail extending to high values. Nearly half of the measurements are way outside of the expected distribution. As a result, the traditional 2-norm (LS) periodogram produces a complex structure with multiple features that are formally above the 3σ confidence interval across the spectrum of trial periods. This result is completely bogus. The robust 1-norm periodogram method, when applied to the same data, produces amplitudes that are smaller by half or more. Ranked by the same previously estimated single-point confidence, the 1-norm values are all insignificant except for a single point in the RA component with a period of 1730 d and amplitude 72 µas, which appears to be above 3σ. Is this periodic signal real? The best way to find out is to continue taking high-precision measurements of this source with VLBI for a few years. A stable sinusoidal signal, which could be produced by an orbiting binary black hole, for example, would emerge more strongly on the longer time scale. Alternatively, physical models could be tested where transient periodic signals wax and wane in segments of the data due to phase scrambling. This new method provides the opportunity to more reliably and extensively search for periodic signals in non-Gaussian time series at the margin of available accuracy.

The 1-norm periodogram computation was performed for 259 ICRF3 sources with more than 200 diurnal sessions collected over nearly 40 years. These measurements are characterized by heavy-tailed sample distributions of residuals. We identified 49 objects (19%), which have at least one statistically significant periodogram value in either coordinate component. Short periods are never found, indicating a possible physical mechanism of these signals in the transient structure of the radio-emitting sources. The signals are not consistent with a clean sinusoidal variation at a specific frequency, which would emerge for an orbiting binary black hole. Rather, the pattern is that of "vague periodicity" represented by packages of sine waves with a distribution of frequencies. A possible physical model is a source that moves in loops on the sky, returning to the vicinity of the initial position after some characteristic time, which may also vary with time. The estimated amplitude of these vaguely periodic excursions is 70 µas and higher. About one-fifth of the detected signals are truncated by the upper boundary of our periodograms (10 yr). Further investigation of these astrometric wobbles and continuous daily measurements will refine the models and allow us to understand the nature of the phenomenon.

Figure 1. Astrometric offsets from the mean position of the ICRF3 source IERS B0642+449 measured by VLBI over 30 years. Left plot: right ascension tangential component (x) in mas. Right plot: declination tangential component (y) in mas. Each data point is shown with its formal ±1σ error bar.

Figure 3. Distribution of standardized astrometric deviations for the data set shown in Fig. 1. The expected distribution, which is Rayleigh[1], is shown with the blue curve. The red curve is the empirical best-fitting distribution, which is LogNormal[0.484, 0.779].

Figure 4. Periodograms calculated for the astrometric time series shown in Fig. 1. Left column: right ascension components in mas. Right column: declination components in mas. Upper row: the classic (2-norm) unweighted LS periodogram. Lower row: the proposed robust 1-norm periodogram. In all graphs, the thin black curves represent computed periodogram amplitudes, the blue dots show the values below the 68% confidence level, and the red dots show the values above the 99% confidence level.
Investigations on Stability of Polycarboxylate Superplasticizers in Alkaline Activators for Geopolymer Binders

Calcined clays are interesting starting materials to be used as SCMs (supplementary cementitious materials) in cements or to be converted to geopolymers by activation with a high alkaline activator. The adjustment of the properties in the fresh state, especially regarding the consistency of these binders, is almost exclusively achieved by the addition of water, since commercially available superplasticizers seem to be ineffective in low-calcium geopolymer systems. The aim of this study was a systematic investigation of various PCE (polycarboxylate ester/ether) superplasticizers (methacrylate ester PCE: MPEG, isoprenol ether PCE: IPEG, methallyl ether PCE: HPEG) with respect to their stability in different alkaline activators (NaOH, KOH, sodium and potassium silicate solutions). The effectiveness of superplasticizers (SPs) in low-calcium geopolymer binders was verified by rheological tests. Size exclusion chromatography was used to investigate if structural degradation of the superplasticizers occurs. The investigated PCE superplasticizers showed a thickening effect in the low-calcium geopolymer system. Depending on the alkalinity of the activator solution, a degradation process was detected for all the PCEs investigated. The side chains of the PCEs are cleaved off the backbone by basic ester and ether hydrolysis. The highest degree of degradation was found in sodium and potassium silicate solutions. In alkaline hydroxide solutions, the degradation process increases with increasing alkalinity.

Introduction

New binder systems for the building materials industry are needed to reduce the high CO2 emissions from cement production, which are responsible for over 8% of anthropogenic CO2 emissions [1,2]. A promising route to more environmentally friendly cement is the use of secondary cementitious materials, which include fly ash, ground granulated blast furnace slag (GGBS), silica fume, natural pozzolana and natural calcined pozzolana. In particular, calcined clays can be dehydroxylated at low temperatures (550-800 °C) and used as supplementary cementitious materials (SCMs) to reduce the amount of Portland cement clinker [3][4][5]. The suitability depends on their pozzolanic reactivity and their global availability. However, the substitution of cement with calcined clays often leads to reduced workability of concrete [6,7]. By using alkaline activators like NaOH, KOH or alkaline silicate solutions, the calcined clays form alkali-activated binders (AABs) or geopolymers [8]. In particular, low-calcium geopolymer binders are characterized by high stability to acid attack, high strength and durability [9][10][11][12]. A major problem in the use of calcined clays and other SCMs is their high water demand, resulting from a high fineness and specific surface [13], which increases the viscosity and yield strength of the binders. Usually, superplasticizers (SPs), like polycarboxylate ethers (PCEs) or polycondensates (PCs), are used to control and adjust the rheological properties of cementitious materials.

The PCEs were supplied in liquid form.
To characterize the molecular weight, the three PCEs were lyophilized until mass constancy was obtained and ground with a mortar and pestle. While HPEG- and IPEG-PCEs led to fine white powders, the drying and grinding of MPEG-PCEs led to small yellow-brown flakes, as seen in Figure 2.

NaOH pellets (≥98%, Carl Roth GmbH + Co. KG, Karlsruhe, Germany) or KOH pellets (≥85%, Carl Roth GmbH + Co. KG) were dissolved in ultrapure water (ASTM type 1) to obtain sodium hydroxide and potassium hydroxide solutions with concentrations of 1 mol/L (1 M), 4 mol/L (4 M) and 8 mol/L (8 M). Silica gel (≥99.4%, 400-220 mesh, Carl Roth GmbH + Co. KG) was dissolved in sodium hydroxide solution (7.8 M) or potassium hydroxide solution (6.6 M) to obtain alkali silicate solutions with a solid content of 40 wt.-% and a SiO2/M2O ratio of 2. Hydrochloric acid (37 wt.-%, Carl Roth GmbH + Co. KG) was diluted with ultrapure water to different concentrations for pH neutralization of the alkaline solutions described above.

For an investigation of the influence of the PCEs on the rheological properties of low-calcium geopolymer pastes, a metakaolin (Metaver O, Newchem GmbH, Baden, Austria) was used for the preparation of the pastes. The metakaolin is almost free of CaO and is mainly composed of 52.0% SiO2, 41.4% Al2O3, 0.9% TiO2, 0.6% and 0.3% K2O, which was measured by ICP-OES. XRD and Rietveld phase analysis showed 72% amorphous content, 24% kaolinite, 0.9% anatase and 2.5% quartz. The particle size distribution of the metakaolin was determined using laser granulometry and showed an average particle size of 8.19 µm with a d10 percentile of 0.75 µm and a d90 percentile of 20.22 µm. The specific surface area of the metakaolin was determined as ≈11.5 m²/g via the BET method. The modified Chapelle test described in the French norm NF P 18-513, Annexe A was used to investigate the pozzolanic reactivity of the metakaolin [42]. While the norm requires a reaction conversion of 700 mg of Ca(OH)2 per gram of sample for a material to be defined as "pozzolanic", the investigated metakaolin reactivity was 1275 ± 30 mg/g.

Size Exclusion Chromatography

Size exclusion chromatography (SEC) was used to analyze the samples' molecular weight distribution of the superplasticizers before and after the stability investigations. An AF2000 MultiFlow FFF System (Postnova Analytics GmbH, Landsberg am Lech, Germany) was equipped with a Shodex OHpak SB-805 HQ SEC column (Resonac Europe GmbH, Gersthofen, Germany). The particle size of the column material was 13 µm with a pore size of 7000 Å. A refractive index (RI) detector was used for the acquisition of the sample fractions. The RI detector was calibrated using pullulan standards of 180 to 708,000 Da in 0.05% NaN3 eluent (PSS Polymer Standards Service GmbH, Mainz, Germany).
A 0.05% NaN3 solution was used as the eluent, and the samples were prepared by the dissolution of 10 mg of sample in 10 mL of eluent. The investigations were performed at a flow rate of 0.5 mL/min. For each sample, 20 µL was injected into the SEC by an autosampler. The RI detector was used for the evaluation of the results since it was best suited to detect the polymers as well as the monomers in the sample. The accuracy of the method was investigated by measuring three samples with 1 mg/mL of PCE and three with 5 mg/mL of PCE for each type of PCE. The results of the six samples of each PCE type were then used to determine the standard deviations and coefficients of deviation for the weight and number average molar mass (Mw and Mn) as well as the polydispersity index (PDI). The dn/dc value was measured by direct injection without separation by column for each PCE to determine the recovery rate. For the HPEG-PCEs, the dn/dc was determined as 0.16, and it was determined as 0.174 for the IPEG-PCEs and 0.17 for the MPEG-PCEs.

The possibility of PCEs being adsorbed by the precipitated silica gel needed to be considered for the design of the experiment. Therefore, a sample with MPEG-PCEs and a sodium silicate solution was neutralized with hydrochloric acid and centrifuged at 2215× g for five minutes to separate the precipitated silica gel from the supernatant. The silica gel was washed five times with ethanol (≥96%, Carl Roth GmbH + Co. KG) to wash off any adsorbed PCEs. The ethanol was evaporated, the dry residual chemicals were dissolved in 0.05% NaN3 solution and the sample was analyzed by SEC. Only small amounts of (macro-)monomers were found in the sample, which led to the assumption that PCEs do not get adsorbed by the precipitated silica gel in significant amounts.

Binder Preparation and Rheological Investigations

PCE solutions of different concentrations (0.5, 1.0 and 2.0 wt.-%) were prepared by dissolving dry PCE powders in ultrapure water. The concentration was related to the amount of binder, which is considered the amount of solid content of the alkaline activator and metakaolin. The recipe for the geopolymer pastes was designed to result in pastes with similar viscosity, regardless of whether a sodium silicate solution or a potassium silicate solution was used as the alkaline activator. In total, 20 g of metakaolin was mixed with 18 g of sodium silicate solution or 16 g of potassium silicate solution, a fixed amount of water and 1 mL of PCE solution. The components were mixed with an overhead stirrer for two minutes at 400 RPM. The resulting paste's water-to-binder ratio was 0.40 for pastes with potassium silicate solution and 0.43 for pastes with sodium silicate solution.

The rheology of the geopolymer pastes was investigated by the measurement of dynamic viscosity and shear stress. A Brookfield DV-III rheometer (AMETEK GmbH, Berwyn, PA, USA) equipped with a spindle of type SC4 29 was used for the measurements. The measurement began after pre-shearing the samples at 120 RPM for 60 s. After that, the rotational speed was reduced in 9 steps to 1 RPM. Each step was fixed for 30 s, and every 5 s, the shear stress and dynamic viscosity were determined. All the examinations were performed in triplicate.

Stability Test: Preparation of PCEs in Different Alkaline Media

For chemical stability investigations, liquid samples containing 20 wt.-% of PCEs were prepared by dissolving 1 g of PCEs in 5 mL of ultrapure water.
A beaker with a magnetic stirrer was filled with 99 mL of alkaline solution before 1 mL of the prepared PCE solution was added. The resulting PCE concentration in the solution was, therefore, approximately 2 mg/mL. The solutions were neutralized with hydrochloric acid (HCl) after time intervals of 5, 10, 15 and 30 min, filled in round bottom flasks and lyophilized. Due to the decrease in the pH value during the addition of HCl, silica gel (SiO2) precipitated. Therefore, dried samples from the alkali hydroxide solutions were mainly composed of sodium chloride (NaCl) or potassium chloride (KCl), while the samples from the alkali silicate solutions also contained silica gel (SiO2). The solubility of NaCl (0.65 g/kg) and KCl (0.37 g/kg) in ethanol is low, while SiO2 is insoluble in ethanol at 25 °C. Therefore, to isolate the PCEs and degradation products, 30 mL of ethanol (≥96%) was added to the dried samples, and the flasks were shaken for an hour on a horizontal shaker. The samples were transferred to tubes and centrifuged at 2215× g for five minutes (Eppendorf Centrifuge 5804R, Hamburg, Germany). The samples' supernatant was separated with a syringe and filtered into round bottom flasks through a syringe filter with a 0.45 µm mesh. The solution was diluted with 60 mL of ultrapure water and subsequently lyophilized. A 0.05% NaN3 solution was used as the eluent, and the samples were prepared by the dissolution of 10 mg of sample in 10 mL of eluent. The samples were then analyzed as described in Section 2.2.1.

Figure 3 shows the elution curves of all the PCE samples, while the samples' Mw and Mn, PDI and polymer content are given in Table 1. A large peak followed by a set of two smaller peaks, or in the case of IPEG-PCEs, a peak with a right-hand shoulder, can be seen for all three samples. Due to the rapid elution of polymers and macromonomers, a baseline separation is not possible with the used column. The first peak corresponds to the PCE polymers in the sample, while the further peaks and shoulders can be assigned to the (macro-)monomeric residual chemicals used in the synthesis of PCEs. These residual chemicals are usually not separated from the synthesized PCE superplasticizers. The macromonomer portion corresponds to methoxy polyethylene glycol (MPEG), hydroxy polyethylene glycol (HPEG) or isopropoxy polyethylene glycol (IPEG). The smaller monomer portion corresponds to polyethylene glycol, which is a byproduct of the synthesis of the macromonomer. The Mn and Mw hint at the HPEG- and MPEG-PCEs being structurally more like one another than the IPEG-PCEs, while the PDI for all the samples varied only slightly. The calculated polymer content of the samples was below 90%, meaning that over 10% of each sample is residual reactants from synthesis.
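For readers who want to reproduce the Mn, Mw and PDI numbers from raw elugram data, the standard slice-based averages can be computed as below; the mapping from elution volume to molar mass via the pullulan calibration is assumed to have been done beforehand, and the function is our illustration rather than the authors' code.

```python
import numpy as np

def molar_mass_averages(M, h):
    """Number- and weight-average molar mass and PDI from an elugram slice.

    M : molar masses assigned to each elution slice via the pullulan
        calibration curve
    h : baseline-corrected RI detector heights (proportional to the mass
        concentration in each slice)
    """
    w = h / h.sum()              # mass fraction per slice
    Mn = 1.0 / np.sum(w / M)     # number-average molar mass
    Mw = np.sum(w * M)           # weight-average molar mass
    return Mn, Mw, Mw / Mn       # PDI = Mw/Mn
```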
Impact of PCE on Rheology of Geopolymer Pastes

Two reference geopolymer pastes, one with a sodium silicate solution and one with a potassium silicate solution, were prepared. Their dynamic viscosity and shear stress were measured, and the average of four individual samples, together with the standard deviation, is plotted in Figure 4. The dynamic viscosity and shear stress were higher for the pastes prepared with the sodium silicate solution than with the potassium silicate solution. This is because sodium silicate solutions have a higher viscosity and shear stress, by a factor of approx. 10, at a comparable modulus [29,43,44]. The difference in the dynamic viscosity is high at shear rates below 1 and negligible at higher shear rates. The difference in the shear stress is only small at low shear rates and increases together with the shear rate.

The dynamic viscosity and shear stress for geopolymer pastes with 0.25%, 0.5% and 1.0% PCE are plotted in Figure 5.
The rheological studies have shown that all the PCEs act as thickeners and do not cause any dispersing effect in geopolymer pastes, as shown in Figure 5. It is assumed that the PCEs are unstable in the highly alkaline environment of the geopolymer activators and that the polyethylene glycol side chains are cleaved by hydrolysis; the free PEG side chains are thought to be responsible for the thickening effect in the geopolymer pastes. This corresponds to the results of investigations by Palacios and Puertas [45]. PCEs lose their steric repulsion forces when their PEG side chains are cleaved off in high-alkaline media, and the remaining electrostatic repulsion forces of the residual PCE backbones are superimposed by the thickening effect of the cleaved-off PEG side chains. To clarify this, the reference geopolymer paste was mixed with 0.1% polyethylene glycol (PEG-1000, for synthesis, Sigma Aldrich, Darmstadt, Germany). The addition of the PEG-1000 also caused thickening of the geopolymer paste. It is assumed that the hydrolyzed PCE side chains cause bridging of the metakaolin particles, which results in an increase in the viscosity and yield strength of the geopolymer paste.

Degradation in Alkali Hydroxide Solutions

The elugrams for the PCEs before and after 30 min of exposure to 4 M sodium hydroxide solution are given in Figure 6. All the samples show a reduction in the height and width of the first peak, which corresponds to the polymeric PCE molecules, after the exposure, indicating degradation of those polymers.
At the same time, a much higher third peak can be seen after the exposure, which dwarfs the second peak to a mere shoulder in the case of the MPEG-PCEs. The third peak can therefore be assigned to cleaved PEG side chains and the second peak to macromonomers, i.e., methoxy polyethylene glycol (MPEG), hydroxy polyethylene glycol (HPEG) or isopropoxy polyethylene glycol (IPEG). Since no further peaks are formed, the molecular weight of the PCE backbones must be similar to the molecular weight of the PEG side chains. This degradation process was observed to depend more on the molarity of the alkali hydroxide solution than on its type: with increasing molarity, a stronger decrease in the first peak and a stronger increase in the third peak were seen. The polymer content of each sample was determined by assigning the first peak of the elugram to the polymer and the second and third peaks to the (macro-)monomer content. The calculated polymer content of the samples exposed to alkali hydroxide solutions with molarities of 1, 4 and 8 for exposure durations of 5, 10, 15 and 30 min is shown in Figure 7.

The degradation process of each PCE superplasticizer appears to be time-dependent in the 1 M alkali hydroxide solutions. The decrease in polymer content for the samples exposed to the higher-concentration solutions (4 M and 8 M) shows that most of the reaction takes place in the first five minutes of exposure, after which no significant change in polymer content can be seen. The degree of degradation depends on the molarity of the alkali hydroxide solution, with a remaining polymer content of 45–75% after 30 min of exposure; the highest degree of degradation was found in the 8 M alkali hydroxide solutions. Furthermore, a difference depending on the type of solution can be seen: the polymer content remained above 45% in the sodium hydroxide solutions and above 55% in the potassium hydroxide solutions.

Table 2 shows the Mn and Mw for the samples exposed to alkali hydroxide solutions for 30 min. A reduction in Mn and Mw with increasing alkali concentration can be seen for all the samples. The values are mostly similar between the samples exposed to the sodium hydroxide solutions and the potassium hydroxide solutions. However, the decline in Mn and Mw of the MPEG-PCE samples was smaller in the 1 M and 4 M potassium hydroxide solutions than in the corresponding sodium hydroxide solutions. As expected, the ester bonds of the MPEG-PCEs were less stable under alkaline conditions than the ether bonds of the HPEG- and IPEG-PCEs.

Degradation in Alkali Silicate Solutions

In comparison to the alkali hydroxide solutions, the degradation of the PCEs was found to be much stronger in the alkali silicate solutions. This corresponds to the results reported by Palacios et al., who found that commercial superplasticizers were ineffective especially in sodium silicate solutions [36]. Figure 8 shows exemplary elugrams for the samples exposed to a sodium silicate solution for 30 min. The disappearance of the first peak of all the samples after exposure to the alkaline medium indicates that most of the PCEs were degraded to PEG side chains and the respective polyacrylic backbones. The second peak is significantly smaller after the exposure, which can be assigned to the degradation of the macromonomers to PEG and the respective acrylic acid derivatives; these apparently have a molecular mass similar to that of the PEG side chains. As expected, the third peak grows much larger due to the degradation of the polymers and macromonomers. The calculated polymer content of the samples exposed to the alkaline medium is plotted in Figure 9: the polymer content drops drastically to below 10%, while the monomer content increases to above 90%, after five minutes of exposure. With longer exposure, no significant change in polymer content can be seen for most of the samples; however, the polymer content of the MPEG-PCE samples decreases from ≈10% to ≈5% between 5 and 10 min of exposure.
Figure 10 shows a comparison of the Mn and Mw for the samples exposed to the alkali silicate solutions for 30 min, with concentrations of 7.8 M for the sodium silicate solution and 6.6 M for the potassium silicate solution. A strong decrease in both values to below 2 kDa can be seen, regardless of the type of alkali silicate solution. Although these alkali concentrations are comparable to the investigated alkali hydroxide solution concentrations, a much stronger degree of degradation was seen. This is most likely linked to the presence of silicate ions and will be investigated further.

Figure 9. Polymer content of PCE samples exposed to sodium silicate solution (a) and potassium silicate solution (b) over time.

Conclusions

The study involved systematic stability investigations of ester- and ether-type polycarboxylate superplasticizers (MPEG, HPEG and IPEG PCEs) in high-alkaline activators for low-calcium geopolymers. In particular, the type of activator (NaOH, KOH, sodium and potassium silicate solutions), its alkalinity and the exposure time play an important role in the hydrolysis of the polyethylene glycol side chains of the PCEs. By size exclusion chromatography, it could be shown that all the investigated PCEs are degraded independent of the alkalinity of the sodium hydroxide and potassium hydroxide solutions. For the 1 M sodium hydroxide and potassium hydroxide solutions in particular, the degradation process depends on the time of exposure. At higher alkalinity (4 M and 8 M), degradation occurs in the first 5 min of exposure and leads to a reduction of the molecular weight of approx. 45–55%. When exposed to alkali silicate solutions, the degree of degradation of the PCEs was found to be much higher: the polymer content was reduced to below 10% within the first five minutes, and the Mn and Mw were reduced to below 2 kDa. The reason for the low efficiency of the PCE superplasticizers is the hydrolysis of the ether- and ester-bonded side chains. Rheological investigations showed that all the investigated PCEs increase the viscosity and yield strength of the geopolymer paste. This is due to the free polyethylene glycol side chains, which are responsible for the thickening of the geopolymer paste. It will therefore be necessary to develop efficient and environmentally friendly superplasticizers that exhibit high stability in the alkaline activators of future binder systems.

Data Availability Statement: The data presented in this study are available from the corresponding authors upon reasonable request.
7,564.2
2023-07-31T00:00:00.000
[ "Materials Science" ]
Parameters Identification of the Fractional-Order Permanent Magnet Synchronous Motor Models Using Chaotic Ensemble Particle Swarm Optimizer: In this paper, novel variants of the Ensemble Particle Swarm Optimizer (EPSO) are proposed in which ten chaos maps are merged to enhance the EPSO's performance by adaptively tuning its main parameters. The proposed Chaotic Ensemble Particle Swarm Optimizer variants (C.EPSO) are examined on complex nonlinear systems, namely equal-order and variable-order fractional models of the Permanent Magnet Synchronous Motor (PMSM). The results of the proposed variants are compared to those of the original version to recommend the most suitable variant for this nonlinear optimization problem. For further validation, a comparison between the introduced variants and previously published algorithms proves the efficiency of the developed technique. The results show that the Chaotic Ensemble Particle Swarm variant with the Gauss/mouse map is the most suitable variant for estimating the parameters of the equal-order and variable-order fractional PMSM models, as it achieves better accuracy, higher consistency and faster convergence speed; this may help control the motor's unwanted chaotic behavior and protect it from damage.

Introduction

The Permanent Magnet Synchronous Motor (PMSM) is one of the preferred motors due to its high efficiency, low cost and simple structure [1]. However, PMSM performance may be disturbed considerably by its chaotic behavior, caused by load disturbances or changes in the system parameters [2,3]. This behavior is an important problem in the operation of the PMSM because of torque inconstancy, low-frequency fluctuations in current and speed oscillations, which may in turn lead to motor collapse. The stability and safety of the whole system, as well as the economic operation of the industrial process, may be influenced by this unwanted behavior [4]. Therefore, there is a persistent need for efficient and accurate modeling of the motor's dynamic behavior to improve the control of the chaotic performance and prevent damage to the motor [2,3]. Moreover, it is crucial to introduce an accurate and simple optimization technique to efficiently extract the parameters of these models with minimum execution time, so as to restore the system's normal operation as soon as possible and prevent its damage [5]. Recently, fractional modeling has become a new avenue in dynamic modeling. Fractional-order models provide a proper emulation of the system's physical response by adding new degrees of freedom to the mathematical model of the system [6-8]. Two types of fractional PMSM models, defined as equal-order and variable-order fractional models, have been published and validated to provide more flexible and accurate modeling than the integer-order one [2,9]. Therefore, they are selected for testing in this work. In the literature, two approaches have been introduced to define the parameters of the PMSM's irregular behavior. The first comprises the numerical methods that have been proposed for determining the PMSM model's parameters at a specific operating condition [2,9]; these numerical techniques employ simplifications and assumptions while handling such a nonlinear problem. Thus, a second approach has been proposed to identify the models' parameters using meta-heuristic optimization techniques [10,11]. Yousri et al.
[12] proposed two novel meta-heuristic optimization algorithms, the Chaotic Grasshopper Optimizer (CGOA) and the Chaotic Grey Wolf Optimizer (CGWO), in addition to their basic versions (GOA and GWO), for parameter estimation of both the equal-order and variable-order fractional models. Nevertheless, seeking better accuracy and lower time consumption, which are significant factors in predicting and controlling the motor's unpredictable performance as fast as possible, reliable improved algorithms should be proposed for optimal parameter estimation. The Particle Swarm Optimization algorithm (PSO) is considered the simplest to implement and has performed well in several fields [13]. However, the PSO technique has some disadvantages, such as trapping in local minima, where its search ability is insufficient and its convergence speed is slow, because each particle learns from both its personal best and the global best [14]. To mitigate these drawbacks, several variants of PSO have been reported in the literature to compromise between the exploration and exploitation processes. One of these variants is the Self-organizing Hierarchical PSO (HPSO-TVAC), which was developed to reinforce global exploration in addition to local exploitation, thereby avoiding premature convergence and reaching the global optimum during the later stages of the search [15]. Another variant, known as Comprehensive Learning PSO (CLPSO), was proposed in which each particle learns from the best experiences of the other particles over various dimensions [16]. Additionally, a variant named Fitness-Distance-Ratio-based PSO (FDR-PSO) was introduced to enhance the capability of the local search [17]. Moreover, the Distance-based Locally Informed PSO (LIPS) was employed to enhance the search for multiple local optima in multi-modal problems [18]. Lately, the Ensemble Particle Swarm Optimizer (EPSO) [19] was proposed to combine the features of the PSO, CLPSO, HPSO-TVAC, FDR-PSO and LIPS variants in a single algorithm that offers robust performance on various optimization problems with different features. It is worth mentioning that the EPSO variant performs effectively on complicated nonlinear systems. Meanwhile, the No Free Lunch (NFL) theorem states that no optimizer is perfect enough to be employed as the best method for any optimization problem at hand [20]. Hence, any algorithm may fail to converge and provide unexpected performance as a result of the impact of some control parameters on the exploration and exploitation phases; accordingly, the behavior of the algorithm may change with variations in these parameters. Therefore, several attempts have been made to achieve a compromise between diversification and intensification during the search process of the algorithm to provide appropriate behavior, especially in the case of nonlinear optimization problems [21-25]. Lately, merging the dynamic behavior of chaos maps with meta-heuristic algorithms has affirmed its efficiency in improving the consistency and accuracy of the standard algorithms [26,27]. In this approach, the uniform or Gaussian distributions in the original algorithm are replaced by chaos maps to exploit their statistical and dynamical properties in adjusting the control parameters of the basic versions of the algorithms.
The Chaotic Grasshopper Optimizer [12,28], the Chaotic Flower Pollination Algorithm (CFPA) [29], the Chaotic Differential Evolution algorithm [30], the Chaotic Whale Optimization Algorithm (CWOA) [31] and the Chaotic Salp Swarm Optimizer [26] are some examples of this approach. The proposed chaotic variants of the GOA, FPA, GWO and SSA algorithms have shown their efficiency on various optimization problems. Therefore, in this manuscript, ten chaos maps are merged with EPSO to propose novel variants named the Chaotic Ensemble Particle Swarm Optimizer (C.EPSO) to boost the EPSO performance when solving the nonlinear optimization problem of PMSM modeling. Firstly, to validate the results of the C.EPSO variants, they are compared intensively with the results of the original EPSO using different statistical analyses. Furthermore, an extensive comparison is established between the results of the novel variants and those of the techniques previously listed in the literature. The comparison clarifies that C.EPSO proves its superiority in accuracy, consistency, convergence rate and execution time. The final recommendation is that C.EPSO with the Gauss/mouse map is the proper variant to estimate the parameters of the equal- and variable-order fractional models. The manuscript is arranged as follows: Section 2 presents the fractional models of the PMSM motor. The optimization problem is mathematically modeled in Section 3. Section 4 presents the main equations of the basic EPSO algorithm, the chaos map functions and the proposed chaotic variants of EPSO. Simulations and results are presented in Section 5. Section 6 summarizes the conclusions.

Fractional PMSM Model

The PMSM model has been mathematically formulated as a system of nonlinear differential equations. Fractional calculus is a new trend that is used to provide a better emulation of the motor's physical performance and to increase the model's flexibility. Thereby, the PMSM can be modeled mathematically by fractional differential equations with derivative orders lower or greater than 1 [2,9]. In this work, two fractional PMSM models are introduced: the first is the variable-order fractional model [2], while the other is the equal-order fractional model [9]. The general system of differential equations of the PMSM is described as in Equation (1) [2]:

D^{q_1} i_d = -i_d + \omega i_q + u_d,
D^{q_2} i_q = -i_q - \omega i_d + \gamma \omega + u_q,    (1)
D^{q_3} \omega = \sigma (i_q - \omega) - T_L,

where q_i (i = 1, 2, 3) are the fractional derivative orders, and i_q, i_d and ω are the state variables, which represent the currents and the motor angular frequency. u_q and u_d are the quadrature- and direct-axis stator voltage components [9,32]. γ and σ are the dimensionless operating parameters of the system, and T_L is the load torque. When the inputs of the system are zero, i.e., T_L = u_d = u_q = 0, the system of equations in (1) is reformulated as in Equation (2) [2]:

D^{q_1} i_d = -i_d + \omega i_q,
D^{q_2} i_q = -i_q - \omega i_d + \gamma \omega,    (2)
D^{q_3} \omega = \sigma (i_q - \omega).

The PMSM behaves chaotically when the equal-order fractional model parameters γ and σ equal 100 and 10, respectively, and q_1 = q_2 = q_3 = q = 0.95, as reported in Ref. [9]. For the variable-order fractional model, the parameters γ, σ, q_1, q_2 and q_3 equal 50, 4, 0.99, 1 and 0.98, respectively, as in Ref. [2]. The initial conditions in the chaotic region for the equal- and variable-order fractional models are [i_d, i_q, ω] = [2.5, 3, 1].
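The paper does not spell out its numerical solver, but one common way to integrate such fractional systems is the short-memory Grünwald–Letnikov discretization. The sketch below simulates the zero-input model of Equation (2) under that assumption; the step size and sample count follow the settings quoted later in the simulations (h = 0.001 s, 100 samples). Function names are illustrative.

```python
import numpy as np

def gl_coeffs(q, n):
    """Binomial coefficients c_j^(q) of the Grunwald-Letnikov derivative of order q."""
    c = np.zeros(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = (1.0 - (1.0 + q) / j) * c[j - 1]
    return c

def simulate_pmsm(theta, q, x0, h=0.001, n=100):
    """GL simulation of the zero-input fractional PMSM model, Eq. (2).
    theta = (sigma, gamma); q = (q1, q2, q3); x0 = [i_d, i_q, omega]."""
    sigma, gamma = theta
    q1, q2, q3 = q
    c1, c2, c3 = (gl_coeffs(qi, n) for qi in (q1, q2, q3))
    X = np.zeros((n + 1, 3))
    X[0] = x0
    for k in range(1, n + 1):
        id_, iq, w = X[k - 1]
        f = np.array([-id_ + w * iq,                # D^q1 i_d
                      -iq - w * id_ + gamma * w,    # D^q2 i_q
                      sigma * (iq - w)])            # D^q3 omega
        # memory terms: sum_{j=1..k} c_j * x_{k-j} for each state variable
        mem = np.array([c1[1:k + 1] @ X[k - 1::-1, 0],
                        c2[1:k + 1] @ X[k - 1::-1, 1],
                        c3[1:k + 1] @ X[k - 1::-1, 2]])
        X[k] = f * np.array([h**q1, h**q2, h**q3]) - mem
    return X

# chaotic regime of the equal-order model: sigma=10, gamma=100, q=0.95
X = simulate_pmsm((10.0, 100.0), (0.95, 0.95, 0.95), [2.5, 3.0, 1.0])
```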
Problem Formulation

Practically, the parameters of the equal- and variable-order fractional models are unknown. Accurate estimation of these parameters is an optimization problem that targets minimization of the gap between the original system and the estimated one. Therefore, developing new optimization variants to estimate the unknown parameters globally and optimally is a crucial issue. The general fractional differential equation of the PMSM is described as in Equation (3):

D_t^{q} X = f(X, X_0, \theta, t),    (3)

where D_t = d/dt is the derivative operator with respect to time t, X = (i_d, i_q, ω)^T ∈ R^n is the original system state vector, X_0 is its initial state vector, θ = (θ_1, θ_2, ..., θ_d)^T = (σ, γ)^T contains the original parameters and q = (q_1, q_2, q_3, ..., q_n)^T ∈ R^n contains the fractional derivative orders. The fractional differential equation with the identified parameters is given by Equation (4):

D_t^{\hat{q}} \hat{X} = f(\hat{X}, X_0, \hat{\theta}, t),    (4)

where X̂ = (î_d, î_q, ω̂)^T is the estimate of the state vector, θ̂ = (σ̂, γ̂)^T is the vector of the extracted parameters and q̂ = (q̂_1, q̂_2, q̂_3, ..., q̂_n)^T contains the estimated non-integer derivative orders. The Mean Square Error (MSE) between the original and identified state vectors is employed as the objective function, as in Equation (5):

MSE = \frac{1}{k} \sum_{j=1}^{k} \| X(j) - \hat{X}(j) \|^2,    (5)

where the number of samples is indicated by k. The block diagram describing the process of estimating the fractional model parameters is shown in Figure 1. The core problem associated with conventional optimization algorithms, which may prevent them from converging to the globally optimal parameters, is caused by local optima. Therefore, there is a persistent need to develop and test novel optimization techniques that can handle such intricate nonlinear problems.
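Equation (5) translates directly into an objective function that any of the optimizers below can call. The sketch reuses the simulate_pmsm function from the earlier sketch; the packing of the decision vector as (σ, γ, q1, q2, q3) is our choice for illustration, not prescribed by the paper.

```python
import numpy as np

def mse_objective(params, X_meas, x0, h=0.001):
    """Eq. (5): mean squared error between measured and simulated state trajectories.
    params = [sigma, gamma, q1, q2, q3]; X_meas has shape (k+1, 3)."""
    theta, q = params[:2], params[2:]
    X_sim = simulate_pmsm(theta, q, x0, h=h, n=len(X_meas) - 1)
    return float(np.mean(np.sum((X_meas - X_sim) ** 2, axis=1)))
```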
Chaotic Ensemble Particle Swarm Optimizer (C.EPSO)

In this section, the details of the proposed algorithm for the PMSM model parameter estimation process are presented.

Ensemble Particle Swarm Optimizer (EPSO)

EPSO is the most recent variant of PSO; it combines different PSO variants, namely CLPSO, inertia-weight PSO, LIPS, HPSO-TVAC and FDR-PSO, to create a more robust algorithm able to solve various optimization problems from different applications [19]. The population in EPSO consists of two subpopulations, a small group and a large one. The small group uses the CLPSO algorithm while the large one uses the other predetermined variants. In the large group, for updating a particle, one of the PSO strategies is adaptively selected depending on the success ratio of each methodology over the latest iterations [19]. The control equations of the PSO variants employed in EPSO are listed as follows.

Control equations of inertia-weight PSO: the basic PSO is combined with a control parameter named the inertia weight w to compromise between the local and global searches; the value of w decreases linearly with time. The velocity of a particle is computed as in Equation (6):

V_i^d = w V_i^d + c_1 rand1_i^d (pbest_i^d - Z_i^d) + c_2 rand2_i^d (gbest^d - Z_i^d),    (6)

where i indexes the agents (i = 1, 2, ..., N) and d indexes the dimension (d = 1, 2, ..., D). Z_i^d is the location of the i-th particle and V_i^d is the agent's velocity. pbest_i^d is the best location found by particle i in dimension d, and gbest^d is the best location found by the swarm population so far. w decreases linearly in the range 0.9-0.2 during the run [19] to enhance the balance between global and local search. c_1 and c_2 are acceleration coefficients varied with time, where c_1 = 2 and c_2 = 2. rand1_i^d and rand2_i^d are random numbers in the range [0, 1].

Control equations of modified CLPSO: CLPSO was proposed for tackling the problem of the original PSO being trapped in a local minimum far from the global one in multi-modal optimization problems [16]. In CLPSO, each particle learns from the pbests of other particles in different dimensions. The velocity update employed in the EPSO variant is as in Equation (7) [16]:

V_i^d = w V_i^d + c_1 rand1_i^d (pbest_{f_i(d)}^d - Z_i^d) + c_2 rand2_i^d (gbest^d - Z_i^d),    (7)

where pbest_{f_i(d)}^d is the best position that particle i follows in dimension d, and f_i = [f_i(1), ..., f_i(D)] indicates whether the i-th particle moves towards its own pbest or another particle's pbest in each dimension. c_1 and c_2 are time-varying acceleration coefficients; c_1 is varied in the range 2.5-0.5 while c_2 is varied in the range 0.5-2.5. rand1_i^d and rand2_i^d are randomly generated numbers in the range [0, 1].

Control equations of FDR-PSO: FDR-PSO was introduced to address the convergence problem in PSO. In FDR-PSO, each particle learns from the experience of neighboring particles (nbest) that have a better fitness than itself. Consequently, the velocity component of the i-th particle in the d-th dimension is updated through Equation (8) [17]:

V_i^d = w V_i^d + c_1 rand1_i^d (pbest_i^d - Z_i^d) + c_2 rand2_i^d (gbest^d - Z_i^d) + c_3 rand3_i^d (nbest_i^d - Z_i^d),    (8)

where c_1, c_2 and c_3 are time-varying acceleration coefficients equal to 1, 1 and 2, respectively, and nbest is the experience of the neighboring particle.

Control equations of HPSO-TVAC: HPSO-TVAC was introduced to avoid premature convergence in the early stages and to enhance convergence to the globally optimal solution [15]. In the HPSO-TVAC technique, the particle's velocity is updated as in Equation (9) [15]:

V_i^d = c_1 rand1_i^d (pbest_i^d - Z_i^d) + c_2 rand2_i^d (gbest^d - Z_i^d),    (9)

where c_1 and c_2 are time-varying acceleration coefficients; c_1 is varied in the range 2.5-0.5 while c_2 is varied in the range 0.5-2.5.

Control equations of LIPS: in LIPS, the best experiences of neighboring particles are utilized for particle guidance instead of the best experience of the whole swarm. The velocity of the particle is as in Equation (10):

V_i^d = \chi ( V_i^d + \varphi (P_i^d - Z_i^d) ),  with  P_i^d = \frac{1}{\varphi} \sum_{j=1}^{nsize} \varphi_j \, nbest_j^d,    (10)

where nbest_j is the j-th nearest neighbor of particle i's pbest, and nsize is the neighborhood size, which is dynamically increased in the range [2, 5]. The constriction coefficient χ equals 0.7298. Each φ_j is a positive number drawn randomly from a uniform distribution in the range [0, 4.1/nsize], and the acceleration weight φ equals the summation of the φ_j.

Chaotic Maps

The randomization process in all nature-inspired optimization techniques is performed using a Gaussian distribution. Lately, a novel avenue has been created to improve this process by replacing the Gaussian distribution with chaotic maps in order to benefit from their randomization properties. In this approach, merging the features of chaos maps with the original algorithms achieves rapid convergence to the optimal solution with better accuracy, especially for difficult problems such as multi-modal functions [33]. In this work, ten different one-dimensional chaos maps are utilized to adjust some control parameters of the basic version of the EPSO algorithm and subsequently achieve a better convergence rate and more accurate results. The novel chaotic variants are named the Chaotic Ensemble Particle Swarm Optimizer (C.EPSO). The chaos maps used are listed in Table 1 (chaotic map formulas from [33]). The length of each chaotic sequence equals the maximum number of iterations and the initial value equals 0.7.
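Two of the ten maps in Table 1 can be sketched as follows; the logistic and Gauss/mouse forms below are the ones commonly used in the chaotic-optimizer literature [33], with the stated initial value of 0.7, though the paper's exact formulas live in Table 1 and are not reproduced in this excerpt.

```python
import numpy as np

def logistic(x):
    """Logistic map x <- a*x*(1-x); chaotic for a = 4."""
    return 4.0 * x * (1.0 - x)

def gauss_mouse(x):
    """Gauss/mouse map: x <- (1/x) mod 1, with 0 mapped to 0 (a common formulation)."""
    return 0.0 if x == 0.0 else (1.0 / x) % 1.0

def chaos_sequence(step, n_iter, x0=0.7):
    """Generate one chaotic value per optimizer iteration."""
    seq = np.empty(n_iter)
    x = x0
    for t in range(n_iter):
        x = step(x)
        seq[t] = x
    return seq

c = chaos_sequence(gauss_mouse, n_iter=200)  # one value per EPSO iteration
```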
Chaotic Ensemble Particle Swarm Optimizer (C.EPSO)

In Equations (6)-(9) of the standard version of EPSO, several control variables are selected randomly, and they are dominant factors affecting the performance of the EPSO algorithm. In Equations (6)-(8), the key factor is w, which has a large impact on EPSO convergence; its value is varied in the range [0.99, 0.2]. Moreover, the c_1 and c_2 coefficients are essential factors that are used in the ranges 2.5-0.5 and 0.5-2.5, respectively, as in Equations (7) and (9). In this paper, these variables are adaptively tuned chaotically over the same intervals, as described in the following. In C.EPSO, w decreases chaotically from 0.99 down to 0.2 in proportion to the iterations, based on Equation (11). In addition, the time-varying acceleration coefficients c_1 and c_2 fluctuate chaotically in the ranges [2.5, 0.5] and [0.5, 2.5], depending on Equations (12) and (13), respectively, where (C-w)_k is the chaotic inertia weight obtained with the k-th chaos map, and w_I and w_F are the initial and final values of the inertia weight, tuned as 0.99 and 0.2.
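Equations (11)–(13) are not reproduced in this excerpt, so the schedules below are only one plausible form consistent with the stated ranges: a linearly decaying inertia weight modulated by a chaos-map value in (0, 1), and time-varying acceleration coefficients treated the same way. They are hypothetical stand-ins, not the paper's exact formulas.

```python
def chaotic_w(c_t, t, t_max, w_i=0.99, w_f=0.2):
    """Inertia weight decaying from w_i to w_f, perturbed by a chaos value c_t in (0,1).
    Hypothetical form for Eq. (11)."""
    return ((w_i - w_f) * (t_max - t) / t_max) * c_t + w_f

def chaotic_coeff(c_t, t, t_max, c_start, c_end):
    """Acceleration coefficient moving from c_start to c_end, chaotically modulated.
    Use (2.5, 0.5) for c1 and (0.5, 2.5) for c2; hypothetical form for Eqs. (12)-(13)."""
    return (c_start + (c_end - c_start) * t / t_max) * c_t
```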
Simulation and Results

In this part, the proposed EPSO and C.EPSO algorithms are employed for the estimation of the fractional PMSM models' parameters. Extensive comparisons among the chaotic variants of these algorithms and the basic ones are made based on different statistical analyses. For further validation of the algorithms' performance, another comparison is made between the novel variants and the state-of-the-art algorithms.

Equal-Order Fractional PMSM Model

The data of the equal-order fractional PMSM model are obtained from [9], where the PMSM system (2) behaves chaotically at parameter values σ, γ of 10 and 100, respectively, equal fractional orders q_1 = q_2 = q_3 = 0.95, and initial values [i_d, i_q, ω] = [2.5, 3, 1]. The sampling vector contains 100 samples with a step of 0.001 s. The introduced EPSO and C.EPSO algorithms are employed to identify the parameters at which the motor behaves chaotically, using 200 iterations and a population size of 20, while the lower and upper limits of the variables are [5, 15] for σ, [80, 120] for γ and [0.9, 1] for all q. The results obtained by the C.EPSO and EPSO variants are listed in Table 2. The table shows that the effect of adding chaos maps to EPSO is significant for both accuracy and consistency: EPSO provides a mean ± STD of the MSE equal to 1.963 × 10⁻¹⁶ ± 4.601 × 10⁻¹⁶, while C.EPSO offers values on the order of 10⁻²⁶ ± 10⁻²⁸, especially with the Gauss/mouse and tent maps. As the time factor and convergence speed are essential, the mean convergence curves are plotted in Figure 2 to show the faster convergence of C.EPSO over the basic version: the C.EPSO variants reach lower MSE values than EPSO by iteration 80, whereas EPSO requires more iterations to approach values close to those of C.EPSO, as shown in the zoomed inset of Figure 2. Regarding execution time, the proposed algorithms take 4.951118 and 5.5194925 s for C.EPSO and EPSO, respectively. Based on these results, it is evident that merging chaos maps, especially the Gauss/mouse and tent maps, with EPSO considerably improves its accuracy, the consistency of its results and its convergence speed, as C.EPSO with the Gauss/mouse and tent maps provides mean ± STD values of the MSE equal to 4.432 × 10⁻²⁶ and 4.985 × 10⁻²⁸, respectively. Furthermore, the convergence curves indicate that the C.EPSO variants reach the minimum MSE values with a lower number of iterations and less execution time.

Variable-Order Fractional PMSM Model

In this section, EPSO and the novel C.EPSO variants are utilized to estimate the parameters of the variable-order fractional model using the data in [2], where the PMSM system (2) behaves chaotically at parameter values γ, σ, q_1, q_2 and q_3 equal to 50, 4, 0.99, 1 and 0.98, respectively, and initial values [i_d, i_q, ω] = [2.5, 3, 1]. The sampling vector contains 100 samples with a step of 0.001 s. The number of iterations is 500 and the number of search agents is 50, while the lower and upper limits of the variables are [2, 8] for σ, [40, 60] for γ and [0.9, 1] for all q. A comparison is made among the results of all variants over 20 independent runs to determine the most efficient one. The mean values and STD of the identified parameters, as well as the cost function values, are tabulated in Table 3. Table 3 indicates that combining chaotic maps with the basic algorithm has a noticeable impact on the accuracy and homogeneity of the results: the mean ± STD of the MSE produced by EPSO improves to the order of 10⁻¹⁰ ± 10⁻¹² when the C.EPSO version is used, especially with the Sine map. The mean convergence curves over 20 independent runs in Figure 3 illustrate that the convergence speed of EPSO is improved by the combination of chaos maps with the basic technique: the decay rate of the C.EPSO convergence is faster than that of EPSO, especially with the Gauss/mouse map, and the C.EPSO variants start to converge at about iteration 140, while EPSO consumes a larger number of iterations. To summarize, chaos maps have a beneficial effect on the performance of EPSO in terms of accuracy, consistency and convergence speed towards the optimal solutions. C.EPSO offers more consistent results, especially C.EPSO7, and, judging from the convergence curves, C.EPSO outperforms EPSO with a shorter execution time, especially C.EPSO3.

Comparison with the Latest Published Algorithms in the Literature

The GOA and GWO algorithms and their chaotic variants are the only meta-heuristic optimization algorithms published for estimating the equal- and variable-order fractional PMSM models, as in [12]. The authors in [12] modified GOA and GWO to improve both the accuracy and the rapidity of convergence of the basic algorithms. In this subsection, an extensive comparison is made between the results of the novel variants and those of the literature algorithms GOA, GWO and their chaotic variants. Based on the results in [12], the best CGWO and CGOA variants for the equal-order fractional model are CGWO1 and CGOA5, which achieved 2.7590 × 10⁻⁷ ± 8.7827 × 10⁻⁶ and 1.7935 × 10⁻¹⁴ ± 8.5292 × 10⁻⁶, respectively. In contrast, the best C.EPSO variants, C.EPSO3 and C.EPSO10, offer best ± STD values of the MSE equal to 4.432 × 10⁻²⁶ ± 4.985 × 10⁻²⁸. These results prove that the newly developed variants provide greater authenticity and consistency of results than the best CGWO and CGOA variants, which in turn affects the veracity of identifying the parameters corresponding to the chaotic behavior of the motor. Additionally, the convergence curves of the best variants of the proposed algorithm (C.EPSO) and of the previously published ones (CGWO and CGOA) are plotted in Figure 4 for the two fractional models. Figure 4a shows the case of the equal-order fractional model, where C.EPSO3 and C.EPSO10 exhibit lower mean MSE values with faster convergence speed compared with CGWO1 and CGOA10.
Likewise, for the variable-order fractional model in Figure 4b, C.EPSO3 and C.EPSO7 exhibit the lower mean MSE values with the faster convergence speed. Accordingly, the pairs (C.EPSO3, C.EPSO10) and (C.EPSO3, C.EPSO7) achieve the most optimal and consistent solutions with the fastest convergence speed for the equal- and variable-order fractional models, respectively, compared to the other algorithms. The main outcome is that, by introducing C.EPSO, the optimal parameters at which the motor behaves chaotically are identified accurately, which in turn improves the accuracy of the control of the motor's chaos and ensures the protection of the motor from damage.

Conclusions

Equal-order and variable-order fractional PMSM models have recently been introduced to increase the accuracy and flexibility of motor modeling. These models have enhanced the physical depiction of the motor despite the complexity of their parameter identification. Therefore, the main target of this work is to achieve proper parameter estimation for these complicated fractional models in order to control the motor's behavior in the chaotic region. Lately, meta-heuristic algorithms have been considered a powerful tool for such intricate identification problems, and improving these algorithms through novel avenues such as chaos maps has become a recent trend. The latest PSO variant, EPSO, was developed here by combining it with ten chaos maps to improve its accuracy and efficiency; as a result, the Chaotic Ensemble Particle Swarm Optimizer variants (C.EPSO) have been proposed. The results of the novel variants were compared with those of the standard version of EPSO and of the published algorithms using intensive statistical analysis. The results show that C.EPSO with the Gauss/mouse map is the most recommended variant for identifying the parameters of the equal-order and variable-order fractional PMSM models: it gives lower error, higher convergence speed and shorter execution time. This variant achieves accurate and rapid prediction of the parameters at which the motor behaves chaotically; subsequently, this may lead to quick control of the motor and better protection from damage.
6,027
2021-02-02T00:00:00.000
[ "Engineering", "Computer Science" ]
A module classification method for light industrial equipment based on improved NSGA2-FCM algorithm

In response to the problem that traditional clustering algorithms easily fall into local optima when dividing modules, this paper improves the initialisation strategy of the NSGA2 algorithm and combines it with the FCM algorithm to propose an improved NSGA2-FCM algorithm for clustering analysis. Firstly, FBS mapping is used to model the functional structure of the product system and identify the relationships between the product's functional structures. Secondly, a correlation synthesis matrix is constructed based on the relationships between the module division drivers. Finally, the improved NSGA2-FCM algorithm is applied to cluster analysis of the product to derive the best module division scheme. The algorithm avoids falling into local optima by optimising the initialisation strategy of the NSGA2 algorithm, while using the FCM algorithm to improve the accuracy of the clustering. This allows the algorithm to explore the solution space more effectively when searching for the best module partitioning solution. Finally, the effectiveness of the algorithm for module classification of light industrial equipment is verified using beer fermenters as a case study.

As an effective tool for rapid product design, proper modular classification can improve design efficiency and effectively address the challenges of product quality, manufacturing cycle time and cost. Most of the current literature on the subject identifies modules by focusing on structural and/or functional aspects, but overlooks modular classification factors. For example, foreign researchers such as Eppinger et al. studied product modularity development techniques and introduced, for the first time, a matrix-based formal approach to modular product structures, indicating the correlations between component modules [1]. Rong et al. proposed a product-oriented approach to modular division of the supply chain, establishing a relationship matrix through the functional and structural interactions between product components [2]. Ramachandran K. et al. investigated the effect of the product modularization rate on rapid product launches and found that, by locating performance improvements in product modularization, an innovative combination of product architecture, pricing and timing decisions could be achieved for rapid product launches [3]. Wei et al. proposed a predefined set of modules based on assembly and disassembly to generate various working levels or functions, creating multiple product variants and enhancing product utility for design-phase decisions through modularity segmentation [4]. Among domestic scholars, Jia Yanlin conducted a systematic study of modularity and proposed a general process for the modular design of mechanical products, dividing the product modular design process into four parts: modular planning, modular division, module creation and module combination. Gu Xinxin divided the work of product modularity division into two major parts: modular product platform construction and order-product modular design. Zhang Haiyan et al. used the function-principle-behavior-structure design model to map the correlations between system components and applied gap statistics and self-organizing neural network methods for cluster analysis to complete the modular division of special cylindrical gear machine tools, laying the foundation for reconfigurable machine tool research [5]. Liu Mingyuan et al.
proposed a module classification method based on an improved genetic algorithm: on the premise of obtaining the component design structure matrix and a module fitness function, an improved genetic algorithm with difference crossover and neighborhood mutation was used to search for the best module classification solution.

Functional structure modeling of product systems based on FBS mapping

FBS mapping can be considered a design specification process in which a designer transforms a set of functions into a product that implements those functions. The FBS model mapping turns the function into a desired behavior that carries out the function. This desired behavior is employed to select and combine structures in a process called synthesis. During synthesis, the structures generate their own real behavior, allowing the range of desired behaviors to change and the function to be redesigned through them [6].

Products often consist of various functional modules, which are not only interconnected but also possess multiple properties and layers of characteristics within the modules themselves. The approach based on function-behavior structures aims to illustrate the connections between product function and structure using a hierarchical analysis from product function to behavior and from behavior to structure. To accurately display the relationship between function and structure within the product and to reduce the complexity of product analysis, this paper proposes the concept of a system level and establishes a product decomposition model of the system's functional structure. This involves first breaking the product down into several smaller systems and then using the function-to-structure mapping to determine the relationship between product function and structure. The product decomposition process is shown in Fig. 1.

Improved NSGA2-FCM algorithms

Improved NSGA2 algorithm. NSGA2 is one of the most efficient and popular evolutionary algorithms for optimization, generating Pareto-optimal solutions by analyzing the solution domain. However, the NSGA2 algorithm also suffers from design flaws in its computation, such as the inability to effectively identify pseudo-nondominated solutions, low computational efficiency, and poor convergence and distribution of the solution set. Improving the NSGA2 algorithm primarily involves enhancing the algorithm's initialization strategy, which changes the initial population distribution and improves the quality of the initial population. Since randomly generated initial populations cannot guarantee the validity of the initial scheme, the algorithm may fall into local optima, negatively affecting the optimization performance of the NSGA2 algorithm. To obtain a good initial population, it is crucial for the algorithm to specify an appropriate chromosome size, which is influenced by the number of required modules. Therefore, this paper proposes Eq. (1) for estimating the ideal numbers of modules and components, where a and b are the numbers of modules and components, respectively, with a ≥ 2 [7].

NSGA2 combined with FCM. (1) FCM algorithm model. The FCM algorithm minimizes its objective function by calculating the membership of each sample point to each class center, where the membership is a function of the degree to which an object x belongs to a set A.
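For reference, the standard FCM alternating update that such a scheme minimizes can be sketched as follows (the paper's exact objective is not reproduced in this excerpt); memberships and centers alternate until the objective J stabilizes.

```python
import numpy as np

def fcm_step(X, V, m=2.0):
    """One membership/center update of fuzzy c-means.
    X: (n, d) samples; V: (c, d) cluster centers; m: fuzzifier (> 1)."""
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + 1e-12  # squared distances, (n, c)
    U = 1.0 / (d2 ** (1.0 / (m - 1.0)))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per sample
    Um = U ** m
    V_new = (Um.T @ X) / Um.sum(axis=0)[:, None] # weighted centers
    J = (Um * d2).sum()                          # standard FCM objective
    return U, V_new, J
```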
In most clustering problems, multiple objective functions need to be optimized, such as the separation between clusters and their tightness. Given these challenges, NSGA2 can achieve satisfactory results in multi-objective optimization problems. Therefore, combining NSGA2 with FCM may offer a better solution for automatically solving fuzzy clustering problems. This hybrid approach preserves the advantages of both the NSGA2 and FCM methods, making the system more robust and flexible.

As required by the clustering classification of the FCM algorithm, obtaining the optimal classification result for product modules necessitates that the number of modules lies within a reasonable range: if the number of modules is too large, production costs increase; if it is too small, the difficulty of product design increases. The range of module classification, derived from the statistical laws studied by Ericsson et al., is shown in Eq. (5) below; it places the module division scheme between √n − l and √n + l modules, where √n denotes the rounded square root of the number of components and l = 1 or 2.

The performance of the NSGA2-FCM algorithm is influenced by several parameters, including the population size, crossover probability, mutation probability and number of iterations. Setting these parameters appropriately is crucial for obtaining high-quality clustering results; therefore, data testing was employed to find the most suitable parameter settings for the algorithm. In this paper, the population size was set to 50, the crossover probability to 0.8, the mutation probability to 0.05 and the number of iterations to 200.

To implement the NSGA2-FCM algorithm, the operations and data structures of the two algorithms need to be intertwined. In the process, the selection, crossover and mutation operations of the NSGA2 algorithm are applied to the membership matrix of the FCM algorithm to achieve multi-objective optimization. Specifically, the membership matrix of the FCM algorithm is treated as a chromosome of the NSGA2 algorithm, and these chromosomes are adjusted through genetic operations to find the best fuzzy clustering solution. As a result, the improved NSGA2-FCM algorithm demonstrates better convergence and stability, and is capable of finding solution sets close enough to the optimal solution within a limited number of iterations; furthermore, these solution sets exhibit high consistency across runs.

Improved NSGA2-FCM algorithm flow. The flowchart of the improved NSGA2-FCM algorithm is shown in Fig. 2.

Comparative performance analysis of algorithms. To verify the optimization performance of the improved NSGA2-FCM algorithm, its performance was compared with that of the NSGA2 and FCM algorithms alone. The experimental data were simulated and pre-processed, and the results of the comparison are presented in Fig. 3. In Fig. 3, blue represents the results obtained using the NSGA2 algorithm, red those obtained using the FCM algorithm, and green those obtained using the NSGA2-FCM algorithm. The results indicate that the solutions derived from the NSGA2-FCM algorithm perform better on both objective 1 and objective 2, and that these solutions are closer to the Pareto-optimal solution than those achieved by the NSGA2 and FCM algorithms alone. This suggests that the NSGA2-FCM algorithm exhibits superior performance when addressing the light industrial equipment module partitioning problem.
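A minimal sketch of two ingredients described above: the Ericsson-style bound on the number of modules (Eq. (5)) and a row-normalized membership matrix serving as an NSGA2 chromosome. The function names and the crisp argmax decoding are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def module_bounds(n, l=1):
    """Ericsson-style range for the number of modules: round(sqrt(n)) +/- l, cf. Eq. (5)."""
    k = int(round(np.sqrt(n)))
    return max(2, k - l), k + l

def random_membership(n_components, n_modules, rng):
    """A membership matrix used as an NSGA2 chromosome (rows sum to 1)."""
    U = rng.random((n_components, n_modules))
    return U / U.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
lo, hi = module_bounds(16)         # 16 beer fermenter components -> 3..5 modules
U = random_membership(16, 4, rng)
modules = U.argmax(axis=1)         # crisp module assignment decoded from memberships
```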
Creation of the correlation synthesis matrix

The weights of the selected module division factors need to be determined, and the sub-matrices are then integrated into the corresponding composite matrix using the matrix integration method. This paper employs hierarchical analysis to ascertain the weights of the individual module division factors. Optimization of the product's functional structure and component recycling are the two primary goals of the product design process. Functional structure optimization involves the modular optimization of the product decomposition and the reconfiguration of components into new modules without reducing the original product's functionality. The recovery optimization objective aims to increase the proportion of product components recovered while maintaining the functional optimization. Consequently, the product functional structure objective is as crucial as the recycling objective. The module division drivers are analyzed separately according to the hierarchical criteria of the module division hierarchy analysis presented in Table 1, and Figure 4 illustrates the weight analysis of the factors involved in module division.

Depending on the assigned proportions, the module division factors must satisfy Eq. (3), which assigns weight values to the modularity factors according to their importance to the target. P_ij represents the correlation between component i and component j, i.e., the degree of influence of one component on the modularity of another component. The minimum value of the matrix elements is 0 and the maximum value is 1. Within this range, the greater the value of an element, the greater the mutual influence between the components resulting from the combination of the different division factors, and the more likely it is that the two components will be grouped into the same module. Conversely, if the interactions between components under the division factors are smaller, the probability of the two components being placed in the same module is correspondingly smaller.

Finally, the relationships between the product components are established through the correlation synthesis matrix. P[n × n] denotes the correlation synthesis matrix between components, and P_ij is the mutual synthesis between components i and j. To suit the later algorithmic optimisation, the correlation synthesis matrix has the following properties: (1) it is a symmetric matrix, i.e., P(ij) = P(ji); (2) it uses the correlations between pairs of components as matrix elements and therefore does not include a component's relationship with itself; however, for simplicity of the algorithm's calculations, each component's self-relationship value is set to 1. The combined correlation matrix then takes the form given in Eq. (4), P = [P_ij]_{n×n} with P_ii = 1. The correlation matrix is in fact a quantification of the interactions between the components of a product and is the basis for modular division using intelligent algorithms. The elements of the correlation matrix are obtained by superimposing the correlations of the various division factors, indicating the total influence of one component on another. The module division method in this paper uses the influence between components to partition the product, and the algorithm is designed to optimize the configuration of the modular components according to the influence of each component on the others.
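The construction of the correlation synthesis matrix amounts to a weighted superposition of the factor sub-matrices, with the weights summing to 1, symmetry enforced, and the diagonal set to 1, exactly as the properties above require. A sketch is given below; the weights 0.36/0.30/0.14/0.20 from the case study are used only for illustration.

```python
import numpy as np

def synthesis_matrix(sub_matrices, weights):
    """Weighted superposition of correlation sub-matrices into P, cf. Eqs. (3)-(4)."""
    w = np.asarray(weights, dtype=float)
    assert abs(w.sum() - 1.0) < 1e-9, "factor weights must sum to 1"
    P = sum(wk * np.asarray(Mk, dtype=float) for wk, Mk in zip(w, sub_matrices))
    P = (P + P.T) / 2.0        # property (1): symmetry, P(ij) = P(ji)
    np.fill_diagonal(P, 1.0)   # property (2): self-relation set to 1
    return P

# e.g., function/structure/material/recyclability sub-matrices Pf, Ps, Pm, Pr:
# P = synthesis_matrix([Pf, Ps, Pm, Pr], [0.36, 0.30, 0.14, 0.20])
```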
Example analysis

Structural analysis of beer fermenters. In layman's terms, a beer fermenter is used to produce beer through the fermentation of grain. Its basic structure is shown in Fig. 5. As can be seen from Fig. 5, the beer fermenter implements the beer fermentation function with the tank, a temperature sensor, a pressure sensor and a cleaning device. With the tank body and stirring device as the core of fermentation and a servo motor as the actuating component, the beer fermentation process is realized by controlling the fermenter's temperature regulating device and pressure regulating device. The main components of the beer fermenter are listed in Table 2. It is assumed that the range of beer fermenter products has been determined to be developed according to a modular architecture. The specific type of architecture is uncertain, but the number of beer fermenter components is known. The production of beer fermenter component candidates and the manufacture of composite modules, as well as the decisions on assembling, transporting and recycling the product, are made by the producer based on the design developed by the designer.

Modelling the decomposition of beer fermenters. According to the differences in the functions they perform, beer fermenters are mainly divided into a fermentation system, a temperature control system, a pressure control system and a power system.

(1) Fermentation system. The fermentation system carries out the fermentation of the raw materials in the beer fermenter to produce beer. It includes the tank body, the tank roof, the legs, the sampling valve and the discharge valve. The body and roof components store the raw materials, realizing the functions of fermenting the raw materials and storing semi-finished products. The sampling valve takes samples, the discharge valve discharges semi-finished products and waste when fermentation is complete, and the legs support the fermenter.

(2) Temperature control system. The internal temperature rises during the fermentation process in the fermenter; this paper only selects the thermometer to realize the function of displaying the internal temperature of the fermenter.

(3) Pressure control system. The pressure control system handles the changes in internal pressure during fermentation. It includes the pressure gauge, the positive and negative pressure valve, the exhaust valve and the air distributor. The pressure gauge realizes the pressure display function, the positive and negative pressure valve and the exhaust valve realize the pressure adjustment function, and the air distributor mainly controls the internal oxygen content of the fermenter.

(4) Power system. The power system provides power for mixing and cleaning the fermenter. It includes the motor, the agitator, the cleaner, the defoamer, the shaft seal and the coupling. The motor is the power source; the agitator stirs the raw material during fermentation to improve fermentation efficiency; the cleaner cleans the inside of the fermenter after fermentation is complete; the defoamer eliminates the air bubbles generated during fermentation to improve fermentation efficiency; the shaft seal realizes the sealing function; and the coupling serves as the coupling element.

Through the above analysis of the beer fermenter, the final product decomposition model shown in Fig. 6 is obtained.

Division of modules. (1) Establishing the correlation synthesis matrix.
According to the module division factors and methods proposed in this paper, the function-related, structure-related, material-related and recyclability-related sub-matrices between beer fermenter modules were evaluated as shown in Tables 3, 4, 5 and 6. According to the characteristics of beer fermenters, the weights of function, structure, material and recyclability are obtained by hierarchical analysis as 36%, 30%, 14% and 20%, respectively. The relevant integrated matrix obtained from the equation above is shown in Table 7.

(2) Performing module classification. According to Table 7, cluster analysis of the components was performed with the modified NSGA2-FCM algorithm. The algorithm parameters were set as follows: population size N = 30, maximum number of iterations 250, crossover probability 0.9, mutation probability 0.03, number of clusters m = 3, fuzzy index w = 2, upper bound on the number of modules 6 and lower bound 0. With these parameter settings, the algorithm was run for 250 iterations to obtain the affiliation curves of the components with each module; the results of the partitioning clustering of the beer fermenter modules are shown in Figs. 7, 8 and 9, respectively.

For different values of the number of clusters, the module partitioning schemes obtained are shown in Table 8. As can be seen from Fig. 8, the module division scheme x1 indicates that, when the number of modules is 4, components 1, 2, 3 and 4 have the highest affiliation with module 1, i.e., module 1 includes components {1, 2, 3, 4}; similarly, module 2 includes components {5, 6, 7}, module 3 includes components {7, 8, 9, 10, 11} and module 4 includes components {12, 13, 14, 15, 16}. Based on these rules and Fig. 8, the final division scheme is obtained as shown in Table 8. A plain fuzzy c-means sketch of this clustering step is given below.
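The paper's modified NSGA2-FCM algorithm is not reproduced in the text, so the following is only a minimal fuzzy c-means sketch of the clustering step, assuming the rows of the integrated correlation matrix P (from the sketch above) serve as component feature vectors; the function name, the parameter values and this feature-vector choice are illustrative assumptions, not the authors' implementation.

import numpy as np

def fcm(X, c, w=2.0, iters=250, seed=0):
    # Plain fuzzy c-means: U[k, i] is the affiliation of component i
    # with module k; w is the fuzzy index.
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                            # memberships sum to 1
    for _ in range(iters):
        um = U ** w
        centers = um @ X / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = d ** (-2.0 / (w - 1.0))               # standard FCM membership update
        U /= U.sum(axis=0)
    return U

# Four modules, assignment of each component by maximum affiliation.
U = fcm(P, c=4)
modules = U.argmax(axis=0)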
Conclusions

In this paper, we propose an improved NSGA2-FCM algorithm by combining the NSGA2 initialization strategy with the FCM algorithm. The product functional and structural relationships are identified according to customer requirements and hierarchical models, and a comprehensive numerical matrix for the module division factors is obtained using the hierarchical analysis method. A comparative analysis of the algorithm's performance demonstrates that the NSGA2-FCM algorithm outperforms the NSGA2 and FCM algorithms alone in terms of the number of generations to convergence and the objective function values, and is closer to the Pareto optimal solution. Using a beer fermenter as an example, a modular partitioning scheme is obtained by employing a combination of the improved NSGA2-FCM algorithm and cluster analysis of the related integrated matrix. It is shown that constructing a comprehensive relationship matrix of components based on module partitioning drivers, and analyzing it with the improved NSGA2-FCM algorithm to obtain an optimized modular partitioning scheme, mitigates the tendency of the clustering algorithm to fall into local optima during partitioning. This approach makes the light industrial equipment module partitioning method more accurate and efficient.

Figure 1. Product modular decomposition process based on FBS mapping.

Figure 6. Decomposition results of the functional structure of the beer fermenter system based on FBS mapping.

Table 1. Hierarchical analysis standard for module division.

Table 2. Main parts of the beer fermenter.

Table 3. Function-related sub-matrix.
Interpretation of Coupled-Cluster Many-Electron Dynamics in Terms of Stationary States

We demonstrate theoretically and numerically that laser-driven many-electron dynamics, as described by bivariational time-dependent coupled-cluster (CC) theory, may be analyzed in terms of stationary-state populations. Projectors heuristically defined from linear response theory and equation-of-motion CC theory are proposed for the calculation of stationary-state populations during interaction with laser pulses or other external forces, and conservation laws of the populations are discussed. Numerical tests of the proposed projectors, involving both linear and nonlinear optical processes for He and Be atoms and for LiH, CH+, and LiF molecules, show that the laser-driven evolution of the stationary-state populations at the coupled-cluster singles-and-doubles (CCSD) level is very close to that obtained by full configuration interaction (FCI) theory, provided that all stationary states actively participating in the dynamics are sufficiently well approximated. When double-excited states are important for the dynamics, the quality of the CCSD results deteriorates. Observing that populations computed from the linear response projector may show spurious small-amplitude, high-frequency oscillations, the equation-of-motion projector emerges as the most promising approach to stationary-state populations.

INTRODUCTION

Providing unique time-resolved insights into electronic quantum dynamics, along with the exciting prospect of detailed manipulation and control of chemical reactions, 1 attosecond science has attracted increasing experimental and theoretical research efforts in the past couple of decades; see, for example, ref 2 for a recent perspective. While the initial step usually involves ionization induced by extreme-ultraviolet or near-infrared laser pulses, Hassan et al. 3 have demonstrated that optical attosecond pulses may be used to observe and control the dynamics of bound electrons with little or no ionization probability. Whether ionization plays a role or not, the rapid development of experimental methodology creates a strong demand for explicitly time-dependent quantum chemical methods that can accurately simulate the ultrafast many-electron dynamics driven by ultrashort laser pulses. While real-time time-dependent density functional theory 4−6 is a highly attractive option from the viewpoint of computational efficiency, it suffers from a number of deficiencies caused largely by the reliance on the adiabatic approximation in most practical applications. 7 Improved accuracy can be achieved with wave function-based methods at the expense of increased computational costs. 7 In a finite basis, the exact solution to the time-dependent electronic Schrödinger equation is the full configuration interaction (FCI) wave function, whose computational complexity, unfortunately, increases exponentially with the number of electrons. We are thus forced to introduce approximations. Perhaps the most widely used time-dependent wave function approximation for simulating many-electron dynamics is multiconfigurational time-dependent Hartree–Fock (MCTDHF) theory 8−11 and the related time-dependent complete active space self-consistent field and restricted active space methods. 11−14
Restricting the participating Slater determinants to those that can be generated from a fixed number of electrons and a carefully selected active space of (time-dependent) spin orbitals, these methods still have the FCI wave function at their heart, eventually facing the exponential scaling wall as the number of active electrons and orbitals is increased. Coupled-cluster (CC) theory offers a gentler, polynomial-scaling hierarchy of approximations that converge to the FCI wave function. Besides the differences in computational complexity, the two methods differ in the sense that MCTDHF captures static (strong) correlation, whereas single-reference CC theory aims at dynamical correlation effects. Yielding energies, structures, and properties with excellent accuracy for both ground and excited states of weakly correlated systems, CC theory has become one of the most trusted methods of molecular quantum chemistry. 15 Recent years have witnessed increasing interest in time-dependent CC (TDCC) theory 16−20 for numerical simulations of many-body quantum dynamics in nuclear 21 and atomic and molecular 22−38 systems. In addition, TDCC theory has played a key role in recent work on finite-temperature CC theory for molecular 39,40 and extended 41 systems. While the papers by Christiansen and co-workers 35,36 are concerned with vibrational CC theory and those of Pigg et al. 21 with nucleon dynamics, the remaining papers are focused on the dynamics of atomic and molecular electrons exposed to electromagnetic fields such as ultrashort laser pulses. In many cases, the main goal is to compute absorption (or emission) spectra 24−28,37,38 by Fourier transformation of the induced dipole moment. This requires the calculation of the induced dipole moment for extended periods of time after the perturbing field or laser pulse has been turned off. While decisive for the features observed in the final spectrum, the dynamics during the interaction with the laser pulse is rarely analyzed in detail. Processes that occur during the pulse, such as high-harmonic generation and ionization, are studied using TDCC theory in refs 31 and 34. Since energy is the physical quantity associated with time translations, textbook analyses of such interactions are naturally performed in terms of the populations of the energy eigenstates (the stationary states) of the field-free particle system; see, for example, ref 42. However, many-body theories such as time-dependent FCI (TDFCI), MCTDHF, and TDCC theories do not express the wave function as a superposition of stationary states, making the analysis difficult to perform in simulations. Moreover, when approximations are introduced (truncation of the many-body expansion), the stationary states are hard to define precisely for nonlinear parameterizations such as MCTDHF and TDCC theories. The problem is particularly pronounced for approximate methods where the orbitals are time-dependent, such that a different subspace of the full configuration space is spanned in each time step of a simulation, leading to energies and eigenvectors of the Hamiltonian matrix that vary depending on the laser pulse applied to the system. 43 This implies, for example, that identification of stationary-state energies by Fourier transformation of the post-pulse autocorrelation function leads to pulse-dependent results. Still, population transfer during interaction with laser pulses has been reported several times recently 44−46 using MCTDHF theory.
The natural approach would be to define the stationary states from the zero-field Hamiltonian and zero-field wave function using, for example, linear response theory 47 or orthogonality-constrained imaginary-time propagation. 48 The latter approach was investigated recently within the framework of MCTDHF theory by Lötstedt et al., 49 who found that the stationary-state populations oscillate even after the pulse is turned off unless a sufficiently large number of active orbitals is included in the wave function expansion. In this work, we use both CC linear response (CCLR) theory 19,50,51 and equation-of-motion CC (EOMCC) theory 52−55 to propose projectors whose expectation values yield stationary-state populations. Test simulations are presented with different laser pulses, and the TDCC results are compared with the exact (TDFCI) results.

The paper is organized as follows. In Section 2, we briefly outline exact quantum dynamics on the basis of energy eigenstates and use analogies to propose projectors whose expectation values can be interpreted as stationary-state populations within TDCC theory. Technical details of the numerical simulations are given in Section 3, and numerical results are presented and discussed in Section 4 for atoms and diatomic molecules in few-cycle laser pulses, including chirped pulses. Concluding remarks are given in Section 5.

THEORY

2.1. Recapitulation of Exact Quantum Dynamics. Laser-driven quantum dynamics of a particle system is usually interpreted in terms of stationary states |n⟩ defined as solutions of the time-independent Schrödinger equation

H_0 |n⟩ = E_n |n⟩   (1)

where H_0 is the time-independent Hamiltonian of the particle system and E_n is the energy of the stationary state |n⟩. The stationary states evolve in time according to

|n(t)⟩ = e^{−iE_n t} |n⟩   (2)

and are assumed to form a complete orthonormal set such that

Σ_n P_n(t) = Σ_n |n(t)⟩⟨n(t)| = 1   (3)

where P_n(t) = |n(t)⟩⟨n(t)| and 1 is the identity operator. Note that the continuum is formally included in the summation over states. The time evolution of the particle system is determined by the time-dependent Schrödinger equation [using atomic units (a.u.) throughout],

i |Ψ̇(t)⟩ = H(t) |Ψ(t)⟩   (4)

where |Ψ(t)⟩ is the normalized state of the system with a known initial value |Ψ_0⟩ and the dot denotes the time derivative. Within semiclassical radiation theory, the time dependence of the Hamiltonian,

H(t) = H_0 + V(t)   (5)

stems from the operator V(t) describing the interaction between the particle system and external electromagnetic fields. Expressing the time-dependent state as a superposition of stationary states,

|Ψ(t)⟩ = Σ_n C_n(t) |n(t)⟩   (6)

where C_n(t) = ⟨n(t)|Ψ(t)⟩, the time-dependent Schrödinger equation may be recast as the ordinary differential equation

i Ċ_n(t) = Σ_m ⟨n(t)| V(t) |m(t)⟩ C_m(t)   (7)

with the initial conditions C_n(0) = ⟨n|Ψ_0⟩. The population of stationary state |n⟩ at any time t ≥ 0 may be determined as the expectation value of the projection operator P_n,

p_n(t) = ⟨Ψ(t)| P_n |Ψ(t)⟩ = |C_n(t)|²   (8)

which is real and non-negative. Since the state is assumed normalized, ⟨Ψ(t)|Ψ(t)⟩ = 1, the stationary-state populations sum up to one, Σ_n p_n(t) = 1, and, hence, are bounded from above as well as from below: 0 ≤ p_n(t) ≤ 1. It follows from the time-dependent Schrödinger equation that the expectation value of some operator, say B, evolves according to the Ehrenfest theorem,

(d/dt) ⟨Ψ(t)| B |Ψ(t)⟩ = −i ⟨Ψ(t)| [B, H(t)] |Ψ(t)⟩ + ⟨Ψ(t)| ∂B/∂t |Ψ(t)⟩   (9)

where the last term vanishes when B is a time-independent operator such as a stationary-state projector. Hence, stationary-state populations are conserved in the absence of external forces, as the projectors commute with the time-independent Hamiltonian, [P_n, H_0] = 0.
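To make eqs 4−8 concrete, the following minimal Python sketch integrates the time-dependent Schrödinger equation in a truncated, field-free eigenbasis and records the populations of eq 8. The classical RK4 stepper is used for brevity in place of the symplectic Gauss–Legendre integrator employed in the paper; E (energies), D (dipole matrix), eps (field amplitude), and c0 are illustrative model inputs, not the paper's data.

import numpy as np

def propagate(E, D, eps, c0, t_final, dt):
    # TDSE in a truncated eigenbasis (atomic units): i dc/dt = (H0 - D*eps(t)) c.
    H0 = np.diag(E)
    c = c0.astype(complex)
    def rhs(t, c):
        return -1j * (H0 - D * eps(t)) @ c
    t, pops = 0.0, []
    while t < t_final:
        k1 = rhs(t, c); k2 = rhs(t + dt / 2, c + dt / 2 * k1)
        k3 = rhs(t + dt / 2, c + dt / 2 * k2); k4 = rhs(t + dt, c + dt * k3)
        c += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)   # classical RK4 step
        t += dt
        pops.append(np.abs(c) ** 2)                 # p_n(t) = |C_n(t)|^2 (eq 8)
    return np.array(pops)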
Stationary-state populations are required to identify transient phenomena such as Rabi oscillations in simulations and may be used to determine the composition of the quantum state resulting from the application of a short pump laser, facilitating interpretation of spectra recorded by means of a subsequent probe laser. The stationary-state populations can be controlled by varying pump-laser parameters such as peak intensity, shape, and duration. Predicting final populations with varying laser parameters thus becomes a central computational goal. Unfortunately, computing all stationary states of a particle system followed by integration of the time-dependent Schrödinger equation presents an insurmountable challenge, even if the number of stationary states is kept finite through the use of a finite set of basis functions. In practice, the quantum state is parameterized in a finite-dimensional Hilbert space spanned by well-defined basis vectors (Slater determinants, in the case of electronic systems) rather than stationary states. Populations of a few selected (low-lying) stationary states can then be computed as a function of time with varying pump-laser parameters. This procedure is easily implemented for any electronic-structure method with explicit parameterization of orthogonal ground- and excited-state wave functions and has been used recently by, for example, Peng et al. 56 within time-dependent configuration-interaction theory to predict populations of stationary electronic states of the rigid decacene molecule with varying laser parameters. In the following, we will present an approach to the calculation of stationary-state populations within the framework of TDCC theory.

2.2. TDCC State Vector. Our starting point is Arponen's time-dependent bivariational formulation of CC theory 20 within the clamped-nucleus Born–Oppenheimer approximation. This allows us to parameterize the CC ket and bra wave functions as independent approximations to the FCI wave function and its Hermitian conjugate. The quantum state of an atomic or molecular many-electron system at time t is then represented by the two-component TDCC state vector 29

|S(t)⟩⟩ = (|Ψ(t)⟩, ⟨Ψ̃(t)|)

where the component functions are defined by

|Ψ(t)⟩ = e^{τ_0(t)} e^{T(t)} |Φ_0⟩,  ⟨Ψ̃(t)| = e^{−τ_0(t)} ⟨Φ_0| (λ_0(t) + Λ(t)) e^{−T(t)}

Although the reference determinant |Φ_0⟩ should be constructed from time-dependent orthonormal 16,31,57 or biorthonormal 23,30,58 orbitals to capture the main effects of interactions between the electrons and external fields, we shall in this work use the static Hartree–Fock (HF) ground-state determinant for simplicity. As long as the external field does not lead to nearly complete depletion of the ground state, we found in ref 30 that the results obtained with static and dynamic orbitals are virtually identical. In addition, using static HF reference orbitals allows us to exploit well-known CC theories for excited states, as discussed in more detail below, and we avoid the complexity of computing overlaps between determinants in different nonorthogonal orbital bases. The cluster operators are defined as

T(t) = Σ_{μ>0} τ_μ(t) X_μ,  Λ(t) = Σ_{μ>0} λ_μ(t) Y_μ

where μ > 0 labels excitations out of the reference determinant, i.e.,

|Φ_μ⟩ = X_μ |Φ_0⟩,  ⟨Φ_μ| = ⟨Φ_0| Y_μ

If the cluster operators include all excited determinants, the CC state becomes equivalent to the exact wave function, the FCI wave function. Approximations are obtained by truncating the cluster operators after singles to give the CCS method, after singles and doubles to give the CC singles-and-doubles (CCSD) model, and so on.
Since the HF reference determinant is static, the time dependence of the cluster operators is carried by the amplitudes τ_μ(t) and λ_μ(t) only. The amplitude τ_0(t) is a phase parameter related to the so-called quasi-energy, 59,60 and λ_0(t) determines the normalization of the state, as discussed in more detail below. In the closed-shell spin-restricted CCSD model, the singles and doubles excitation and de-excitation operators are defined in terms of the unitary group generators 61

E_{ai} = a†_{aα} a_{iα} + a†_{aβ} a_{iβ}

where i, j and a, b refer to occupied and virtual spatial HF orbitals, respectively, and a†_{pσ} and a_{pσ} are the elementary second-quantization spin-orbital creation and annihilation operators (α and β here refer to the spin-up and spin-down states). The equations of motion for the amplitudes are derived from the time-dependent bivariational principle and are given by 20,29

i τ̇_μ(t) = ⟨Φ_μ| H̄(t) |Φ_0⟩,  μ ≥ 0   (18)
−i λ̇_μ(t) = ⟨Φ_0| (λ_0 + Λ(t)) [H̄(t), X_μ] |Φ_0⟩,  μ > 0   (19)

where the dot denotes the time derivative and

H̄(t) = e^{−T(t)} H(t) e^{T(t)},  H(t) = H_0 + V(t)

While H_0 is the time-independent molecular electronic Hamiltonian in the clamped-nucleus Born–Oppenheimer approximation, V(t) describes the interaction of the electrons with explicitly time-dependent external fields in the semiclassical approximation. Note that the normalization amplitude λ_0 is constant. The equations of motion (18) and (19) must be integrated with suitable initial conditions. In this work, we use the CC ground state,

|Ψ(0)⟩ = e^{T_0} |Φ_0⟩,  ⟨Ψ̃(0)| = ⟨Φ_0| (1 + Λ_0) e^{−T_0}

The ground-state amplitudes satisfy the stationary CC equations

⟨Φ_μ| e^{−T_0} H_0 e^{T_0} |Φ_0⟩ = 0,  ⟨Φ_0| (1 + Λ_0) [e^{−T_0} H_0 e^{T_0}, X_μ] |Φ_0⟩ = 0   (25)

and τ_0(t = 0) = 0 such that, in the absence of external perturbations, the time dependence of the TDCC state vector correctly becomes |S(t)⟩⟩ = |S_0⟩⟩ exp(−iE_0 t), where E_0 is the CC ground-state energy with respect to which the TDCC state vector is normalized, i.e., ⟨⟨S(t)|S(t)⟩⟩ = 1 provided we choose Re(λ_0) = 1. In practice, we choose λ_0 = 1. The indefinite inner product induces the expectation value expression 29

⟨⟨S(t)| Ĉ |S(t)⟩⟩ = ½ (⟨Ψ̃(t)| C |Ψ(t)⟩ + ⟨Ψ̃(t)| C† |Ψ(t)⟩*)   (29)

where the two-component form Ĉ of the quantum mechanical operator C applies C to the ket component and C† to the bra component (eq 30). While the expectation value of an anti-Hermitian operator C† = −C is imaginary, the expectation value of a Hermitian operator C† = C is real. This symmetrized form of the CC expectation value was first introduced by Pedersen and Koch 62 in order to ensure correct symmetries, including time-reversal symmetry, of CC response functions. Using the expectation value expression to compute the electric dipole moment induced by an external laser field, absorption spectra can be obtained by Fourier transformation. Given a set of orthonormal excited-state vectors |E_n⟩⟩ which are orthogonal to the ground state with respect to the indefinite inner product, we may define the projection operator

P̂_n = |E_n⟩⟩⟨⟨E_n|   (31)

and compute the population p_n(t) of excited state n at time t as the expectation value

p_n(t) = ⟨⟨S(t)| P̂_n |S(t)⟩⟩   (32)

This would provide a time-resolved picture of the populations of excited states within TDCC theory. Unfortunately, a fully consistent set of CC excited-state vectors is not known. There are two distinct approaches to excited states in common use within CC theory today. 63 One is the EOMCC 52−55 approach, where the excited states are parameterized explicitly in terms of linear excitation and de-excitation operators, which generate the excited-state vectors from the ground state. While making it straightforward to express the projection operator in eq 31, the linear Ansatz of EOMCC theory leads to size-intensivity issues in transition moments and response properties such as polarizabilities. 64−66
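As a concrete illustration of eq 29, the following toy sketch evaluates the symmetrized expectation value for explicit bra/ket vectors; all arrays are random stand-ins rather than actual CC quantities.

import numpy as np

rng = np.random.default_rng(2)
dim = 4
bra = rng.random(dim) + 1j * rng.random(dim)            # stand-in for <Psi~(t)|
ket = rng.random(dim) + 1j * rng.random(dim)            # stand-in for |Psi(t)>
C = rng.random((dim, dim)) + 1j * rng.random((dim, dim))
H = (C + C.conj().T) / 2                                # a Hermitian operator

def expval(bra, op, ket):
    # Symmetrized CC expectation value (eq 29): (<~|C|> + <~|C^dag|>*) / 2.
    return 0.5 * (bra @ op @ ket + np.conj(bra @ op.conj().T @ ket))

print(np.isclose(expval(bra, H, ket).imag, 0.0))        # Hermitian C gives a real value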
The other approach is CC response theory, 19,50,51 where the time-dependent Schrödinger equation is solved in the frequency domain using a combination of Fourier transformation and adiabatic perturbation theory. The amplitudes at each order are then used to express linear, quadratic, and higher-order response functions. This leads to the identification of excitation energies and one- and multiphoton transition moments by analogy with response theory for exact stationary states. 47 It does not, however, lead to explicit expressions for the excited-state vectors, making it impossible to construct projection operators of the form (31). While the excitation energies from EOMCC theory and CC response theory are identical and properly size-intensive, the transition moments differ, and no size-extensivity issues are present in CC response theory. 64,65 We note in passing that Hansen et al. 36 offer a detailed discussion of size-extensivity (and, by extension, size-intensivity) and size-consistency issues within CC theory using the more concise and accessible concepts of additive and multiplicative separability.

2.4. Projection Operators from EOMCC Theory. Exploiting that EOMCC theory provides an explicit parameterization of "left" (bra) and "right" (ket) excited states, we investigate the EOMCC projector defined as

P̂_n = |Ψ_n⟩⟨Ψ̃_n|   (33)

where

|Ψ_n⟩ = R_n e^{T_0} |Φ_0⟩,  ⟨Ψ̃_n| = ⟨Φ_0| L_n e^{−T_0}   (34)

and the linear excitation and de-excitation operators are given by

R_n = r_{n0} + Σ_{μ>0} r_{nμ} X_μ,  L_n = Σ_{μ>0} l_{nμ} Y_μ

The EOMCC excitation and de-excitation operators are truncated at the same level as the underlying CC ground-state cluster operators T_0 and Λ_0. The ground-state component r_{n0} is fixed by the requirement that the excited states be orthogonal to the CC ground state with respect to the indefinite inner product. The EOMCC amplitudes are determined from the non-Hermitian eigenvalue problem

A R = R ΔE,  L A = ΔE L,  L R = 1   (37)

where 1 is the unit matrix and ΔE is a diagonal matrix with the excitation energies ΔE_n = E_n − E_0 as elements. The elements of the non-Hermitian Jacobian matrix are defined by

A_{μν} = ⟨Φ_μ| e^{−T_0} [H_0, X_ν] e^{T_0} |Φ_0⟩   (38)

Note that the components of the EOMCC excited-state vector are biorthonormal,

⟨Ψ̃_n|Ψ_m⟩ = δ_{nm}   (39)

and orthogonal to the CC ground state in the sense of

⟨Ψ̃_n|Ψ_0⟩ = ⟨Ψ̃_0|Ψ_n⟩ = 0   (40)

This implies that the EOMCC projectors in eq 33 are Hermitian with respect to the indefinite inner product, annihilate the ground state, P̂_n|S_0⟩⟩ = 0, and are idempotent and orthogonal,

P̂_n P̂_m = δ_{nm} P̂_n   (41)

In the limit of untruncated cluster operators, it is readily verified, using the orthonormality of the left and right eigenvectors in eq 37, that the EOMCC projectors satisfy the completeness relation

Σ_n P̂_n = 1   (42)

If the cluster operators are truncated, the right-hand side must be corrected for excited determinants beyond the CC truncation level (e.g., triples, quadruples, etc. for EOMCCSD). The one-photon transition strength obtained from the EOMCC projector is given by

S_n^{BC} = ⟨⟨S_0| B̂ P̂_n Ĉ |S_0⟩⟩   (43)

where B and C are Hermitian operators representing electric or magnetic multipole moments and B̂ and Ĉ are their two-component forms defined in eq 30. The transition strength is properly symmetrized with respect to simultaneous permutation of the multipole operators and complex conjugation, ⟨⟨S_0|Ĉ P̂_n B̂|S_0⟩⟩* = ⟨⟨S_0|B̂ P̂_n Ĉ|S_0⟩⟩, and agrees with the commonly used expression in EOMCC theory, which is based on a configuration-interaction-like interpretation of the bra and ket states. This expression yields the correct FCI limit but, as mentioned above, the transition strength is not properly size-intensive when the cluster operators are truncated. 64−66 We may now use eq 33 to extract excited-state populations from the TDCC state vector according to eq 32,

p_n(t) = ⟨⟨S(t)| P̂_n |S(t)⟩⟩   (44)

While the EOMCC excited-state populations are manifestly real, they are neither bounded above by 1 nor below by 0. Lack of proper bounds is common in CC theory, but problems are rarely experienced in practical calculations as long as the CC state vector is a sufficiently good approximation to the FCI wave function. This, in turn, is related to the distance in Hilbert space between the reference determinant and the FCI wave function, as discussed in more detail by Kristiansen et al. 30 for TDCC theory.
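Eqs 37−44 can be illustrated with a small non-Hermitian toy problem. The sketch below demonstrates only the biorthonormal left/right eigenvector structure and the resulting population expression; the random matrix is a stand-in for the CC Jacobian, not actual EOMCC data.

import numpy as np

rng = np.random.default_rng(1)
dim = 6
A = rng.random((dim, dim))              # stand-in for the non-Hermitian Jacobian
evals, R = np.linalg.eig(A)             # columns of R: right eigenvectors
L = np.linalg.inv(R).T                  # columns of L: biorthonormal left eigenvectors
assert np.allclose(L.T @ R, np.eye(dim), atol=1e-10)   # L_m R_n = delta_mn (eq 37)

def population(bra_t, ket_t, n):
    # Toy analogue of eq 44: p_n = <bra(t)|Psi_n><Psi~_n|ket(t)>.
    # In the physical theory this is real by construction; the toy simply
    # takes the real part.
    return np.real((bra_t @ R[:, n]) * (L[:, n] @ ket_t))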
Consistent with this analysis, we compute the ground-state population via the ground-state projector

P̂_0 = |S_0⟩⟩⟨⟨S_0|   (46)

2.5. Projection Operators from CC Linear Response Theory. In lieu of explicitly defined excited states in CC response theory, we investigate the CCLR projector

P̂_n = |Ψ_n⟩ (⟨Ψ̃_n| + ⟨Ψ̆_n|)   (47)

where the functions |Ψ_n⟩ and ⟨Ψ̃_n| are defined in eq 34 and the correction term is expanded in the excited determinants,

⟨Ψ̆_n| = Σ_{μ>0} n̄_μ ⟨Φ_μ| e^{−T_0}   (49)

The amplitudes n̄_μ are determined by the linear equations, 51

n̄^T (A − ΔE_n 1) = −r_n^T F

where the superscript T denotes matrix transposition, 1 is the unit matrix, A is the CC Jacobian matrix defined in eq 38, ΔE_n is the eigenvalue (excitation energy) corresponding to the right eigenvector r_n (cf. eq 37), and the symmetric matrix F is defined by

F_{μν} = ⟨Φ_0| (1 + Λ_0) [[e^{−T_0} H_0 e^{T_0}, X_μ], X_ν] |Φ_0⟩

The main arguments in favor of the CCLR projector (47) are as follows: 1. it becomes identical to the EOMCC projector in the FCI limit, and 2. it yields the correct size-intensive CCLR ground-to-excited-state transition strength. We demonstrate in the Appendix that ⟨Ψ̆_n| = 0 in the FCI limit, implying that the CCLR projector (47) becomes identical to the EOMCC projector (33) in this limit. The CCLR projector (47) gives the correct transition strength: to make the equivalence evident, we note that, in the notation used by Christiansen et al. 51 for the "right" (54) and "left" (55) one-photon transition moments, the transition strength may be recast as the product of the left and right moments (56), which is identical to the expression obtained as a residue of the CCLR function. The correction terms ⟨Ψ̆_n| only vanish in the FCI limit. They do, however, project onto the orthogonal complement of the CC ground state, P̂_n|S_0⟩⟩ = 0, although the CCLR projectors are not mutually orthogonal,

P̂_n P̂_m ≠ δ_{nm} P̂_n   (57)

Excited-state populations extracted by the CCLR projectors according to eq 32 become

p_n(t) = ⟨⟨S(t)| P̂_n |S(t)⟩⟩   (60)

While real, the CCLR population is neither bounded below by 0 nor above by 1. The ground-state population is given by eq 46.

2.6. Conservation Laws. As noted in Section 2.1, exact stationary-state populations are conserved in time intervals where no external forces act on the particle system. Conservation laws in the framework of TDCC theory have been discussed at various levels of detail previously. 21,29,35,38,67,68 To this end, we recast the TDCC equations of motion in the Hamiltonian form 29

i τ̇_μ = ∂ℋ/∂λ_μ,  −i λ̇_μ = ∂ℋ/∂τ_μ   (61)

where the Hamiltonian function ℋ(τ, λ, t) is defined by

ℋ(τ, λ, t) = ⟨Ψ̃(t)| H(t) |Ψ(t)⟩

Introducing the Poisson-like bracket

{f, g} = Σ_μ [(∂f/∂τ_μ)(∂g/∂λ_μ) − (∂f/∂λ_μ)(∂g/∂τ_μ)]

for any analytic functions f = f(τ, λ, t) and g = g(τ, λ, t), the relation

df/dt = −i {f, ℋ} + ∂f/∂t   (64)

is readily obtained from the Hamiltonian eq 61. Choosing f = ℋ gives dℋ/dt = ∂ℋ/∂t, which shows that energy is conserved whenever the Hamiltonian operator is constant in time, including before and after the application of external forces such as laser pulses. Note that this is true regardless of the truncation of the cluster operators and regardless of the initial conditions of the amplitudes. This observation was used in ref 29 to propose symplectic numerical integration as a stable method for solving the TDCC equations of motion. The time evolution of the TDCC expectation value in eq 29 is obtained by choosing f = ⟨Ψ̃|C|Ψ⟩.
Using a derivation analogous to that of Skeidsvoll et al., 38 we find that the time evolution of the TDCC expectation value is

(d/dt) ⟨⟨S| Ĉ |S⟩⟩ = −i ⟨⟨S| [Ĉ, Ĥ(t)]_∥ |S⟩⟩ + ⟨⟨S| ∂Ĉ/∂t |S⟩⟩   (69)

where we have introduced the projected commutator

[C, H]_∥ = C P_∥ H − H P_∥ C

with

P_∥ = |Φ_0⟩⟨Φ_0| + Σ_{μ>0} |Φ_μ⟩⟨Φ_μ|

Here, the summation over excited determinants is truncated at the same level as the cluster operators. This is not quite the form of a generalized Ehrenfest theorem since, in general, [C, H]_∥ ≠ [C, H]. Consequently, constants of motion in exact quantum dynamics are not necessarily conserved in truncated TDCC theory. In the FCI limit, however, P_∥ = 1 and the Ehrenfest theorem is recovered from eq 69. The EOMCC stationary-state population, eq 44, is of the form (29) with C = |Ψ_n⟩⟨Ψ̃_n|, and the time evolution, therefore, is given by eq 69. The proposed EOMCC projector thus breaks the conservation law of stationary-state populations in truncated TDCC simulations, although we note that it is properly restored in the FCI limit, since C = |Ψ_n⟩⟨Ψ̃_n| commutes with H_0 according to eqs 81 and 82 in the Appendix. It seems reasonable to expect that the conservation law is approximately fulfilled whenever the many-electron dynamics predominantly involves stationary states that are well approximated within (truncated) EOMCC theory. The CCLR stationary-state population, eq 60, is not of the form (29). Instead, the time evolution is given by eq 64 with f = ⟨⟨S(t)|P̂_n|S(t)⟩⟩, which depends on time only through the amplitudes (∂f/∂t = 0). Hence, the CCLR projector also breaks the conservation law of stationary-state populations in truncated TDCC simulations. It is restored in the FCI limit, where the CCLR and EOMCC projectors are identical. Again, it seems reasonable to expect that the conservation law is approximately fulfilled whenever CCLR theory provides a sufficiently good approximation to the FCI states.

COMPUTATIONAL DETAILS

Explicitly time-dependent simulations are performed with a closed-shell spin-restricted TDCCSD Python code generated using a locally modified version of the Drudge/Gristmill suite for symbolic tensor algebra developed by Zhao and Scuseria. 69 The static HF reference orbitals and Hamiltonian integrals are computed with the Dalton quantum chemistry program, 70,71 along with the response vectors required for the EOMCC and CCLR projectors, using the implementations described in refs 72−75. Tight convergence criteria are employed: 10⁻¹⁰ a.u. for the HF orbital-gradient norm (implying machine precision for the HF ground-state energy) and 10⁻⁸ a.u. for the CCSD residual norms (both ground-state and response equations). Dunning's correlation-consistent basis sets, 76−78 downloaded from the Basis Set Exchange, 79 are used throughout. We also perform TDFCI simulations using the contraction routines implemented in the PySCF package. 80 The TDCCSD and TDFCI equations of motion are integrated using the symplectic Gauss–Legendre integrator 81 as described in ref 29.

To test the proposed EOMCC and CCLR projectors, TDCCSD and TDFCI simulations are carried out for the He and Be atoms placed at the coordinate origin. Further tests are performed for the LiH molecule placed on the z-axis with the Li atom at the origin and the H atom at z = 3.08 a.u., and for the CH+ ion placed on the z-axis with the C atom at the origin and the H atom at z = 2.13713 a.u. Finally, we study the time evolution of stationary-state populations during the optical pump pulse applied by Skeidsvoll et al. 38 to investigate transient X-ray spectroscopy of LiF.
As in ref 38, we place the LiF molecule on the z-axis with the F atom at the origin and the Li atom at z = −2.9552749018 a.u. All electrons are correlated, and point-group symmetry is not exploited in these simulations. We assume the systems are initially in the ground state and expose them to a laser pulse described by the semiclassical interaction operator in the electric dipole approximation,

V(t) = −ε(t) u · d

where d is the electric dipole operator of the electrons, u is the real unit polarization vector of the electric field, and ε(t) is the time-dependent electric-field amplitude. Two forms of the electric-field amplitude are used in this work. One is the sinusoidal pulse

ε(t) = ε_0 sin(ω_0(t − t_0) + ϕ(t)) G(t)   (72)

where ε_0 is the field strength, ω_0 is the carrier frequency of the pulse, and t_0 is the time at which the pulse is turned on. The time-dependent phase ϕ(t) may effectively alter the instantaneous carrier frequency, creating a chirped laser pulse, if it depends on time at least quadratically. 82 In this work, we use the quadratic form

ϕ(t) = a(t − t_0) + (b/2)(t − t_0)²

which, for b ≠ 0, creates a linearly chirped laser pulse with instantaneous frequency ω(t) = ω_0 + a + b(t − t_0). The envelope G(t) controls the shape and duration of the pulse; through the Heaviside step functions Θ(t − t_0) and Θ(t_0 + t_d − t) it vanishes outside the interval [t_0, t_0 + t_d], where t_d is the duration of the pulse and Θ(t) is the Heaviside step function. The second form of the electric-field amplitude used in this work is the Gaussian pulse, with carrier frequency ω_0 and the envelope

G(t) = exp(−(t − t_0)²/(2σ²)) Θ(t − (t_0 − Nσ)) Θ((t_0 + Nσ) − t)

where t_0 is the central time of the pulse, σ is the Gaussian root-mean-square (rms) width, and N defines the start time (t_0 − Nσ) and end time (t_0 + Nσ) of the pulse through the Heaviside step functions. Note that N thus introduces discontinuities of the Gaussian pulse at each end. Unless a sudden disturbance of the system is intended, one must choose N large enough that the discontinuities are negligible. 29,30 Dynamic orbitals, such as those of orbital-adaptive TDCC theory, 23 are required for a numerically stable integration of the equations of motion when the ground state is nearly depleted. 30 Consequently, in this work, we focus on Rabi oscillations between excited states. Rabi oscillations between two excited states can be achieved by the application of two consecutive laser pulses, the first of which is resonant with a dipole-allowed transition from the ground state, while the second is resonant with a dipole-allowed transition between the resulting excited state and another one. The intensity and duration of the first pulse must be such that the ground-state population is significantly reduced but not entirely depleted. Nonlinear optical processes are thus involved, making this an ideal test case for the CCLR and EOMCC projectors, which are constructed on the basis of first-order perturbative arguments (first-order perturbation theory in the case of CCLR and linearization of the cluster exponential in the case of EOMCC) and cannot necessarily be expected to correctly capture higher-order optical processes. In particular, transition moments between excited states are quadratic response properties, which cannot be expressed solely in terms of linear response parameters. 51 It is, therefore, important to test whether the proposed projectors correctly capture the effects of nonlinear optical processes.
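A sketch of the two pulse forms follows. The sin² shape of the sinusoidal envelope and the cosine carrier of the Gaussian pulse are assumptions made for illustration; the paper specifies only the Heaviside windows, the Gaussian envelope, and the quadratic phase.

import numpy as np

def sinusoidal_pulse(t, eps0, omega0, t0, td, a=0.0, b=0.0):
    # Quadratic phase (chirp): instantaneous frequency omega0 + a + b*(t - t0).
    phi = (a + 0.5 * b * (t - t0)) * (t - t0)
    # sin^2 envelope is an assumed shape; the Heaviside window follows the text.
    G = np.sin(np.pi * (t - t0) / td) ** 2 \
        * np.heaviside(t - t0, 0) * np.heaviside(t0 + td - t, 0)
    return eps0 * np.sin(omega0 * (t - t0) + phi) * G

def gaussian_pulse(t, eps0, omega0, t0, sigma, N):
    # Gaussian envelope truncated at t0 +/- N*sigma, as described in the text.
    G = np.exp(-(t - t0) ** 2 / (2 * sigma ** 2)) \
        * np.heaviside(t - (t0 - N * sigma), 0) \
        * np.heaviside((t0 + N * sigma) - t, 0)
    return eps0 * np.cos(omega0 * (t - t0)) * G   # carrier form assumed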
RESULTS

Results for the He atom with the aug-cc-pVTZ basis set are presented in Figure 1. The integration of the TDCCSD equations of motion was performed with time step Δt = 0.1 a.u. = 2.42 as, using the eighth-order (s = 4) Gauss–Legendre integrator and a convergence threshold of 10⁻¹⁰ (residual norm) for the fixed-point iterations. The ground- and excited-state energy levels shown in the left panel of Figure 1 were computed using CCSD linear response theory. In total, 14 excited states were computed, several of which lie above the ionization energy, which is estimated to be 0.902 a.u. using the total ground-state energy difference between the neutral and ionized atoms at the (spin-unrestricted) CCSD/aug-cc-pVTZ level of theory. 83 Although the states above the ionization energy are unphysical, we keep them for the purpose of comparing with regular TDFCI simulations in the same basis set. The right panel of Figure 1 shows the total energy-level populations computed using the CCLR projector as a function of time during the application of two consecutive laser pulses. Energy levels that are never populated above 0.01 are excluded. The level populations are computed by summing up the populations of all states belonging to each energy level, thus avoiding ambiguities arising from the arbitrariness of the basis of a degenerate subspace. As expected, the first laser pulse causes significant population of the 2 ¹P energy level. The high-lying 4 ¹D level, which is dominated by the 1s¹3d¹ electron configuration and located 0.972 a.u. above the 2 ¹P level, also becomes populated toward the end of the first pulse because of the dipole-allowed transition 2 ¹P → 4 ¹D. The length-gauge oscillator strength of this transition is f = 0.484, compared with f = 0.355 for the 0 ¹S → 2 ¹P transition. The population of the 4 ¹D level is still modest, however, since the transition can only occur once the 2 ¹P level is significantly populated. The second pulse induces several cycles of Rabi oscillations between the 2 ¹P and 1 ¹S levels. The Rabi oscillations are slightly perturbed by weak transitions between the 2 ¹P level and the 0 ¹S ground state, as witnessed by the increasing oscillation of the ground-state population when the population of the 2 ¹P level is close to its maximum value. We also observe an even weaker perturbation caused by the higher-lying 4 ¹D level. The CCLR populations agree both with the TDFCI populations and with the EOMCC populations: the rms deviation for the entire simulation is 10⁻³ between the CCLR and TDFCI populations and 3 × 10⁻⁷ between the CCLR and EOMCC populations. We have previously demonstrated that discrepancies with TDFCI simulations for the He atom can be reduced by tightening the computational parameters such as convergence thresholds and, most importantly, by reducing the time step of the numerical integration. 30 We thus ascribe the small discrepancies between the CCLR and TDFCI populations to the rather coarse discretization (Δt = 0.1 a.u.) employed in the numerical integration scheme and conclude that the proposed CCLR and EOMCC projectors behave correctly in the FCI limit.

It is of interest to compare the TDCCSD simulation with a much simpler model based on an eigenstate expansion propagated according to eq 7. Letting |n⟩, n = 0, 1, 2, ..., 14, represent the stationary states computed with CCSD linear response theory, all we need to integrate eq 7 in the presence of external laser pulses of the form (72) is the dipole matrix in the energy eigenbasis. This model is essentially identical to that employed by Sonk et al., 84 except that we use CCSD linear and quadratic response theories rather than EOMCCSD theory to build the dipole matrix. (Note, however, that the CCSD response and EOMCCSD approaches yield identical results for He.) It is also similar to the EOMCCSD model employed by Luppi and Head-Gordon, 85 who propagated both bra and ket states. The only obstacle is that the "right" and "left" transition moments from CC response theory, cf. eqs 54 and 55, are not related by complex conjugation, yielding a spurious non-Hermiticity of the dipole matrix. Sonk et al. 84 and Luppi and Head-Gordon 85 circumvented this issue by using only the Hermitian part of the matrix. In the present case, due to symmetry, the CCSD electric-dipole transition moments are either zero or may be chosen parallel to one of the three Cartesian axes, making it easy to define the off-diagonal dipole matrix elements as the negative square root of the dipole transition strength (cf. eq 56 with B = C a Cartesian component of the position operator). With the dipole matrix thus constructed for the 15 states, we have integrated eq 7 with the initial condition C_n(t = 0) = δ_{n0} and the same consecutive laser pulses as in the He/aug-cc-pVTZ TDCCSD simulations. The Gauss–Legendre integrator was used with the same parameters and time step. The rms population deviation between the model and the full TDCCSD simulations is about twice that between the TDFCI and TDCCSD simulations. The maximum absolute deviation is an order of magnitude greater (0.02 vs 0.002), however, indicating that a potentially large number of states, including states above the ionization energy, may be required for the simple model to agree quantitatively with full TDCCSD simulations in general.

Figure 1. Energy-level populations computed with the aug-cc-pVTZ basis set through CCLR projectors, plotted as functions of time for He exposed to two consecutive laser pulses, the first resonant with the 0 ¹S → 2 ¹P transition and the second resonant with the 2 ¹P → 1 ¹S transition, with peak intensity 87.7 TW/cm². The black curves show the populations during the TDFCI simulation.
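The dipole-matrix construction for the eigenstate-expansion model can be sketched as follows; S is an illustrative matrix of transition strengths, and the diagonal (permanent) moments are placeholders, not the paper's data.

import numpy as np

def dipole_matrix(S, permanent_moments):
    # Off-diagonal elements: negative square root of the transition strength,
    # as described in the text; clipping guards the sketch against the small
    # negative strengths that truncated CC theory can produce.
    D = -np.sqrt(np.clip(S, 0.0, None))
    np.fill_diagonal(D, permanent_moments)
    return D

# Example with illustrative numbers (not the He/aug-cc-pVTZ data):
S = np.array([[0.0, 0.4], [0.4, 0.0]])
D = dipole_matrix(S, permanent_moments=[0.0, 0.0])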
Moving away from the FCI limit, we repeat the study of excited-state Rabi oscillations for the Be atom with the aug-cc-pVDZ basis set. The integration parameters were the same as those used for He above. The ground- and excited-state energy levels shown in the left panel of Figure 2 were computed using CCSD linear response theory. In total, 21 excited states were computed, and the CCSD excitation energies agree with those of FCI theory to within 3 × 10⁻⁴ a.u. Several of the computed excited states lie above the ionization energy, which is estimated to be 0.341 a.u. using the total ground-state energy difference between the neutral and ionized atoms at the (spin-unrestricted) CCSD/aug-cc-pVDZ level of theory. 83 The high-lying excited states are retained in order to compare with regular TDFCI simulations in the same basis set. The right panel of Figure 2 shows the total energy-level populations computed using the CCLR projector as a function of time during the application of the two consecutive laser pulses. Energy levels that are never populated above 0.01 are excluded. The first laser pulse causes significant population of the 1 ¹P energy level by excitation from the ground state (oscillator strength f = 0.478), although the transition is quenched by further excitation from the 1 ¹P level to the high-lying 7 ¹D level (f = 0.685). The 7 ¹D level, which is dominated by the 1s²2s¹3d¹ electron configuration, is located 0.195 a.u. above the 1 ¹P level, and hence the 1 ¹P → 7 ¹D transition is nearly resonant with the first laser.
Consequently, the 7 ¹D level population increases once sufficient population of the 1 ¹P level is achieved toward the end of the first pulse. The second pulse induces a single-cycle Rabi oscillation between the 1 ¹P and 2 ¹S levels (f = 0.118), which is quite significantly perturbed by the transition between the 2 ¹S and 4 ¹P levels (f = 0.211). The population of the 4 ¹P level drops to zero as the Rabi oscillation enters the final stage, where the population of the 2 ¹S level decreases. The CCLR populations are in close agreement with the TDFCI populations and with the EOMCC populations: the rms deviation for the entire simulation is 7.4 × 10⁻³ between the CCLR and TDFCI populations and 9.1 × 10⁻⁵ between the CCLR and EOMCC populations.

Figure 2. Energy-level populations computed with the aug-cc-pVDZ basis set through CCLR projectors, plotted as functions of time for Be exposed to two consecutive laser pulses, the first resonant with the 0 ¹S → 1 ¹P transition and the second resonant with the 1 ¹P → 2 ¹S transition, with peak intensity 0.877 TW/cm².

Increasing the basis set to aug-cc-pVTZ, the CCSD levels, the higher-lying ones in particular, move down in energy, as seen by comparing the left panel of Figure 3 with that of Figure 2. The right panel of Figure 3 shows the variation of the level populations as the Be atom is exposed to the same sinusoidal laser pulses as in Figure 2, albeit with the carrier frequencies adjusted to match the 0 ¹S → 1 ¹P and 1 ¹P → 2 ¹S transitions at ω_0 = 0.195 a.u. and ω_0 = 0.0539 a.u., respectively. The duration is 10 optical cycles for each pulse, as above. The populations obtained from the CCLR and EOMCC projectors are virtually identical, with an overall rms deviation of 1.6 × 10⁻⁴. The lowering of the 7 ¹D level, which is now 0.162 a.u. above the 1 ¹P level, implies that the probability of the 1 ¹P → 7 ¹D transition (f = 0.648) diminishes, resulting in a very low population of the 7 ¹D level and an increased population of the 1 ¹P level (compared with the aug-cc-pVDZ simulation) during the first pulse. Although the populations of the states involved thus are different, the perturbed Rabi oscillation induced by the second pulse is essentially the same as in Figure 2. While a full TDFCI simulation is too costly with the aug-cc-pVTZ basis set, we compare with the much simpler model introduced for He above. We use the same consecutive laser pulses and the Gauss–Legendre integrator with the same parameters as in the Be/aug-cc-pVTZ TDCCSD simulation for the model simulation with 22 states included. The resulting energy-level populations, plotted as dotted lines in Figure 3, are remarkably similar to the full TDCCSD results. The maximum absolute deviations between the model and TDCCSD populations are just 15% for the 1 ¹P level, 2% for the 2 ¹S and 7 ¹D levels, and below 1% for the remaining levels, including the ground state. Such good results can only be expected from the simple model when all, or very nearly all, participating CCSD states are included. How many states are needed will in general be very hard to estimate a priori.

4.2. Control by Chirped Laser Pulses. Control by shaped laser pulses is an important challenge to theoretical simulations and requires information about the populations of energy levels.
For the LiH molecule described with the aug-cc-pVDZ basis set, which is small enough to allow TDFCI reference simulations, we use chirped sinusoidal laser pulses to further test the proposed CCLR and EOMCC projectors within TDCCSD simulations. The laser pulses are polarized along the x-axis, perpendicular to the molecular axis, and the duration is kept fixed at t_d = 378.4 a.u. = 9.152 fs, corresponding to 10 optical cycles of radiation resonant with the lowest-lying electric-dipole-allowed transition from the ground state, the x-polarized ¹Σ⁺ → ¹Π transition at ω_0 = 0.166 a.u. The laser pulses are turned on at t_0 = 0 a.u. with peak intensity 3.51 TW/cm² (ε_0 = 0.01 a.u.). The oscillator strength of this transition is estimated to be f = 0.208 by CCSD linear response theory with the aug-cc-pVDZ basis set. The phase of the laser pulse is defined such that the instantaneous frequency is

ω(t) = ω_0 + b(t − t_0 − t_d/2)   (78)

and the chirp rate b is varied between −0.513 and +0.513 fs⁻². A few such laser pulses with different chirp rates are shown in Figure 4. The 31 lowest-lying states, organized into 21 energy levels in the left panel of Figure 5, were computed with CCSD linear response theory (aug-cc-pVDZ basis) and used to construct the CCLR and EOMCC projectors for the simulations. The highest-lying energy level is 0.603 a.u. above the ground state, well beyond the ionization energy, estimated from the CCSD/aug-cc-pVDZ total-energy difference to be 0.281 a.u. 83 The most important energy levels in the simulations are marked by their term symbols. These states are all predominantly single-excited states, with at least 90% contribution from singles in the EOMCC excitation amplitudes. While the ¹Π level at 0.166 a.u. is well below the estimated ionization energy, the ¹Δ and excited ¹Σ⁺ levels at 0.291 and 0.312 a.u., respectively, are slightly above it. With x-polarized laser pulses, one-photon transitions from the ground state to the ¹Δ and ¹Σ⁺ levels are electric-dipole forbidden, implying that these excited levels can only become populated by nonlinear optical processes. The final populations of these levels, computed immediately after the interaction with the chirped sinusoidal laser pulse, are shown in the right panel of Figure 5 along with a few reference TDFCI results. The TDCCSD (and TDFCI) equations of motion were integrated using the sixth-order (s = 3) Gauss–Legendre integrator with time step Δt = 0.1 a.u. and convergence threshold 10⁻⁶. The sum of the populations of the remaining 27 energy levels is labelled "Rest" and is seen to be insignificant for all but the most up- or down-chirped pulses. At b = 0 fs⁻², the pulse is resonant with the ground-state ¹Σ⁺ → ¹Π transition and, therefore, other levels are hardly populated. The maximum population of the ¹Π level is observed for a slightly up-chirped pulse (at b = 0.023 fs⁻²), which prevents further excitation from the ¹Π level to the higher-lying ¹Σ⁺ and ¹Δ levels. As the chirp rate increases, the laser pulse becomes increasingly off-resonant, and the ground-state population increases. The populations of the excited ¹Σ⁺ level and, in particular, of the ¹Δ level increase with moderately down-chirped pulses because of transitions from the ¹Π level, whose probability increases as the laser frequency decreases. At b = −0.102 fs⁻², the population of the ¹Π level is just 9.3 × 10⁻⁴, while the excited ¹Σ⁺ and ¹Δ populations are close to their maximum values.
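Combining the pulse and propagation sketches above, a chirp scan might look as follows. The three-level model (E, D, c0) is a stand-in for the CCSD response data of LiH, not the actual values, and the unit conversion is the standard 1 fs ≈ 41.341 a.u. of time.

import numpy as np

E = np.array([0.0, 0.166, 0.291])          # model energies (a.u.)
D = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.8],
              [0.0, 0.8, 0.0]])            # model dipole matrix (a.u.)
c0 = np.array([1.0, 0.0, 0.0])             # start in the ground state
td, fs2au = 378.4, 41.341                  # pulse duration (a.u.); 1 fs in a.u.

final_pops = []
for b_fs2 in np.linspace(-0.513, 0.513, 41):        # chirp rates in fs^-2
    b = b_fs2 / fs2au ** 2                          # convert fs^-2 to a.u.
    eps = lambda t: sinusoidal_pulse(t, 0.01, 0.166, 0.0, td,
                                     a=-b * td / 2, b=b)   # resonant mid-pulse (eq 78)
    pops = propagate(E, D, eps, c0, t_final=td, dt=0.1)
    final_pops.append(pops[-1])                     # populations just after the pulse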
These nonlinear optical processes can easily be understood by studying the populations during the interaction with the laser pulse in Figure 6. During the first half of the pulse, the instantaneous laser frequency is nearly resonant with the ground-state ¹Σ⁺ → ¹Π transition at 0.166 a.u. (f = 0.207), causing the population of the ¹Π level to increase. As the instantaneous frequency decreases, it comes closer to the transition frequencies of the ¹Π → ¹Δ and ¹Π → ¹Σ⁺ transitions at 0.138 and 0.146 a.u., respectively. Although the instantaneous frequency is closer to the ¹Π → ¹Σ⁺ transition than to the ¹Π → ¹Δ transition, the latter level becomes considerably more populated. This is explained by the difference in the oscillator strengths of these transitions: f = 0.363 for the ¹Π → ¹Δ transition compared with f = 0.076 for the ¹Π → ¹Σ⁺ transition. The differences between the EOMCC and CCLR projectors are, again, utterly insignificant, the typical rms population deviation between them being approximately 10⁻⁵ regardless of the chirp rate. As can be seen from Figure 5, the TDCCSD simulations are in excellent agreement with the TDFCI results. This is not unexpected, since all states participating in the dynamics are single-excited states. The maximum absolute deviation between CCSD linear response theory and FCI theory in the 30 excitation energies, including the excited states with significant double-excited character (down to 11% singles contribution in the EOMCC excitation amplitudes), is just 0.0013 a.u.

4.3. Dynamics Involving Double-Excited States. In general, CCSD linear response theory performs poorly for states dominated by double-excited determinants relative to the HF ground state. For such states, excitation-energy errors are typically an order of magnitude greater than for single-excited states, 86,87 roughly 0.01 a.u. for double-excited states compared with 0.001 a.u. for single-excited states of small molecules where FCI results are available. 87 In all examples presented above, the states participating significantly in the dynamics are all single-excitation dominated, explaining the close agreement observed between the TDFCI and TDCCSD simulations. With a ground-state wave function dominated by the HF ground-state determinant, one-photon transitions to excited states dominated by double-excited determinants are either electric-dipole forbidden or only weakly allowed. Accordingly, we expect double-excited states to influence laser-driven many-electron dynamics mainly through nonlinear optical processes. In order to test the influence on TDCCSD dynamics, we consider the CH+ molecule, which is a classic example of the relatively poor performance of CCSD linear response and EOMCCSD theory for such states; see, for example, refs 88−90. The ¹Σ⁺ ground state of CH+ is dominated by the 1σ²2σ²3σ² electron configuration with some nondynamical correlation contribution from the double-excited 1σ²2σ²1π² configuration. The two lowest-lying excited states form the ¹Π energy level and are dominated by the single-excited 1σ²2σ²3σ¹1π¹ configuration. The three subsequent states form two energy levels, ¹Δ and ¹Σ⁺, and are almost purely double-excited states stemming from the 1σ²2σ²1π² electron configuration. Transitions from the ground state to these levels are either electric-dipole forbidden (¹Δ) or very weak, with oscillator strengths on the order of 10⁻³.
Significant population of these levels, therefore, can only be achieved through nonlinear optical processes, requiring rather intense laser pulses. In order to make TDFCI simulations feasible for CH+, we use a reduced aug-cc-pVDZ basis set from which the diffuse p functions on hydrogen and the diffuse d functions on carbon have been removed. While removing these diffuse functions has little effect on the five lowest CCSD linear response excitation energies (rms deviation 0.001 a.u.), the effect is significant on the following 25 excitation energies, with an rms deviation of 0.021 a.u. The 31 lowest-lying states, forming 21 energy levels, computed with CCSD linear response and FCI theory are shown in the left panel of Figure 7. The ionization energy of CH+ is estimated to be 0.878 a.u. using ionization-potential EOMCCSD theory and 0.876 a.u. using FCI theory with the 6-31G* basis set. 91 Hence, all computed states are below the estimated ionization energies. We expose the CH+ ion to a sinusoidal laser pulse with intensity 2654 TW/cm² (ε_0 = 0.275 a.u.) and carrier frequency ω_0 = 0.212 a.u., which is resonant with the ¹Π → ¹Σ⁺ transition between excited states at the CCSD level of theory. The pulse is polarized along the y-axis, perpendicular to the bond axis, with duration t_d = 66.7 a.u. = 1.61 fs, corresponding to 2.05 optical cycles. The TDCCSD and TDFCI equations of motion were integrated with the sixth-order (s = 3) Gauss–Legendre integrator with time step Δt = 0.05 a.u. and convergence threshold 10⁻⁶. The resulting energy-level populations are presented in the right panel of Figure 7. Populations of the four lowest-lying levels, including the ground state, are shown along with the sum of the populations of the remaining 17 levels, labelled "Rest." The total population of all computed levels is labelled "Sum." The effect of poorly described double-excited states at the CCSD level of theory is evident, although we do observe qualitative agreement with FCI theory. We first note that the 21 levels included in the analysis only account for about 80% of the norm of the FCI wave function at the end of the pulse, implying that a physically correct description must also take ionization processes into account. This, of course, is not surprising, considering the high intensity of the pulse. The TDFCI and TDCCSD populations agree reasonably well during the first 0.75 fs of the simulation, whereas the TDCCSD errors increase as the simulation progresses. Involving a large number of states, the dynamics is considerably more complex than in the cases presented above. For simplicity, we use a classification of the 30 excited states based on their single- or double-excitation character. Excited states with more than 90% contribution from singles in the EOMCCSD amplitude norm are classified as single-excited states, while states with less than 10% singles contribution are classified as double-excited states. States with 10−90% singles contribution are mixed states, classified as singles-dominated (>50% singles contribution) or doubles-dominated (<50% singles contribution). Thus, the 30 excited CCSD states can be grouped into 7 single-excited states, 4 singles-dominated states, 5 doubles-dominated states, and 14 double-excited states. The total population of each class of states is presented in Figure 8.
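The classification just described maps directly onto a small helper function; the thresholds are those quoted in the text, and the function name is illustrative.

def classify(singles_weight):
    # Classify an excited state by the singles contribution to the
    # EOMCCSD amplitude norm, using the thresholds from the text.
    if singles_weight > 0.9:
        return "single-excited"
    if singles_weight > 0.5:
        return "singles-dominated"
    if singles_weight > 0.1:
        return "doubles-dominated"
    return "double-excited"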
After about 0.4 fs, the laser pulse induces transitions from the ground state into singles-dominated states, followed by transitions (from both the ground state and the singles-dominated excited states) into single-excited states. Double-excited or doubles-dominated states are barely populated at this stage, explaining the reasonable agreement with the TDFCI populations. Roughly half-way through the simulation, double-excited states and, to a smaller extent, doubles-dominated states become populated, mainly through transitions from singles-dominated states. As soon as these processes occur, the agreement between TDCCSD and TDFCI deteriorates.

4.4. Population Conservation. As discussed in Section 2.6, stationary-state populations in the absence of external forces are strictly conserved in the FCI limit but may vary when the cluster operators are truncated. In order to investigate the breaking of the population conservation law within TDCCSD theory, we have conducted simple numerical experiments with several of the systems presented above. We apply the same sinusoidal laser pulses as above but continue the propagation after the pulses have been turned off, recording the stationary-state populations using the CCLR and EOMCC projectors. Figure 9 shows the conservation of the TDCCSD populations after the laser pulses have been turned off for the He atom with the aug-cc-pVTZ basis set. A maximum absolute deviation in the populations of 1.5 × 10⁻³ is observed for the 1 ¹S and 2 ¹P levels, whereas the deviations for the remaining levels are at least 1 order of magnitude smaller. These deviations from exact conservation are likely caused by the discretization of the numerical integration. Slightly larger deviations from strict conservation are observed for the Be atom with the aug-cc-pVTZ basis set in Figure 10. The maximum absolute deviation of 0.005 is observed for the 1 ¹P level. Caused by the truncation of the cluster operators in conjunction with discretization, this deviation is three times greater than that observed for He above. Only weak oscillatory behavior is observed, indicating that the states involved in the dynamics are very well approximated at the CCSD level of theory. Energy-level populations for LiH during and after interaction with a chirped laser pulse are plotted in Figure 11. The chirp rate b = 0.03078 fs⁻² yields a final state dominated by the ¹Π level, and the populations remain constant to an excellent approximation after the interaction ceases. The maximum absolute deviation is observed for the ¹Π level and is on par with that observed for the He atom: 1.7 × 10⁻³. As we saw above, the TDCCSD method is a much poorer approximation to TDFCI theory in the case of CH+, where double-excited states participate significantly in the dynamics. On these grounds, we expect much poorer conservation of the energy-level populations after interaction with the laser pulse and, indeed, large deviations can be seen in Figure 12. The depletion of the ground state appears to continue after the pulse, and irregular, weakly oscillatory behavior is observed for the ¹Π level, with a maximum absolute deviation of 0.072, which is an order of magnitude greater than the deviations found for He, Be, and LiH above. The double-excited ¹Δ and ¹Σ⁺ levels show maximum absolute deviations of 0.009 and 0.019, respectively.

4.5. Pump Spectrum of LiF. Skeidsvoll et al.
recently reported a theoretical TDCCSD study of transient X-ray spectroscopy of the LiF molecule.38 They applied a pump-probe laser setup with an optical pump pulse resonant with the lowest-lying dipole-allowed transition from the ground state, followed, at various delays, by an X-ray probe pulse resonant with the first dipole-allowed core excitation. The resulting time-resolved spectra were interpreted by means of excitation energies from EOMCC theory and core-valence separated EOMCC theory.92 The pump absorption spectrum reported in Figure 7 of ref 38 contains weak unassigned features, one weak absorption above the two low-lying valence excitations and two very weak features below, which the authors speculated were due to two-photon absorptions. We will now use the EOMCC and CCLR projectors to investigate what might cause these weak features of the pump absorption spectrum. We use the same basis set, denoted aug-cc-p(C)VDZ, as in ref 38: the aug-cc-pVDZ basis set for Li and the aug-cc-pCVDZ basis set for F. The closed-shell TDCCSD equations of motion were integrated using the sixth-order (s = 3) Gauss-Legendre integrator with time step Δt = 0.025 a.u. = 0.60 as and convergence threshold 10⁻⁶ for the fixed-point iterations. Initially in the ground state, we expose the LiF molecule to a shortened but otherwise identical z-polarized Gaussian laser pulse to that in ref 38, with field strength ε0 = 0.01 a.u. (peak intensity I = 3.51 TW/cm²), carrier frequency ω0 = 0.2536 a.u., and Gaussian rms width σ = 20 a.u. The shortening consists of choosing the central time t0 = 80 a.u. (compared with t0 = 160 a.u. in ref 38) and N = 4 (compared with N = 8 in ref 38). This implies that the electric-field amplitude jumps from zero to 3.3 × 10⁻⁶ a.u. at t = 0 a.u. and from 3.3 × 10⁻⁶ a.u. to zero at t = 160 a.u., whereas virtually no discontinuities can be observed with the pulse parameters used in ref 38 (they are on the order of 10⁻¹⁶ a.u.). Since the pump pulse is quite weak, the effects of these discontinuities on the populations are negligible, as can readily be verified using a simple eigenstate expansion analogous to the one used for He and Be in Section 4.1. Our TDCCSD results are presented in Figure 13. The first 30 excited states (20 energy levels, left panel of Figure 13) are all single-excitation-dominated states, with 94.9-95.4% singles contribution to the norm of the EOMCCSD amplitudes. The highest-lying states are somewhat above the first ionization energy of LiF, which we estimate to be about 0.4 a.u. (11 eV) based on data available in ref 83. The modest intensity of the pump pulse results in fairly little excitation from the ground state, well within reach of a perturbation-theoretical (Fermi's golden rule) treatment. In agreement with ref 38, the projectors predict the absorption to be dominated by the B¹Σ⁺ and E¹Σ⁺ states. The final population of the latter is roughly 53% of the former, in good agreement with the relative intensities of the pump spectrum reported in Figure 7 of ref 38. The population of the E¹Σ⁺ state reaches its maximum value 0.0025 at t = 2.26 fs; at the same time, the ground-state population reaches its minimum value 0.9945, indicating that the ensuing decay of the E¹Σ⁺ population is caused by transition back to the ground state.
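To see how the quoted endpoint amplitudes arise, the short Python sketch below evaluates a Gaussian envelope ε0·exp(−(t − t0)²/(2σ²)) at t = 0 for both central times; this envelope form is our assumption for illustration, since the exact pulse definition of ref 38 is not reproduced here.

import numpy as np

eps0, sigma = 0.01, 20.0            # field strength and rms width, a.u.
for t0 in (80.0, 160.0):            # shortened pulse vs the pulse of ref 38
    amp = eps0 * np.exp(-t0**2 / (2 * sigma**2))
    print(f"t0 = {t0:5.1f} a.u. -> envelope at t = 0: {amp:.2e} a.u.")
# t0 =  80.0 a.u. -> 3.35e-06 a.u. (the visible jump quoted above)
# t0 = 160.0 a.u. -> 1.27e-16 a.u. (numerically negligible)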
The weak feature at higher frequency (at about 9.1 eV) in the pump spectrum of ref 38 is seen to be consistent with a one-photon transition from the ground state to the H¹Σ⁺ state, whose final population is about 5% of that of the B¹Σ⁺ state. As speculated by Skeidsvoll et al.,38 the two very weak features below the B¹Σ⁺ line in ref 38 are indeed seen to arise from direct two-photon absorptions from the ground state to the L¹Σ⁺ and R¹Σ⁺ states. The only alternative explanation would be excitations between excited states, but this mechanism can almost certainly be ruled out, since the population of these states starts before other excited levels are significantly populated and since no other excited states are depleted as the populations of these states increase. Although the CCLR and EOMCC populations largely agree, with an overall rms deviation of 7 × 10⁻⁶, the CCLR populations show spurious high-frequency oscillations in Figure 13. The oscillations are caused by the off-diagonal contributions from ⟨Ψ̃ₙ| (eq 49) to the CCLR projector (eq 47). While these contributions are required to ensure proper size-intensivity of one-photon transition moments, they also cause nonorthogonality of the CCLR excited-state representation, as expressed by eq 57. Since the CCLR and EOMCC projectors provided virtually identical results in the cases above, this observation serves as a recommendation of the EOMCC projector for the calculation of stationary-state populations. Figure 14 depicts a normalized pump spectrum of LiF generated from the final EOMCC populations by assigning each excited level a Lorentzian line shape, where γ = 0.01 eV is an artificial Lorentzian broadening of the excited levels. This approach implicitly assumes that the excited states only become populated through one-photon absorption from the ground state, thus excluding all nonlinear optical processes. The population-based pump spectrum agrees remarkably well with that reported by Skeidsvoll et al.,38 which was properly generated from Fourier transformation of the induced dipole moment. This supports the conclusion that the low-frequency features are two-photon absorptions and further strengthens the confidence in the proposed EOMCC projector for the calculation of stationary-state populations in TDCC simulations.
CONCLUDING REMARKS
We have proposed projectors for the interpretation of many-electron dynamics within TDCC theory in terms of the populations of stationary states. Two conditions are used to define suitable projectors from CCLR theory and from EOMCC theory: (i) the projector must reproduce the correct form of one-photon transition strengths, and (ii) the projectors must yield populations that converge to the FCI results in the limit of untruncated cluster operators. The CCLR and EOMCC projectors are tested numerically at the TDCCSD level of theory for the laser-driven dynamics in the He and Be atoms and in the molecules LiH, CH⁺, and LiF. It is demonstrated that the populations provide valuable insight into the linear and nonlinear optical processes occurring during the interaction of the electrons with laser pulses. For the He atom, it is verified numerically that the populations computed from both the CCLR and EOMCC projectors agree with those computed from TDFCI simulations. For Be and LiH, the CCLR and EOMCC populations show excellent agreement with the TDFCI populations, since the excited stationary states involved in the dynamics are dominated by single-excited Slater determinants.
Such states are generally well described by CCSD theory. For CH⁺, we deliberately design the laser pulse such that double-excitation-dominated states become populated, which reduces the agreement between TDFCI and TDCCSD populations. This is also reflected in the studies of the conservation of populations after the laser pulses are turned off. Theoretically, the TDCC populations will only be strictly conserved in the FCI limit. Numerically, we find that they are nearly conserved as long as the participating stationary states are well approximated at the CCSD level of theory. Finally, for LiF, we use the CCLR and EOMCC projectors to explain unassigned weak features in a theoretical TDCCSD pump spectrum reported recently.38 Overall, the CCLR and EOMCC projectors yield very similar excited-state populations, with typical rms deviations on the order of 10⁻⁵. For LiF, however, we observe small-amplitude high-frequency oscillations of the excited-state populations computed with the CCLR projector. Originating from a contribution that vanishes in the FCI limit, we speculate that such oscillations may increase for larger and more complex systems where TDCCSD theory may be further from TDFCI theory. Not showing signs of such spurious behavior, the EOMCC projector appears more attractive than the CCLR projector. This has the added benefit that the additional response equation (eq 51) need not be solved, thus making the EOMCC projector less computationally demanding than the CCLR projector. These findings call for further research aimed at a fully consistent definition of excited states in CC theory, and work in this direction is in progress in our labs.
APPENDIX: FCI Limit
The CCLR projector, eq 47, becomes identical to the EOMCC projector, eq 33, in the FCI limit. To show this, we now demonstrate that the two projectors coincide in the FCI limit. In the FCI limit, the EOMCC wave functions satisfy the time-independent Schrödinger equation and its Hermitian conjugate, where the ground- and excited-state wave functions constitute a biorthonormal set according to eqs 39 and 40. The resolution of the identity then holds, where 1 is to be understood as the identity operator. To verify the resolution of the identity, one simply inserts the definitions of the wave functions and exploits the biorthonormality of the Jacobian eigenvectors, eq 37, along with the completeness of the underlying determinant basis. According to eq 51, the amplitudes n̄_μ can be recast using eq 37. Expanding the nested commutator and using eqs 81-83 and 39, we find
14,708.4
2020-09-21T00:00:00.000
[ "Physics" ]
Implementation architecture of signal processing in pulse Doppler radar system based on FPGA : A high-speed, parallel hardware architecture is proposed and designed here for the digital signal processing of a high-frequency pulse Doppler radar. The platform is based on one XC7K410T FPGA, two XC7K325T FPGAs, one TMS320C6678 DSP, and four sets of MT41J256M8 DDR3. The details of the implementation, including pulse compression, moving target detection and constant false-alarm rate processing, are described. The simulation results and resource consumption are presented to demonstrate the advantages of the proposed FPGA implementation architecture.
Introduction
Pulse Doppler (PD) radar is widely applied on many occasions such as moving target detection (MTD), range measurement, and velocity measurement [1], since it has strong anti-interference ability and its echo is unlikely to suffer velocity ambiguity in high pulse repetition frequency (HPRF) mode [2]. The PD radar system can obtain attractive time resolution and high range resolution by adopting the linear frequency modulation (LFM) waveform as the transmitted signal [3,4]. Complete radar signal processing is complex and flexible, involving a significant amount of data and instructions. Moreover, it needs to be implemented in real time. The field-programmable gate array (FPGA) has a large number of hardware resources such as logic gates and flip-flops. Meanwhile, this highly integrated device has high parallel processing speed and can be reconfigured flexibly to meet the demands of different applications. Therefore, the FPGA has great advantages in radar signal processing [5]. This paper proposes a new complete hardware architecture to implement the digital signal processing. The platform utilises FPGAs as the core devices, equipped with a digital signal processor (DSP), analogue-to-digital converters (ADCs) and DDR3. The FPGAs implement digital down-conversion (DDC), digital beamforming (DBF), pulse compression, MTD, and CFAR, while the DSP implements channel calibration and azimuth angle merging. The processed data of the FPGAs are transmitted to the DSP via a RapidIO interface and then to the host computer via Ethernet. At the end of the paper, we analyse the designed hardware architecture and the resource consumption of the FPGAs in the system. In addition, the simulation results of the signal processing are presented, followed by an analysis of the problems that appeared in the practical experiments.
Radar system
The proposed PD radar system is mainly used for surveillance, with varying scope in different modes. At the same time, it measures the distance, speed, altitude, and azimuth of the targets. Its detection targets are low-altitude, slow, small objects such as fixed-wing or rotor-based unmanned aerial vehicles (UAVs), sea-going boats, pedestrians, and vehicles in coastlines, airports and other open spaces. The radar supports three types of scanning in azimuth, i.e. circular sweep, fan sweep, and fixed-point sweep. In total, 13 beams with a beamwidth of 5 degrees, scanning from −5 degrees to 55 degrees in elevation, are formed from the signals of 16 receiver channels. Six kinds of waveforms are designed to meet the different detection requirements. Each transmitted waveform comprises two pulses to reduce the blind zone, where the first is a cosine pulse and the second is an LFM pulse, while the pulse width and pulse repetition time differ from waveform to waveform.
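For concreteness, the following Python sketch (using NumPy) generates one such two-pulse waveform; every numeric parameter is an illustrative placeholder rather than one of the radar's actual six waveform designs.

import numpy as np

# One transmitted waveform: a short cosine pulse followed by an LFM pulse.
fs = 100e6                                # sample rate, Hz (assumed)
f0 = 10e6                                 # carrier (IF) frequency, Hz (assumed)
tau1, tau2 = 5e-6, 20e-6                  # cosine and LFM pulse widths (assumed)
B = 20e6                                  # LFM sweep bandwidth, Hz (assumed)
t1 = np.arange(int(tau1 * fs)) / fs
t2 = np.arange(int(tau2 * fs)) / fs
p1 = np.cos(2 * np.pi * f0 * t1)                               # first pulse: cosine
p2 = np.cos(2 * np.pi * (f0 * t2 + 0.5 * (B / tau2) * t2**2))  # second pulse: LFM
waveform = np.concatenate([p1, np.zeros(200), p2])             # small gap between pulses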
The radar system comprises seven modules: a signal generator, a set of substrate-integrated-waveguide-based phased-array antennas, a set of compact integrated transceivers, a signal processing board integrated primarily with three FPGAs and one DSP, a power supply, a servo, and a computer. Fig. 1 shows the complete signal processing procedure, including the pre-processing, pulse compression, MTD, and constant false-alarm rate (CFAR) stages. The 16 radar receivers convert all 16 received echoes from radio frequency to intermediate frequency (IF) before they are transmitted to the ADCs, on which the 16 channels of IF analogue signals are converted to digital signals. Then, the 16 IF digital signals are transferred to the FPGA, where they are converted into orthogonal baseband signals by the DDC module, as shown in Fig. 1.
Pre-processing implementation on FPGA
After the above frequency conversion, calibration of the 16 channels is performed to eliminate the effect of phase and time deviations between them for the subsequent DBF processing. When the radar is powered on or the host computer sends the channel calibration command, the DSP processes the 16 baseband signals and obtains the channel calibration coefficients. Then, the DSP sends the obtained channel calibration coefficients together with the fixed DBF coefficients to the FPGA through the external memory interface (EMIF). The baseband signals for each channel are multiplied by the channel calibration coefficients. Then, the calibrated signals are fed into the DBF sub-module so that the 13 digital beams are formed.
Pulse compression implementation on FPGA
Pulse compression can achieve the long-range detection performance of a long pulse without compromising on range resolution, which is important in small target detection, especially for the transmitted LFM signal [6]. The expression of the LFM signal can be written as s(t) = A·rect(t/τ)·exp[j(ω0t + πμt²)], where A is the signal amplitude, τ is the pulse width, ω0 is the carrier frequency of the signal, and μ is the frequency modulation rate. In general, there are two implementation methods for pulse compression. One is the time-domain convolution method, implemented with a finite impulse response (FIR) filter; the other is the frequency-domain multiplication method, implemented with the fast Fourier transform (FFT). The proposed radar system adopts the second one in order to achieve a large compression ratio and reduce the computational complexity. Pulse compression can improve the signal-to-noise ratio (SNR) for target detection, but the side lobes of large targets may overwhelm small ones. Hence, windowing is adopted to suppress the side lobes. The implementation block diagram of pulse compression is shown in Fig. 1. First, the complex conjugate of the frequency spectrum of the transmitted LFM signal is obtained in Matlab as the matched filtering coefficients. Then, the coefficients are processed by a Hamming window of the corresponding number of points. The obtained results are defined as the modified matched filtering coefficients in this paper, and they are stored in the read-only memory (ROM) of the FPGA. Finally, the spectrum of the LFM signal is multiplied by the modified matched filter coefficients, and an inverse fast Fourier transform (IFFT) is performed to obtain the pulse compression result (a software sketch of this procedure is given at the end of this section).
MTD implementation on FPGA
Pulse compression increases the SNR so that the target can be detected from noisy signals more efficiently.
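The frequency-domain pulse compression just described can be sketched in Python as follows: the reference LFM is Hamming-windowed and conjugated in the frequency domain (playing the role of the modified matched-filter coefficients stored in ROM) and applied to a delayed echo. All numeric parameters are illustrative placeholders.

import numpy as np

N = 4096                                  # FFT length (assumed)
fs, B, tau = 100e6, 20e6, 20e-6           # sample rate, LFM bandwidth, pulse width (assumed)
t = np.arange(int(tau * fs)) / fs
ref = np.exp(1j * np.pi * (B / tau) * t**2)                  # baseband LFM replica
coeff = np.conj(np.fft.fft(ref * np.hamming(len(ref)), N))   # windowed, conjugated spectrum
echo = np.zeros(N, dtype=complex)
echo[500:500 + len(ref)] = ref            # target echo delayed by 500 samples
compressed = np.fft.ifft(np.fft.fft(echo) * coeff)
print(np.argmax(np.abs(compressed)))      # ~500: the compressed peak marks the range bin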
However, when the target and the clutter appear in the same range cell, the targets may not be successfully detected, especially small ones. The most efficient way to distinguish moving targets from clutter is to use the Doppler frequency shift produced by the relative motion between the radar and the targets. Due to the unpredictable Doppler frequency of the target, a set of adjacent and partially overlapping narrow-band filters is needed to cover the entire Doppler frequency range and filter the moving target echo. Examining the output of each narrow-band filter effectively detects the presence of a moving target, and the Doppler frequency obtained from the responding filter corresponds to the speed of the moving target. The Doppler filter bank is the key technique of MTD, and it can be achieved by FIR filters in the time domain or FFT filters in the frequency domain. The latter is adopted in the proposed system to reduce the computational complexity. The block schematic of MTD is also shown in Fig. 1. The data received from the pulse compression module are stored in DDR3. Then, the slow-time-dimension data are read from DDR3 and multiplied by the Hamming window. The multiplied data are input to the FFT filter banks, and the modulus of the I/Q data is calculated from the output of the set of narrow-band filters.
CFAR implementation on FPGA
The CFAR technique is a signal processing algorithm that provides a detection threshold for the detection strategy in an automatic radar detection system and minimises the effect of clutter and interference on the false alarm probability [7]. Different types of background clutter distribution lead to different CFAR detection performances. The proposed system adopts the cell-averaging CFAR (CA-CFAR) detection method with seven reference cells and five averaging cells, which performs well when the clutter follows a Rayleigh distribution; a generic software sketch of CA-CFAR is given at the end of this section. The value of the cell under test is divided by a threshold value and then compared with the average of the five averaging cells. The threshold value is determined by a command with an integer value ranging from 1 to 9, sent from the host computer. The CFAR detection process is illustrated in Fig. 1. The MTD results for seven adjacent beams are stored in DDR3 at first. Then, six sum-beams are obtained by adding adjacent pairs of beams. CFAR detection is then performed on the slow-time-dimension and fast-time-dimension data of the sum beams. Finally, the range cell number and speed cell number of the cells detected by CFAR are transmitted to the DSP, together with the corresponding magnitudes.
Implementation platform
This radar system uses FPGAs instead of a DSP to implement the signal processing, in order to cope with the huge amount of computation; the architecture of the processor platform is given in Fig. 2. Fig. 3 shows the fabricated signal processing board, on which three FPGAs are integrated. One FPGA is a Xilinx Kintex-7 XC7K410T, denoted F1, and the other two are Xilinx Kintex-7 XC7K325T FPGAs, denoted F2. The DDC, channel calibration, DBF and pulse compression modules are implemented on F1, while the MTD and CFAR modules are implemented on the F2s. When pulse compression is completed, the data of a series of cascaded pulse repetition times (PRTs) are transmitted to the MTD module through the Gigabit Transceiver (GTX). In every PRT, there are 13 beams, which are transmitted serially.
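Returning to the CFAR stage described above, the following Python sketch implements a generic sliding-window CA-CFAR. The board's exact arrangement of seven reference cells and five averaging cells, and its integer threshold command, are not fully specified above, so the cell counts and scale factor here are assumptions for illustration only.

import numpy as np

def ca_cfar(x, n_ref=8, n_guard=2, scale=4.0):
    # Cell-averaging CFAR: declare a detection when the cell under test
    # exceeds the scaled mean of the surrounding reference cells.
    det = np.zeros(len(x), dtype=bool)
    half = n_guard + n_ref
    for i in range(half, len(x) - half):
        left = x[i - half : i - n_guard]
        right = x[i + n_guard + 1 : i + n_guard + 1 + n_ref]
        noise = np.mean(np.concatenate([left, right]))
        det[i] = x[i] > scale * noise
    return det

# Example: one strong cell embedded in Rayleigh-like clutter.
rng = np.random.default_rng(0)
x = np.abs(rng.normal(size=200) + 1j * rng.normal(size=200))
x[100] += 20.0
print(np.flatnonzero(ca_cfar(x)))   # expected to report index 100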
Due to the limited resources of each F2, the 13 beams are processed on two F2s: one handles the beams indexed from 0 to 6 and the other processes those indexed from 6 to 12. In order to make the programs of the two F2s identical, the No. 6 beam is processed on both. Each F2 is equipped with two sets of MT41J256M8 DDR3, whose storage capacity is 1 Gb with eight banks, where each bank includes 16k rows and 1k columns. In this radar system, all the pulses in a frame must be read from the DDR3 and processed within a quarter of the frame time. Therefore, the DDR3 should store at least 1.25 frames of data, and the read bandwidth should be greater than the write bandwidth.
Resource consumption
The Xilinx Kintex-7 FPGA offers a large improvement over the previous generation. Tables 1 and 2 show the resource consumption of F1 and F2, respectively. It can be observed that the LUT, BRAM, IO, and DSP resources are heavily consumed. In fact, each F2 is equipped with two sets of DDR3, which can store 1.25 frames of data and reduce the resource consumption of F2.
Simulation results
In the radar system, a signal coupled directly from the transmit channel, without radiating through the antennas, is used as the echo signal for radar self-test. After the aforementioned signal processing, the results are displayed on the host computer for a target at a distance of 2 km and a speed of 20 m/s, as shown in Fig. 4. Fig. 4a shows the pulse compression result; the peak of the waveform appears at 2 km, corresponding to the distance of the simulated target. Then, MTD is performed, and the results are shown in Fig. 4b and c in the speed and distance dimensions, respectively. We can see that the SNR of the distance waveform is improved and the peak is still at 2 km. Finally, Fig. 4d shows the CFAR result, represented by the moving dot trace when the radar works in sweep mode with detection results of 100 frames in one circle; each dot corresponds to the target detected by the CFAR module in one frame.
Problems in the experiments
During the experiments, some problems appeared that caused incorrect MTD results and gave rise to wrong detection results. Fig. 5 shows the waveforms before and after MTD processing under different faulty conditions. The correct MTD result with a normal signal is shown for comparison in Fig. 5a. When the I and Q signals generated by the DDC module are not orthogonal, a component appears at the image frequency corresponding to the signal frequency, as shown in Fig. 5b. Fig. 5c shows the result for I and Q signals generated from the DDS with discontinuous phase between adjacent frames, which leads to two peaks at very close frequencies and a widening of the main lobe. Fig. 5d shows glitches in the signal due to poor timing in the program; even a few small glitches raise the noise floor.
Conclusion
In this paper, a real-time architecture of a PD radar processor based on FPGAs is introduced. Then, the implementation of the signal processing algorithms is presented with detailed procedures. In addition, the resource consumption of the FPGAs and the problems encountered in the experiments are analysed. The simulation results show that the radar system can detect the target and accurately obtain its distance, speed, azimuth and altitude.
Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grant no. 61501033.
3,133.2
2019-07-11T00:00:00.000
[ "Engineering", "Computer Science" ]
GC-MS analysis of Hemidesmus indicus (L.) R. Br. roots and quantification of 2-hydroxy-4-methoxy benzaldehyde through RP-HPLC
Hemidesmus indicus is a commercially important medicinal crop that catches the attention of pharmaceuticals as well as the food and flavoring industry. The current investigation intends to reveal the possibilities associated with morphologically distinct H. indicus plants. The GC-MS analysis carried out with broad-leafed H. indicus root extracts resulted in the identification of 40 compounds. Among all the compounds, 2-hydroxy-4-methoxy benzaldehyde (MBALD) was found to be the major chemical entity. Further, RP-HPLC analysis was carried out to determine the amount of MBALD present in the roots of two morphologically distinct H. indicus plants, which was found to be 0.1827 and 0.1537 mg/gram of tissue for the slender and long-leafed and the broad-leafed plants, respectively.
INTRODUCTION
Hemidesmus indicus (L.) R. Br. belongs to the Apocynaceae family; it is a perennial, twining, semi-erect woody shrub, represented by 264 vernacular names across the world, and is popularly known as Indian sarsaparilla (Kher et al., 2020). Indian sarsaparilla differs from true sarsaparilla (Smilax febrifuga) and exists as Hemidesmus indicus var. indicus and Hemidesmus indicus var. pubescens. This medicinal plant is mainly found in South Asian countries, specifically India, Pakistan, Sri Lanka, Bangladesh, and the Maldives (George et al., 2008). Ethnobotanical knowledge of H. indicus has served people for a long time, and it is mentioned in Ayurveda among the Rasayana plants under the name Anantmool (Ved and Goraya 2007). Due to the presence of an aromatic compound, 2-hydroxy-4-methoxy benzaldehyde (MBALD), the roots of this plant have a strong aroma, which makes them an ingredient in sherbet, flavored sweet drinks, and bakery products (Patnaik and Debata, 1996). The roots make the plant commercially valuable and medicinally useful, as they possess highly desirable therapeutic properties (Kawlni et al. 2017). The root decoction of this plant has shown anti-cancer activity against colorectal cancer (Turrini et al., 2018) and breast cancer (Suryavanshi et al., 2019). This study briefly describes the extraction methodologies used in the preparation of solvent root extracts, reviews the utility of gas chromatography-mass spectrometry for the analysis of compounds present in H. indicus, and presents the procedure for quantification of the commercially valuable compound MBALD.
Gas chromatography-mass spectrometry analysis
Collection of plant sample: Broad-leafed H. indicus plants were collected from the foothills of Rajapalayam (9°43′13.9″N, 77°44′10.3″E) and were maintained at the greenhouse, TNAU, Coimbatore. The plants were kept under acclimatizing conditions for normal growth. Two-month-old transferred plants were uprooted; the roots were separated, washed under running tap water, and kept for drying until further use.
Preparation of solvent root extract: For the preparation of the solvent extract from the roots of H. indicus, a modified method of Alade and Irobi (1993) was used. The dried roots were ground to a fine powder. Ten grams of root powder were soaked in methanol (100%) overnight, and the mixture was filtered through Whatman filter paper No. 1. The filtrate was concentrated using a Soxhlet extractor and was used for GC-MS analysis.
GC-MS condition: One mL of the methanolic root extract was injected for GC-MS analysis using a Perkin Elmer GC-MS Clarus® SQ 8 equipped with a DB-5MS (Agilent, USA) standard non-polar capillary column (0.25 mm ID × 0.25 μm film thickness × 30 m length). The instrument was set to an initial temperature of 40 °C, the injection port temperature was held at 220 °C, the interface temperature was set to 250 °C, and the source was kept at 220 °C; the oven temperature was programmed at 75 °C for 2 min, then ramped at 10 °C/min to 150 °C and further at 10 °C/min up to 250 °C. The GC conditions were: 1:12 split, helium carrier at 20 psi. The MS conditions were: positive ion mode, electron impact spectra at 70 eV. The mass spectral scan range was set at 50 to 600 Da.
Reverse-phase high-performance liquid chromatography
Explant material: In the current investigation, two morphologically different H. indicus plants, slender and long-leafed (SLL) and broad-leafed (BL) (figures 1 and 2, respectively), were taken for the quantification of 2-hydroxy-4-methoxy benzaldehyde from the roots. The dried roots of the two morphologically different plants were obtained from Rajapalayam (9°43′13.9″N, 77°44′10.3″E). The roots were finely powdered and kept separately in airtight containers.
Reagents and chemicals: HPLC-grade methanol and trifluoroacetic acid (TFA) were used during the study, and the other reagents were prepared using Milli-Q water.
Sample extract preparation: For the preparation of the methanolic root extract, the protocol of Sircar, Dey, and Mitra (2007) was followed. One gram of powder was extracted with 20 mL of aqueous methanol (50:50, v/v). The extract was incubated for 2 days with continuous shaking and was subjected to centrifugation at 10,000 rpm for 10 minutes. The supernatant was first filtered with Whatman filter paper No. 1 and further filtered through a 0.22 µm filter. 20 µL of the sample was injected into the HPLC system.
Solvent preparation: HPLC-grade (98%) 2-hydroxy-4-methoxy benzaldehyde standard was used for the study. One milligram of standard MBALD purchased from Sigma-Aldrich (catalogue no. 160695, molecular weight 152.15, PubChem Substance ID 24849887) was dissolved in 1 mL of aqueous methanol (50:50, v/v) to prepare a 1000 ppm standard stock solution. A working standard solution with a concentration of 10 ppm was prepared by diluting the standard stock solution with aqueous methanol (50:50, v/v). The isocratic solvent mixture (mobile phase) was prepared by mixing 1 mM aqueous TFA and methanol in a 70:30 ratio. All stock solutions were stored at 4 °C.
HPLC condition: A Shimadzu HPLC system equipped with a UV-Vis detector was used for the RP-HPLC analysis. The separation of compounds was achieved on a C18 reversed-phase column (INNO column, 5 µm, 120 Å, 4.6 × 250 mm). Shimadzu CLASS-VP™ software was used for data acquisition, processing, and reporting on the Windows XP platform. The wavelength was set to 280 nm for monitoring the chromatograms. The mobile phase was maintained at a flow rate of 1 mL min⁻¹. The sample peak was identified by comparing its retention time with that of the standard under the same conditions. Quantification of MBALD was done using the retention time and peak area obtained from the chromatogram, from which the percentage of 2-hydroxy-4-methoxy benzaldehyde in the dried root extract was calculated (an illustrative calculation is sketched at the end of this subsection).
Phytochemical analysis through GC-MS
The gas chromatogram obtained after GC-MS analysis defined the relative concentrations of the different compounds being eluted as a function of retention time.
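Since the quantification formula itself is not reproduced above, the following Python sketch shows a common single-point external-standard calculation consistent with the described setup (a 10 ppm standard, 1 g of powder extracted into 20 mL); the peak areas are hypothetical values, and the authors' exact formula may differ.

# Hypothetical single-point calibration against the 10 ppm MBALD standard.
area_std, conc_std = 152340.0, 10.0     # standard peak area (hypothetical), ppm
area_sample = 98500.0                   # sample peak area (hypothetical)
conc_ppm = conc_std * area_sample / area_std   # µg/mL MBALD in the injected extract
# 1 g of root powder extracted into 20 mL -> mg of MBALD per gram of tissue:
mg_per_g = conc_ppm * 20.0 / 1000.0
print(round(mg_per_g, 4))               # ~0.1293 mg/g for these example areas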
The peak heights indicated the relative concentrations of the various compounds present in the methanolic extract, whereas the mass spectrometer analysed the compounds and identified their nature and structure based on their elution times (Krishnamoorthy and Subramaniam, 2014). The methanolic root extract of broad-leafed H. indicus showed the presence of a greater variety of compounds compared to those published in the earlier literature on H. indicus. A total of 40 compounds were identified in the GC-MS analysis (figure 3), and the identification of the compounds in the extract was based on the peak area, which represents the percentage of that particular compound as deciphered from the chromatogram, and on the molecular weight of the compound. The list of all the compounds is given in table 1. Among all the compounds, the major volatile compound, 2-hydroxy-4-methoxy benzaldehyde (peak area 11.071%), was present in abundance, forming the major entity. Due to the presence of MBALD in large proportion, it could be utilized in various sectors such as pharmaceuticals, the food industry, and anti-microbial formulations. Anti-bacterial and anti-fungal activity of MBALD was deduced by Mehmood, Dixit, and Singh (2016), who reported the hexane extract prepared from the roots of H. indicus to be effective, with a maximum zone of inhibition of 22 mm against Staphylococcus aureus and a zone of 18 mm for Candida albicans, with zero cytotoxicity. MBALD is a non-toxic aromatic benzaldehyde that confers antimicrobial and anti-aflatoxigenic potency and antioxidant properties (Harohally et al., 2017). It has good water solubility and is a potent tyrosinase inhibitor (Ley and Bertram 2001). Incubation of hepatoma HepG2 cells with root decoction of H. indicus in the dose range of 0-50 mg/mL caused significant inhibition of the growth of the HepG2 cell line, making it amenable to further use in cancer studies (Thabrew et al., 2005).
Figure 1. Slender and long-leafed plant of H. indicus
Quantification of MBALD through RP-HPLC
The percentage of MBALD present in the two morphologically distinct plants of H. indicus (SLL and BL) was quantified through HPLC using the methanolic root extracts.
The chromatograms of the standard and of the SLL and BL root extracts are shown in Figures 4, 5, and 6, respectively. Quantification of MBALD was done using the formula mentioned in materials and methods. It was found that the methanolic root extracts of the slender and long-leafed (SLL) and broad-leafed (BL) Hemidesmus indicus plants contain 0.1827 and 0.1537 mg of MBALD per gram of tissue, respectively. As the compound 2-hydroxy-4-methoxy benzaldehyde is an isomer of vanillin and is an aromatic compound, the demand for its supply in the food and flavoring industry is high (Rathi et al., 2017). In an earlier report, the quantity of MBALD was found to be 0.2638 mg/gram of tissue (Prathibha Devi et al., 2016), which is higher than the values obtained in the current investigation. Methanolic root extracts of seven ecotypes showed variation in HPLC analysis, and the recorded 2-hydroxy-4-methoxy benzaldehyde concentration was highest in ecotype 6 and lowest in ecotype 3 (Rathi et al., 2017). The variation in the accumulation of a particular compound in a specific plant part depends on various factors such as genotype, plant physiology, climate, environmental conditions, and pathogens, and in some cases the secondary metabolites are only produced during certain developmental stages (Gonçalves and Romano, 2018). Comparatively, the SLL type of H. indicus was found to be more promising as a source of MBALD for commercial production.
CONCLUSION
From the current investigation, it can be concluded that Hemidesmus indicus contains pharmaceutically important and industrially desirable compounds, which places it at the frontier of medicinally important plants. 2-Hydroxy-4-methoxybenzaldehyde is an outstanding food-flavoring metabolite widely used in food preparation; it is also known to inhibit the activity of the enzyme acetylcholinesterase and hence may be used in treating neurodegenerative disorders such as Alzheimer's disease. The exact role of MBALD in cancer studies needs to be determined to reveal its effects on the mechanism of growth inhibition of cancer cells, which might lead to natural and low-cost drugs to fight cancer with minimal or no side effects. Further, the other compounds present in the roots of H. indicus serve a wide spectrum of utilities, which need to be highlighted in future studies. Moreover, to meet the growing demand for MBALD, more studies on secondary metabolite production are the need of the hour.
Ethics statement
No specific permits were required for the described studies because no human or animal subjects were involved in this research.
Originality and plagiarism
The submitted article is original and has not been submitted to any other journal.
Consent for publication
All the authors agreed to publish the content.
Competing interests
There was no conflict of interest in the publication of this content.
Data availability
All the data of this manuscript are included in the document. No separate external data source is required. If anything further is required, it can be obtained by contacting the corresponding author through the official corresponding email.
2,930.2
2020-09-01T00:00:00.000
[ "Chemistry", "Environmental Science", "Medicine" ]
A Structural Algorithm for Complex Natural Languages Parse Generation
In artificial intelligence, the study of how humans understand natural languages is cognition-based, and such science is essential in the development of modern embedded robotic systems. Such systems should have the capability to process natural languages and generate meaningful output. Unlike machines, humans have the ability to understand a natural language sentence due to an inborn facility, which they use to process it. Robotics requires appropriate parse systems to be developed in order to handle language-based operations. In this paper, we present a new method of generating parse structures for complex natural language using algorithmic processes. The paper explores the process of generating meaning via parse structure and improves on existing results using a well-established parsing scheme. The resulting algorithm was implemented in Java, and a natural language interface for parse generation is presented. The results further show that tokenizing sentences into their respective units affects the parse structure in the first instance and the semantic representation at the larger scale. Efforts were made to limit the rules used in the generation of the grammar, since natural language rules are almost infinite depending on the language set. Keywords—Natural Language; Syntax; Parsing; Meaning Representation
I. INTRODUCTION
Natural languages [1,2] are used in our everyday communication. They are commonly referred to as human languages. Humans are able to process natural languages easily because they have been their basic languages of communication since birth. The human system has the capability to learn and use such languages and improve on them over time. Recently, there has been renewed effort in developing systems that emulate humans, due to increased service-rendering requirements, including several efforts in [3].
A major factor to be considered in such systems is that they must have the capability to act like humans. This includes the ability to process human speech (speech recognition, an area that has received great research attention) in such a way that the system can receive speech signals, convert them into text, process the text, and provide a response to the user. The user is obviously more comfortable using his or her natural language to present such speech. However, natural language is a very complex language due to the high level of ambiguity existing in it. This is one of many factors; others include the availability of a large set of words in several unstructured orders. Thus, to make a functional system, these issues must be clearly addressed. Processing natural languages involves the concepts of interpretation and generalization [4]. Interpretation involves understanding the natural language, while generalization, which follows interpretation, handles the representation of the interpreted language. The process of representation will only be functional if the language of presentation is understood by the system. In understanding such languages, several stages of operations are involved. They include morphological analysis (how words are built from morphemes; a morpheme is the smallest meaningful unit in the grammar of a language), chunking (breaking down sentences into words known as tokens; a token is a symbol regarded as an individual concrete mark, not as a class of identical symbols; chunking is a popular alternative to full parsing), syntactic analysis (analyzing the sentences to determine if they are syntactically correct) and semantic analysis (looking into the meanings). One can appreciate the importance of representation in morphemes, as stated above, using the following example. Consider the word "Unladylike". This word consists of three morphemes and four syllables. The morphemes break down as: un- 'not', lady '(well behaved) female adult human', like 'having the characteristics of'. None of these morphemes can be broken up any more without losing all the meaning the word is trying to convey. Lady cannot be broken up into "la" and "dy," even though "la" and "dy" are separate syllables. Note that each syllable has no meaning on its own. Thus, our representational framework can be determined by the morphology existing in any given word. This process can be manually interpreted, but as the set of terms to be considered increases, the manual interpretation has greater tendencies to fail. Thus, an appropriate scheme is to introduce algorithms that can handle such complex representation of natural language in a way that the appropriate parse needed for machine translation of natural language can be generated. Such an algorithm generates syntactic structures for natural language sentences by producing a correct syntactic analysis of any given sentence, whose output is the syntactic structure represented by a syntax tree. The syntax tree shows how words build up to form correct sentences. Children learn language by discovering patterns and templates. We learn how to express plural or singular and how to match those forms in verbs and nouns. We learn how to put together a sentence, a question, or a command. Natural language processing assumes that if we can define those patterns and describe them to a computer, then we can teach a machine something of how we speak and understand each other. Much of this work is based on research in linguistics and cognitive science. A sentence then has to be
parsed for syntactic analysis. Thus the need for an appropriate algorithm that can handle the parsing of complex natural language sentences. In this paper, the discussions above were considered and we present an algorithm using the UML (Unified Modeling Language) to parse natural language sentences. This model depicts various aspects of the algorithm, which include:
- An association diagram that shows the major components in our system and how they associate with one another.
- A dependency diagram that shows how each component depends on the others in order to be able to carry out its own work.
- A class diagram that depicts each component in terms of classes, showing its members and methods.
- A pseudocode showing the major steps involved in each component.
- A scalable interface showing the implementation of the algorithm, which was developed and tested to determine the level of correctness of the output.
II. BACKGROUND AND EARLIER WORK
Natural Language Processing (NLP) is the capacity of a computer to "understand" natural language text at a level that allows meaningful interaction between the computer and a person working in a particular application domain [5]. Natural language processing involves the use of many tools that are essential for developing a human-like machine. These tools include programming languages such as Prolog, ALE, Lisp/Scheme, and C/C++. The tools are formulated within defined concepts using statistical methods (Markov models, probabilistic grammars, text-based analysis) and abstract models such as context-free grammars (BNF), attribute grammars, predicate calculus and other semantic models, and knowledge-based and ontological methods [6]. In this paper, we focus on the generation of an appropriate parse structure for any natural language sentence. This is considered a major step in the natural language research domain. Syntactic analysis mainly involves the structure of a given natural language sentence, retrieved in a structural manner together with the rules for forming sentences and the words that make up those sentences. This also includes the grammar and lexicon. It involves morphology, that is, the formation of words from stems, prefixes, and suffixes. Syntactic analysis shows the legal combinations of words in a sentence. Generating syntactic structure involves the use of a grammar, that is, the rules for forming correct sentences. Natural languages have to be parsed to obtain the syntactic information encoded in them. But natural language is ambiguous, which necessitates the use of an algorithm. This structure presents the analysis of a sentence by showing how words combine correctly to form valid phrases and how these phrases legally build up sentences. A parsing algorithm operates based on a set of rules known as a grammar that tells the parser the valid phrases and words in a sentence. The ambiguity of natural languages leads to complex analysis, so it is more suitable to use a parsing algorithm in situations where the natural language sentence is complex. In such cases, a sentence generates multiple parse trees for the same natural language input. As natural language understanding improves, computers will be able to learn from the information online and apply what they have learned in the real world. Combined with natural language generation, computers will become more and more capable of receiving and giving instructions. Ambiguities are a problem because they can lead to two or more different interpretations of the
same word. They are often part of the subconscious knowledge, so requirements writers will not necessarily recognize these potential sources of misunderstanding. There are different kinds of ambiguity. Lexical ambiguity refers to single expressions that may be reasonably interpreted in more than one way. The study of natural language processing also incorporates other fields such as linguistics and statistics. The knowledge of linguistics provides the grammars and vocabularies needed, and the knowledge of statistics provides the mathematical models that algorithms for processing natural languages use. Various algorithms have been developed in the past for natural language processing, and more algorithms are currently under development to solve more natural language processing problems. In 1950, Alan Turing [7] proposed the "Turing test" in his famous article "Computing Machinery and Intelligence". The Turing test is used to assess the ability of computer systems to impersonate humans. In 1954, the Georgetown experiment took place, involving the fully automatic translation of more than sixty Russian sentences into English. In addition, in the 1960s, some successful natural language processing systems were developed. These systems notably include ELIZA [8,9] and SHRDLU [10], a system that works in a restricted blocks world with restricted vocabularies and can be used to control robotic arms. Many programmers began to write conceptual ontologies in the 1970s; these structure real-world information for computer systems. Up to the 1980s, most natural language processing systems were based on complex sets of hand-written rules. Furthermore, in the 1980s [4], the first statistical machine translation systems were developed. At this time, there was a great revolution in natural language processing with the introduction of machine learning algorithms for language processing. This was a result of the increase in computational power resulting from the application of Moore's law [11]. Natural Language Processing (NLP) is an area of research and application that explores how computers can be used to understand and manipulate natural language text or speech to do useful things. NLP researchers aim to gather knowledge on how human beings understand and use language so that appropriate tools and techniques can be developed to make computer systems understand and manipulate natural languages to perform the desired tasks. Statistical methods are used in NLP for a number of purposes, e.g., for word sense disambiguation, for generating grammars, and for parsing. At the core of any NLP task there is the important issue of natural language understanding. The process of building computer programs that understand natural language involves three major problems: the first relates to the thought process, the second to the representation and meaning of the linguistic input, and the third to world knowledge. Thus, an NLP system may begin at the word level, to determine the morphological structure and nature (such as part-of-speech, meaning, etc.)
of the word, and then may move on to the sentence level, to determine the word order, grammar, and meaning of the entire sentence, and then to the context and the overall environment or domain. A given word or sentence may have a specific meaning or connotation in a given context or domain, and may be related to many other words and/or sentences in the given context. Automatic text processing systems generally take some form of text input and transform it into an output of some different form. The central task for natural language text processing systems is the translation of potentially ambiguous natural language queries and texts into unambiguous internal representations in which matching and retrieval can take place. Masaru Tomita (1984) [3] proposed that "When a parser encounters an ambiguous input sentence, it can deal with that sentence in one of two ways. One way is to produce a single parse which is the most preferable. Such parsers are called one-path parsers. On the other hand, parsers that produce all possible parses of the ambiguous sentence are called all-paths parsers". A suitable parser for parsing natural languages is one that generates several parses or parse trees for a natural language sentence, because a sentence with one surface syntax can have different meanings. NLP systems, in their fullest implementation, make elegant use of this kind of structural information. They may store a representation of either of these sentences, which retains the fact that Chelsea beat Benfica or vice versa. They may also store, not only the fact that a word is a verb, but the kind of verb it is. One-path parsers are, naturally, much faster than all-paths parsers because they look for only one parse. There are, however, situations where all-paths parsers should be used (a brief sketch of such a parser is given after the list below). MLR is an extension of LR. The LR parsing algorithm, however, has seldom been used for natural language processing, because the LR parsing algorithm is applicable only to a small subset of context-free grammars, and usually it cannot apply to natural languages. Though the efficiency of the LR parsing algorithm is preserved, the MLR parsing algorithm can apply to arbitrary context-free grammars, and is therefore applicable to natural languages. We cannot directly adopt the LR parsing technique for natural languages because not all context-free phrase structure grammars (CFPSGs) can have an LR parsing table. Only a small subset of CFPSGs, called LR grammars, can have such an LR parsing table. No ambiguous grammar is LR, and since natural language grammars are almost always ambiguous, they are not LR; therefore we cannot have an LR parsing table for natural language grammars. Liddy (1998) and Feldman (1999) [5] suggest that in order to understand natural languages, it is important to be able to distinguish among the following seven interdependent levels that people use to extract meaning from text or spoken languages:
- Phonetic or phonological level, which deals with pronunciation
- Morphological level, which deals with the smallest parts of words that carry a meaning, and suffixes and prefixes
- Lexical level, which deals with the lexical meaning of words and parts-of-speech analyses
- Syntactic level, which deals with grammar and the structure of sentences
- Semantic level, which deals with the meaning of words and sentences
- Discourse level, which deals with the structure of different kinds of text using document structures
- Pragmatic level, which deals with the knowledge that comes from the outside world, i.e., from outside the contents of the document
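The all-paths behaviour referred to above can be illustrated with a chart parser. The following Python sketch uses the NLTK library (assumed installed); the toy grammar and sentence are ours, chosen to exhibit a classic prepositional-phrase attachment ambiguity, and are not taken from the paper.

import nltk

# A toy grammar in which "with the telescope" can attach to the verb or the noun.
grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> 'I' | Det N | Det N PP
VP -> V NP | V NP PP
PP -> P NP
Det -> 'the'
N -> 'man' | 'telescope'
V -> 'saw'
P -> 'with'
""")
parser = nltk.ChartParser(grammar)   # an all-paths parser: it enumerates every parse
for tree in parser.parse("I saw the man with the telescope".split()):
    print(tree)                      # prints two distinct parse trees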
The above justification seems sufficient for the development of an appropriate model for implementing an algorithm for parse generation of natural language sentences.
III. MODEL
We present a model to show the major components in our algorithm and how they interact in order to generate effective parse results for complex natural language sentences. To parse a natural language sentence (syntactic analysis), the most important things to consider are:
- The parser (the algorithm)
- The set of grammars for the language (the rules of correct syntax)
Based on our model, the major components used are: the tokenizer, which breaks down a given sentence into words usually known as tokens; the part-of-speech tagger, whose function is to tag each word with its respective part of speech; the parser, which analyses the sentence to check whether it conforms to a set of grammars (English grammar) for the language of the input sentence; and finally the parse tree, which represents the structure of the natural language sentence graphically. The UML association diagram is necessary to visualize the association between the classes. The UML class diagram is used to visually describe the problem domain in terms of types of objects (classes) related to each other in different ways. There are three primary inter-object relationships: association, aggregation, and composition. Using the right relationship line is important for placing implicit restrictions on the visibility and propagation of changes to the related classes. Following from the above, the formulated pseudocode for the tokenizer, rendered here as compact Java for concreteness, is:

class Tokenizer {
    String sentence;      // the sentence obtained from the user
    String[] tokens;      // the sentence broken into words
    int tokenLength;      // the number of words in the sentence

    void tokenize(String input) {
        sentence = input;
        tokens = sentence.trim().split("\\s+"); // break sentence into word tokens
        tokenLength = tokens.length;            // count the words in the sentence
    }
}

Figure 2 shows another run of the parser interface, where the rule implements the sentence format "noun verb gerund". If a user enters an invalid sentence or a sentence whose grammar is not present in our list of grammars, the system outputs an error message. The algorithm presented in this paper can be extended based on the required complexity of the system. The defined process for tokenization and the use of a natural language interface in solving parse generation have shown the effectiveness of a well-posed algorithm in solving the natural language parse generation problem. An extension of this work lies in its ability to generate one parse tree even when the observed ambiguity is high. Parsing natural language is a complex task; this work has shown that an efficient parsing algorithm is a necessary tool even within the inherent complexity observed. An extended form of the LR parsing algorithm, though not discussed in this paper, would also be an efficient algorithm, as it would generate multiple parses for natural language sentences. Such an approach is similar to multiple applications of the algorithm presented in this paper and can be called the MLR parsing algorithm.
3,956.4
2013-01-01T00:00:00.000
[ "Computer Science" ]
Si/SiC heterojunction optically controlled transistor with charge compensation layer
A novel n-SiC/p-Si/n-Si optically controlled transistor with a charge compensation layer is studied in this paper. The performance of the device is simulated using Silvaco Atlas tools, which indicates excellent performance of the device in both the blocking state and the conducting state. The device also has good switching characteristics, with a rise time of 0.54 μs and a fall time of 0.66 μs. With the charge compensation layer, the breakdown voltage and the spectral response intensity of the device are improved by 90 V and 33 A/W, respectively. Compared with an optically controlled transistor without a charge compensation layer, the n-SiC/p-Si/n-Si optically controlled transistor with a charge compensation layer has better performance.
Introduction
As electronic products find wide application, electromagnetic interference (EMI) becomes a critical issue for safety and reliability in power electronic systems. Photoelectric isolation is an effective way to resolve the electromagnetic interference problem [1]. Although Si-based light-controlled transistors have been developed for many years, their physical properties have obvious limitations in high-temperature, high-frequency and high-power applications. With the localization of large-size silicon carbide (SiC) wafers and the maturing of high-quality 6H-SiC and 4H-SiC epitaxial technology, SiC power devices have set off an upsurge of research. However, SiC is not optically active in the near-IR wavelength range, where light sources are readily available for optical communication [2]. In order to realize light activation of SiC power switching devices, a hybrid approach that combines a silicon photo-receiver module with a SiC power Darlington transistor has been proposed [3]. In this hybrid Si-SiC device, the Darlington transistor is activated by the photocurrent output from an individual Si photodiode. Therefore, the EMI problem may still exist because the direct input of the Darlington is actually an electronic signal. To prevent the EMI problem absolutely, we proposed light-activated SiC transistors in earlier work [4,5], in which, by using the narrow-bandgap material SiCGe as the base layer, the device realizes light-activated power switching for common light sources. In this paper, a novel optically controlled SiC power transistor structure based on the 4H-SiC/Si heterojunction and a charge compensation layer is proposed to improve the spectral response range and efficiency of the device. By employing the device simulator Silvaco Atlas, we have simulated the performance of this device based on the SiC/Si heterojunction and the charge compensation layer.
Device structure
The structure schematics of the n-SiC/p-Si/n-Si heterojunction optically controlled transistor with and without the compensation layer are shown in figure 1. The emitter electrode and collector electrode are located at the left and right sides, respectively. The optical window between the emitter and collector is passivated by Si₃N₄ to reduce the reflectivity of light. Heavily doped 4H-SiC is used as the emitter region material. The emitter region of the device is 3.0 μm in thickness and 2.0 μm in width. The n-SiC/p-Si heterojunction in the device, working as the emitter junction, can improve the injection efficiency of the emitter and reduce the energy of the triggering light. To widen the light response range, silicon is used as the material of the base region and the collector region. The base region is 3.0 μm in thickness and 2.0 μm in width.
The base current is supplied by the photocurrent of the collector junction. The width of the collector region is 17 μm. In order to improve the light absorption efficiency, the thickness of the collector region must be greater than the light absorption depth. As shown in figure 1(b), the thickness and doping concentration of the compensation layer under the silicon collector region should be designed based on the charge-balance theory. Under the effect of the compensation layer, there is a two-dimensional electric field in the silicon collector region. The two-dimensional electric field can help improve the blocking voltage and spectral responsivity of the transistor. To avoid charge imbalance caused by substrate-assisted depletion, semi-insulating SiC is used as the substrate. Simulation and discussion In order to achieve realistic results, several important material parameters used in Silvaco Atlas were adjusted to obtain the closest agreement with published material data for 4H-SiC [6][7][8]. The major parameters used in the paper are shown in table 1. When the device is in the blocking condition, the applied forward blocking voltage drops mainly across the collector junction. As the doping concentration of the collector region is lower than that of the base region, the depletion layer broadens mainly into the collector region. The three-dimensional electric field distributions of the optical transistors in the blocking condition, both with and without the compensation layer, are shown in figure 2. Comparing the electric fields of the two devices in figures 2(a) and 2(b), the field is distributed more uniformly in figure 2(b). Due to the charge compensation layer, the depletion region in figure 2(b) broadens in two directions. The charge compensation layer helps extend the collector junction in the horizontal direction, which helps form a more uniform electric field in the vertical direction. As the device with the charge compensation layer is designed according to the charge-balance theory (qN_D = qN_A), the 2D Poisson equation can be written as

dE_x/dx + dE_y/dy = rho/eps_s (1)

where rho is the charge density. With the net charge in the fully depleted drift region compensated to approximately zero, the electric field in the drift region can be written as

E ~ constant (2)

which means the drift region is completely depleted and the distribution of the electric field is more uniform. Since the two-directional electric field of the optical transistor with the charge compensation layer is more uniform, the breakdown voltage is improved; the simulated breakdown characteristics are shown in figure 3. Owing to the charge compensation layer, the uniformly distributed electric field makes the breakdown voltage of the transistor with the compensation layer higher than that of the transistor without it. The breakdown voltage of the transistor with the charge compensation layer is also more stable when the doping concentration of the collector region changes from 5x10^14 cm^-3 to 1x10^16 cm^-3. From the simulation results, the spectral responses of the optically controlled transistor under a 5 V bias voltage, with and without the charge compensation layer, are shown in figure 4. The device with the charge compensation layer has a higher spectral response than the device without it. With the charge compensation layer, the peak value of the optically controlled transistor's spectral response is improved to 130 A/W at a wavelength of 550 nm. As silicon is used as the light absorption material, the response range covers essentially the whole visible spectrum and extends into the near infrared, from 400 nm to 1100 nm.
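As a concrete illustration of the charge-balance rule qN_D = qN_A and of why a flat field profile helps, consider the following sketch. All numerical values (pillar doping, layer thicknesses, critical field) are assumptions chosen for illustration, not the device parameters of this paper, and the breakdown estimates use the textbook idealization of a uniform versus a triangular field profile.

# Minimal sketch of the charge-balance design rule for the compensation
# layer, plus an idealized illustration of why a flat field profile
# raises the breakdown voltage. All numbers are illustrative assumptions,
# not parameters from the paper.

E_CRIT = 3.0e5         # assumed critical field of Si (V/cm), order of magnitude
W_DRIFT_CM = 17e-4     # 17 um collector (drift) region, from the paper

# Charge balance q*N_D*t_n = q*N_A*t_p: pick the compensation-layer
# doping for an assumed pillar doping and assumed layer thicknesses.
N_D = 1.0e15           # assumed collector doping (cm^-3)
T_N_CM = 2.0e-4        # assumed collector pillar thickness (cm)
T_P_CM = 0.5e-4        # assumed compensation-layer thickness (cm)
N_A = N_D * T_N_CM / T_P_CM
print(f"compensation-layer doping for charge balance: {N_A:.2e} cm^-3")

# Idealized breakdown estimates: a uniform field sustains E_crit over the
# whole drift region, a triangular (one-sided junction) profile only over
# half of it on average.
v_uniform = E_CRIT * W_DRIFT_CM
v_triangular = 0.5 * E_CRIT * W_DRIFT_CM
print(f"ideal uniform-field breakdown:    {v_uniform:6.0f} V")
print(f"ideal triangular-field breakdown: {v_triangular:6.0f} V")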
Spectral responsivity depends on several inherent factors including the material refractive index, the absorption coefficient, the device structure, the doping and the operating temperature. External factors such as the bias voltage and the power of the incident light also influence the spectral responsivity. Diffusion is the main transport mechanism of photogenerated carriers in the neutral region, but drift in the depletion region is the main process producing the photocurrent. The structure schematic of the light absorption layer in the device with the charge compensation layer is shown in figure 5. The photogenerated carrier density is a function of the penetration distance of the incident light. The numbers of photogenerated carriers in the vertical and horizontal directions can be calculated respectively as

N_v = P(1-R)/(h nu) x (1 - exp(-alpha W_v)), N_h = P(1-R)/(h nu) x (1 - exp(-alpha W_h)),

where P is the incident power, h is the Planck constant, nu is the incident frequency, alpha is the absorption coefficient of the material, R is the reflectivity and W_v, W_h are the widths of the vertical and horizontal absorption layers. The responsivity of the device with a 2D electric field in the drift region can then be written as

S_2D = q beta (N_v + N_h)/P,

where beta is the gain of the transistor. The responsivity of the device with a one-dimensional electric field in the drift region is

S_1D = q beta N_v/P.

The responsivity of the device with a 2D electric field in the drift region is obviously higher than that of the device with a one-dimensional electric field in the drift region. The reason is that the two-dimensional electric field formed by the charge compensation layer creates an additional light absorption layer in the horizontal direction. The light absorption layer in the horizontal direction improves the spectral response of the optically controlled transistor. To demonstrate the turn-on characteristics of the device with the charge compensation layer, the I-V characteristic curve of the device was simulated. The wavelength of the incident light is 550 nm and the light power density is varied from 0 W/cm2 to 1 W/cm2. The optical characteristics of the 4H-SiC/Si heterojunction optically controlled transistor with the charge compensation layer for different incident light power densities are shown in figure 6. When the power density of the incident light is 0 W/cm2, the device is in the blocking state. The device cannot be triggered on unless the power density of the incident light is larger than a critical value. The photocurrent triggered in the device increases with the incident power density: light with a higher power density triggers a higher photocurrent, which supports a higher base current. As the gain of the transistor is a constant value, increasing the photocurrent is the only way to obtain a higher collector current. Conclusion The optically controlled transistor with a charge compensation layer based on the 4H-SiC/Si/Si heterostructure has a sufficiently wide spectral response range and sufficiently high responsivity to visible light. The charge compensation layer markedly improves the breakdown voltage and spectral responsivity of the optical transistor. The peak value of the spectral response is 130 A/W at a wavelength of 550 nm. The excellent optical and electrical performance of the Si/SiC heterojunction optically controlled transistor makes it attractive for the development of SiC-based optically controlled power devices.
2,158.8
2016-01-01T00:00:00.000
[ "Physics" ]
evALLution: making basic evolution concepts accessible to people with visual impairment through a multisensory tree of life People with visual impairment have benefitted from recent developments in assistive technology that aim to decrease socio-economic inequality. However, access to post-secondary education is still extremely challenging, especially in scientific areas. The under-representation of people with visual impairment in the evolution research community is connected with the vision-based communication of evolutionary biology knowledge and the accompanying lack of multisensory alternatives for learning. Here, we describe the development of an inclusive outreach activity based on a multisensory phylogeny representing 20 taxonomic groups. We provide a tool kit of materials and ideas that allows both the replication of this activity and the adaptation of others to include people with visual impairment. Furthermore, we provide activity evaluation data, a discussion of the lessons learned and an inclusive description of all figures and visual data presented. The baseline data presented show that people with visual impairment indeed lack access to education but are interested in, and apt to understand, evolutionary biology concepts and to predict evolutionary change when education is inclusive. We show that, with creative investment, basic evolutionary knowledge can readily be transmitted through multisensory activities, from which everyone can benefit. Ultimately, we hope this case study will provide a baseline for future initiatives and a more inclusive outreach community. Visual impairment can become a physical barrier to the individual's learning experience due to the lack of multisensory alternatives to widespread learning activities (Salleh and Zainal 2010). Such lack of accessibility to knowledge is especially present in the fields of science, technology, engineering and mathematics (STEM), where instruction relies heavily on graphically conveyed information. Efforts to develop assistive technology and promote the inclusive education of people with visual impairment are significant in some scientific areas (Cryer 2013), like chemistry (e.g. Fantin et al. 2016; Garrido-Escudero 2013; Supalo et al. 2008; Supalo and Kennedy 2014) and physics (e.g. Arcand et al. 2019; Ediyanto and Kawai 2019). However, the biological sciences seem to lag behind (but see Jones et al. 2006). More specifically, the teaching of evolution relies on visual media as its primary communication mechanism for conceptual understanding. Classical evolution case studies commonly used in formal education and outreach activities, such as beak morphology evolution in Darwin's finches (Grant and Grant 2002) or industrial melanism in the peppered moth (Cook et al. 2012), are based on phenotype-environment associations and selective forces that demand a priori understanding of variability in shape, size and colour traits. All of these are visually conveyed characteristics, which are hard to grasp for people with reduced vision and inaccessible to people born blind. The understanding of natural patterns is further compromised by the fact that the research behind such case studies is communicated through 2-dimensional tables, plots and diagrams, available solely on screen or paper, all of which are particularly challenging to access for those with severe vision impairment (Karshmer and Bledsoe 2002; McCarthy and Shevlin 2017).
Consequently, although teenagers with visual impairment show high interest in STEM areas, their motivation to pursue a career in such areas is reduced by the barriers they encounter while trying to learn (Bell and Silverman 2018). In fact, only one percent of STEM doctorate recipients report any sort of disability (data for the U.S. population; Moon et al. 2012). It is thus clear that there is an urgent need for improved accessibility to scientific knowledge in order to promote equity in education and a more diverse and inclusive scientific community. Outreach activities that transmit knowledge with a multisensory approach can be an important first step towards that end (Pérez-Montero 2019) and are known to benefit both the audience and the scientific community (Clark et al. 2016). Our project aims at contributing to equity in access to evolutionary biology knowledge by eliminating physical barriers to the understanding of the basic mechanisms of evolution and the resulting biodiversity pattern. We here describe the development of a multisensory phylogeny, designed as an introduction to basic concepts in evolution for people with severe visual impairments. We provide a tool kit that enables the replication of this activity, together with guidelines that can be adapted and applied to several other outreach initiatives. We propose a two-step rationale for approaching inclusive evolution teaching: the public needs to first (I) experience the pattern of biodiversity so that we can then (II) discuss the processes that led to such diversification. In addition to the activity design, baseline data on the evaluation of the activity are presented. Reproducibility framework Touch as the main sense of communication raises challenges: natural history collections are usually too unique or fragile to be freely manipulated, live specimens pose animal welfare concerns, and commercially available models can be inaccurate and do not portray the real and detailed textures and patterns of biodiversity. Our activity was developed over a whole year, which encompassed a great deal of communication with the blind community, psychologists, science communicators, museologists, evolutionary biologists and pedagogic institutions. Based on the experience acquired from that process and the results of our activity, we here provide a theoretical framework for organizing similar activities (Fig. 1). There are three main steps to conveying evolution's patterns and processes to a public with visual disability: (1) make biodiversity accessible, (2) give an evolutionary and ecological context to the displayed biodiversity and (3) discuss the processes and evolutionary forces through which that biodiversity evolved. 1. First, we want to convey biodiversity as the resulting pattern of evolution. Sighted people can easily grasp the diversity of living beings and ecosystems through images; in order to include people with blindness we have to represent as much biodiversity as possible in an inclusive way. For this, communication with the blind community and creativity are essential. Sighted people tend to use auditory explanatory cues to convey information. However, not only does this create an immediate barrier for people with hearing disabilities, but our consultant from the blind community was quick to explain that touch had to be the main sense used.
He made it clear to us with the following example: "If you tell me that a zebra is a horse with black and white stripes, and I was born blind, I will most likely not know what horses look like; stripes and colors might also be concepts that I do not understand". Thus, all collected materials should have the potential to be touched and then be accompanied by auditory instruction from the teaching volunteers. To include people with hearing loss, the instruction should also be translated into sign language. Collections such as mollusk shells and fossils are easily obtainable in great number and diversity; they were among the resources museums were most willing to lend. The shape diversity within the phylum is accessible through touch, but the diversity of color patterns needs to be translated into haptic cues. When posed with such challenges we recurrently used hot glue to trace the pattern we wanted to make accessible. Hot glue is a great resource for turning 2D patterns into 3D (see butterfly wing models in Additional file 1: Fig. S1f; and the mollusk table phylogeny in Fig. 1). Another easily accessible taxon is plants. Gardening centers in general stock a great diversity of plants from around the world that can be easily sourced. We found that, despite it being the most familiar taxon, participants responded very well to the plant branch, spending a lot of time there, and were excited to learn about plants' adaptations to specific environmental conditions. Fig. 1 Basic framework for constructing evolution outreach activities inclusive of people with blindness. Two types of materials are depicted: a collection of generally easy access (mollusk shells) and a classic evolution textbook example of adaptive speciation (the adaptive radiation of cichlids in Lake Tanganyika). Following this framework, all gathered materials can be adapted for inclusive outreach activities, regardless of the number of branches represented and the logistics. The pictures of the mollusk phylogeny on the round table are from a second activity, at an inclusive school, where the available room was significantly smaller than the original 125 m2 space; thus fewer floor-phylogeny branches (8 instead of 21), with more activities per branch, were included. 2. Then, to approach biodiversity and evolution, the collected materials should be displayed in an evolutionary context. For this, the specimens representative of the main branches can be displayed in several informative ways: (a) following the phylogeny of the species or orders depicted (e.g., mollusks organized as Monoplacophora, Bivalvia and Gastropoda; Fig. 1, left); (b) according to niche within an environment (e.g., Lake Tanganyika cichlids were arranged according to the depth (height) and substrate (sand, stones) they inhabit in the lake; Fig. 1, right); or (c) across different environments (e.g., plants were organized by climate regime including desert, tropical forest, Mediterranean forest and taiga; Additional file 1: Fig. S1 t-w). These meaningful displays, together with oral pedagogic information, then allow participants to become mindful of important concepts such as speciation, shared characters and evolutionary novelties, which in turn inform their understanding of phylogenetic patterns and common ancestry. Specific adaptations, such as mouth position in benthic versus limnetic fish, or leaf shape and texture depending on climate, are also a great way to introduce adaptation and natural selection.
In sum, any material gathered can be used in a meaningful way, whether a classic textbook example or commonly found specimens. The activity is thus completely adaptable to the available material and space, since all phylogenies can be simplified to have more or fewer branches, and activities can be designed for the participants to spend more or less time in contact with each branch (Fig. 1). Multisensory tree of life toolbox We provide a list of all materials used to represent taxa across the 20 branches (Additional file 2: Table S1), specifying which ones belonged to pedagogic collections from research institutes and education institutions such as museums or aquaria. Photographs of all branches and the material display are also available (Additional file 1: Figure S1). All printable 3D models developed from scratch, by scanning real specimens, are available in their final form at MorphoSource. The tree topology was based on reference phylogenies comprising the taxa of interest (Field et al. 2014; Hedges et al. 2015; Dos Reis et al. 2015) and on the interactive phylogeny OneZoom Tree of Life Explorer (Rosindell et al. n.d.). At each branch a volunteer educator provided information to the participants while assisting them in the exploration of the branch-specific materials. Prior to the activity, the educators were provided with scripts containing information on what the branch-specific material illustrates, how to guide people with visual disability in touching the materials and, for the branches where data collection was conducted, the branch-specific activity questions (see Additional file 4: Branch exercises). The scripted branch-specific questions that were given to participants for data collection on the predictability of evolution and on basic evolution concepts, like adaptation and natural selection, are provided (see methods sections below; Additional file 4: Branch exercises). To allow for a general perception of the room display and to guide the participants with visual impairment independently through the exhibition room, we designed individual haptic hand-maps. These consisted of a blueprint of the room drawn in hot glue on a thin wood plate, with 3D information on the phylogenetic path on the floor together with the blueprint of the table display. While this resource was not used in our implementation, since all participants with visual impairment preferred to be guided through the activity by a staff member or by their accompanying sighted person, we think that in other contexts, such as museum exhibitions, it might be an inclusive resource that allows the visitor to explore the space independently. Aspects important to take into account when building a multisensory phylogeny include the fact that partial specimens (like teeth or fur) should always be accompanied by a full model of the organism, to make sure that people with visual disability can locate the specific material and make sense of it. Such models, like detailed toy animals, provide a general sense of scale that can help convey certain biodiversity patterns, e.g. a lynx is bigger than a house cat, as are its teeth, skull and footprints, despite the shapes of those structures being extremely similar. The logistics of the room are also very important. In our case, there was not enough space for the echinoderm and amphibian branches to reach the periphery of the phylogeny. These two branches were thus shorter (Fig.
2), which is not optimal, as it might inadvertently convey that these taxa are not extant species or that they are somehow 'less evolved' than those at the other branch tips. Another aspect to take into account is that, by placing mammals, and especially hominids, towards the exit and at the top of the room, we might involuntarily contribute to the wrong notion, so often reinforced by images, that evolution is a linear process towards humanization. However, it is worth noting that, in our case, different participants followed completely different routes along the phylogeny, as the movement was mainly dictated by which educators were available at the time of each branch change. Prior to the MSToL (multisensory tree of life) activity, the volunteer educators received information from the in-house psychologist at the Portuguese Association for the Teaching of the Blind (APEC, Associação Promotora do Ensino dos Cegos) on effective communication with people with visual impairment and on the basics of assisting and orienting people with visual impairment. Participant data collection In order to evaluate our activity, data were collected from 25 participants with visual impairment (15 women and 10 men) and 23 sighted participants (17 women and 6 men; Additional file 3: Table S2). All data were collected in loco at the Portuguese Association for the Teaching of the Blind (APEC) on the day of the outreach activity (12 March 2019), before and after participation, following the questionnaire provided (Additional file 5: Questionnaire). The questionnaire included personal data, a true-or-false exercise and a word association exercise. Both exercises were scored to allow us to assess and compare the participants' knowledge of basic concepts of evolution before (basal knowledge) and after the activity. We recorded, for each participant, demographic data (age, gender) and also their education level, interest in biology/evolution and visual capacity. The education level of the participants was coded according to the Portuguese education system as follows: Primary School (PS), 1st to 4th grades; Middle School (MS), 5th to 9th grades; High School (HS), 10th to 12th grades; bachelor's degree (B); and master's degree (M). B and M are referred to as 'post-secondary education' throughout the manuscript. The level of visual impairment of the participants was assessed by asking each individual which one of the following 4 levels they identified with: 'No disability', which includes people with glasses correcting to standard vision levels; 'Moderate vision loss', which refers to people with low vision but with perception of shapes and colors (includes people with corrective lenses); 'Deep vision loss with residual light perception', which refers to people with extremely low vision who can still perceive some light variation; and 'Profound vision loss without any light perception', which refers to people who cannot perceive any visual cues. Throughout the manuscript, when we refer to 'people/participants with visual impairment' we are referring to the whole spectrum of visual impairment, specifically defined above for our panel of participants. When we refer to 'people/participants who are blind' we refer to people who identify with the fourth group described, profound vision loss without any visual cues. We additionally recorded how long each person had lived with visual impairment.
Data were also collected, through an online questionnaire, from 15 out of 24 volunteer educators to assess their emotional response to the activity and to understand whether the inclusive activity was mutually beneficial for participants and educators. The resulting data allowed us to (1) generally evaluate the effectiveness of the designed haptic activities, (2) establish baseline data about evolution knowledge and interest within a subset of the Portuguese community with visual impairment, (3) determine whether participants enjoyed the activity, and (4) begin to determine how such activities might improve the learning of evolution. Data collection was approved by the University of Bath, Department of Psychology, Research Ethics Committee (code 17-273). All participants provided informed consent prior to participating, participated voluntarily and were informed of their right to withdraw participation at any point during data collection. All data collected are presented anonymously in Additional file 3: Table S2. Branch-specific exercises In nine of the 21 branches, participants were asked branch-specific questions (Additional file 4: Branch exercises). These questions related to the haptic materials available on the table and were primarily exercises in predicting phenotypic change, or adaptation scenarios involving environmental change. In total, there were 29 questions focused on organisms' evolutionary responses to certain environmental changes, designed to assess the participants' understanding of adaptation, fitness, environment-phenotype associations, gradualism and natural selection, and their ability to predict evolutionary change. Participants were not given answer options. They were asked the scripted questions (Additional file 4: Branch exercises) and the volunteer educator scored their answers on the record sheet according to the level of evolutionary thought: answers in the 'maybe'/'I don't know' category were scored as 0; if the answer was not the known outcome of the evolutionary process but involved plausible evolutionary outcomes (extinction, mutation), it was scored as 1; if the answer took into account natural selection and adaptation, it was scored as 2, with one extra point for the four questions in which gradualism or selection strength was considered. Finally, if the answer was in the 'nothing changes' category and did not consider any evolutionary outcome, it was scored as -1. Since not all participants answered all branch-specific questions, the standardized branch score was calculated, for 24 participants with visual impairment and 17 sighted participants, as the sum of individual answer scores divided by the number of questions answered by the participant. The 'prediction score' regards the subset of 18 questions, spanning 6 branches (three questions on the plant branch, three on corals, four on bony fish, three on molluscs, one on hominids and four on turtles), that require the participant to predict the outcome of an environmental change; it was calculated and standardized as described above, for 23 participants with visual impairment and 17 sighted participants.
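The branch-score standardization described above is straightforward to express in code. The following Python function is a hypothetical reimplementation for illustration (the authors' actual analysis was done in R); it assumes each answer has already been coded into the score categories defined above, with None marking unanswered questions.

def standardized_branch_score(answer_scores):
    """Sum of per-question scores divided by the number answered.

    answer_scores: list with entries in {-1, 0, 1, 2, 3} or None
    (3 = natural-selection answer plus the gradualism/selection bonus).
    """
    answered = [s for s in answer_scores if s is not None]
    if not answered:
        return None  # participant answered no questions in this subset
    return sum(answered) / len(answered)

# Example: a participant who answered 5 of 7 questions.
participant = [2, 0, None, 1, -1, None, 3]
print(standardized_branch_score(participant))  # 1.0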
Word association exercise We wanted to know if people were familiar with the scientific terms necessary to understand the basics of the theory of evolution, and to what extent terms that are essential to it, or that might promote its misunderstanding, were commonly associated with the concept of 'evolution'. For this we designed a word association exercise in which participants were read a list of 33 words, one at a time, and upon hearing each one reported whether they instinctively associated the word with evolution, by responding 'True' or 'False' (refer to Fig. 4 for the complete list of words and to Additional file 5: Questionnaire). Words were scored as -1 if the word is usually associated with misunderstanding of the evolutionary process (e.g. perfecting); as 0 if the word is neutral and unnecessary to explain the theory of evolution (e.g. science); as 1 if the word is not necessary to explain the theory of evolution but is related to it (e.g. Darwin); and as 2 if the word is necessary and fundamental to explain and understand evolution (e.g. natural selection). Individual scores were calculated by summing the word scores of a participant. Because we wanted to evaluate instinctive answers, a participant had on average 3 s to provide an answer, and if hesitation was long it was recorded as 'non-association'. When participants declared that they did not know, it was recorded as 'not applicable (NA)'. We recognize that the experimental design of this exercise has flaws: ideally, the number of incorrect terms should be similar to that of the correct and essential ones, and the order in which the words were presented to each participant should have been randomized (all participants were read the same words in the same order). However, we believe that reporting the results might inform future outreach activities, and we thus present them as baseline data and not as a proof of principle. Data analysis All data analyzed for comparisons (mean differences) and correlations were checked for deviations from assumptions using a Shapiro-Wilk test (α = 0.05). If the data distribution did not significantly differ from a normal distribution, parametric versions of the relevant statistics were used; otherwise non-parametric statistics were used (Student's t-test vs. Mann-Whitney U test for mean differences; parametric (Pearson) vs. non-parametric (Spearman) for correlations). Each result is accompanied by the specific test performed in the results section. Data were always standardized for the specific number of participants comprising the dataset. We emphasize that our sample sizes are small, which limits the statistical power of the data, and we thus provide these as explorative baseline data regarding our specific activity, which might inform future research questions on inclusive pedagogy. All analyses and data visualization were performed in R version 3.6.0 (R Core Team 2019) and all raw data and analysis code are available to allow full replication of the analysis (Additional files 3 and 8, respectively). A description of all figures and images presented below, accessible to people with visual impairment, is presented in Additional file 7.
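The assumption-checking logic just described (normality test first, then a parametric or non-parametric test) can be sketched as follows. The original analysis was performed in R; this Python version built on scipy is a hypothetical equivalent for illustration only.

# Hypothetical Python equivalent of the test-selection logic described
# above (the original analysis was performed in R). Requires scipy.
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Compare two samples, choosing the test according to normality.

    Runs Shapiro-Wilk on each sample; uses Student's t-test if neither
    deviates significantly from normality, otherwise Mann-Whitney U.
    """
    normal_a = stats.shapiro(a).pvalue > alpha
    normal_b = stats.shapiro(b).pvalue > alpha
    if normal_a and normal_b:
        return "t-test", stats.ttest_ind(a, b)
    return "Mann-Whitney U", stats.mannwhitneyu(a, b, alternative="two-sided")

# Example with made-up scores for two participant groups.
sighted = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3]
visually_impaired = [1.0, 0.7, 1.4, 1.2, 0.6, 1.1]
test_name, result = compare_groups(sighted, visually_impaired)
print(test_name, result.pvalue)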
The multisensory tree of life for all The multisensory Tree of Life (referred to as 'MSToL' from here on) occupied 125 square meters and was composed of branches representing 21 extant taxa, plus eight fossil species. The majority of materials consisted of real biological samples (Fig. 2; Additional file 1: Fig. S1 for detailed photos per branch; Additional file 2: Table S1 for a list of materials, source, sense stimulated and evolution concepts explored, per branch). All five basic senses (hearing, smelling, touching, tasting, seeing) were stimulated across the phylogeny, with touch being stimulated by all displayed material and thus becoming the main source of information acquisition for people with visual impairment. Each consenting participant (23 sighted and 25 with visual impairment) contributed to data collection through a questionnaire applied before and after experiencing the MSToL room, and through branch-specific quizzes. The resulting data allowed us to generally evaluate the success of the activity in terms of learning, and to establish baseline data on evolution knowledge for a subset of the Portuguese community with visual impairment. Implementation: the hardest and easiest taxa to represent Upon consulting with blind members of the community of the Portuguese Association for the Teaching of the Blind (Associação Promotora do Ensino dos Cegos, APEC), it became clear that touch would be the most inclusive sense with which to explore the phylogeny. Thus, for the first step, allowing participants with visual impairment to assess biodiversity accurately, haptic communication was essential. Because we wanted all participants to experience real biodiversity patterns, we mainly acquired real specimens and biological samples that could be touched. This can, however, raise challenges for the representation of some taxa: museum collections with scientific value are usually unique and fragile, which hinders their free manipulation; live specimens pose animal welfare concerns; and commercially available models can be inaccurate and lack the detail needed to fully comprehend the range of biodiversity patterns. Several of these difficulties applied to the arthropods, which proved to be the most challenging taxon to translate into multisensory communication. We addressed this difficulty by incorporating 3D prints of in-house μ-CT-scanned specimens, insect sounds, edible insects, exuviae and structures built by arthropods, such as hives (Additional file 1: Fig. S1 e to g). Collaboration with education centers with available pedagogic collections was imperative to obtain material that accurately depicts biodiversity (Additional file 1: Fig. S1; Additional file 2: Table S1). The most effective branches, in terms of the ratio of the 'effort to find material' to the 'activities and information that can be extracted from it', were plants, birds and primates. A great variety of plants are commercially available, and they are ideal for developing activities focused on plant-pollinator coevolution and phenotype-climate adaptations (Additional file 1: Fig. S1 t to w). For birds, songs, calls and full specimens of game species are easily obtainable and optimal for activities focused on phenotype-environment adaptations (Additional file 1: Fig. S1 n). The evolution of humans and related primates generally holds particular interest for the public (Pobiner 2012). Anthropological model collections can be expensive but are usually available at universities where anthropology is taught, which can be invited to lend this resource. A collection of hominid skulls is a great resource for discussing human evolution and for understanding common ancestry while dismantling the myth that "Homo sapiens descends from monkeys". While teeth and different types of fur can be very interesting resources for discussing adaptation, without complete spatial or morphological information they can be confusing for participants with visual impairment, especially for those born blind.
To avoid this we made sure that models of the full organism were available for any type of partial specimen, which the participants reported to be extremely useful. Public attendance During the 12 h of activity, we received an estimated total of 100 participants, 60 of whom had visual impairment. We did not restrict the amount of time available to explore the MSToL, allowing the participants to do so at their own pace. Participants, especially those with visual impairment, tended to remain in the room for more than the predicted one hour, with some remaining for as long as four hours. This should thus be a full-day activity at minimum, and ideally a multiple-day activity. A subset of consenting participants responded to a standardized questionnaire which included general, as well as branch-specific, questions (Additional file 5: Questionnaire), administered both before and after experiencing the MSToL room. The sample comprised 25 adults with visual impairment, with an average age of 62 (range: 18 to 82), and 23 sighted adults, with an average age of 58 (range: 24 to 90). A first assessment of the data showed that the majority of participants with visual impairment had profound vision loss without perception of any visual cue, referred to as blindness throughout the manuscript (13/25); only a minority (4/25) had moderate vision loss, with perception of shapes and some colors, and eight out of 25 had deep sight loss with residual light perception (see methods for details of the participants' self-assessment). The majority of participants with visual impairment had lived with the impairment for more than half of their lifetime, with only seven living with it for less than that period. Preferences: touch and birds The sense preferences for experiencing the MSToL were consistent across participants, regardless of visual ability. Touch was listed as the most informative sense by 83.3% of participants with visual impairment and 50% of sighted participants (Additional file 3: Table S2). At the other end, gustation was listed as the least informative sense by 93.7% of participants with visual impairment and 53.3% of sighted participants (Additional file 3: Table S2); it was also the least stimulated sense across the phylogeny (Fig. 2b, Additional file 2: Table S1). Despite an overall scattered preference across taxa, people with visual impairment showed a slight predilection for the bird branch: five out of 19 people with visual impairment chose the birds as their favorite branch (Additional file 3: Table S2). This tendency might have been influenced by the communication skills of the educator responsible for the birds' branch, but it is worth noting that, even for pragmatic reasons, birds are a good taxonomic group in which to invest further, more focused activities. This taxon allows multisensory activities on adaptation and evolution to be designed relatively easily due to the ease of acquiring diverse feather types, the abundant availability of taxidermy specimens with different bill shapes among game species, and the widespread availability of bird song recordings, easily adding an auditory component. Despite lack of access to education, participants with visual impairment are interested in, and understand, evolution We found that participants with visual impairment had a lower level of educational attainment (Fig. 3a).
The majority of participants with visual impairment had not enrolled in high-school education (56%, 14 out of 25), only seven participants with visual impairment had attained post-secondary education, and none held a master's degree. In comparison, the majority of sighted participants had attained bachelor's degrees (59.1%, 13 out of 22; Fig. 3a). Only two of the seven participants with visual impairment (28.6%) with post-secondary education had taken courses in biology and evolution (at the bachelor's degree level), compared to six of the 17 sighted participants (35.3%; Table S2). Interestingly, when asked about their interest in evolution (for the questionnaire scale description refer to Additional file 5: Questionnaire; for individual data refer to Additional file 3: Table S2), a similar percentage of participants from both groups described themselves as having a lot of interest in evolution (28% with visual impairment; 39% sighted). Indeed, two participants with visual impairment shared with the educators that they had wanted to become biologists, having given up because "it was too visual", making it clear that the biological sciences are not equally accessible to everyone. However, basal knowledge, measured by the number of correct answers scored before the activity, was quite high for both groups: all participants tended to score highly both in the true-or-false questions and in the word association exercise (Additional file 6: Fig. S3). We did not find statistical differences in average performance between sighted people and people with visual impairment (Fig. 3; Additional file 6: Fig. S2 and S4), based on learning (Fig. 3b; Pearson correlation: r = 0.79 for participants with visual impairment; Spearman correlation: rho = 0.81 for sighted) and prediction scores (Fig. 3c; t-test: p-value = 0.77). However, the higher 'learning' and 'prediction' scores tended to belong to sighted participants, while the lowest scores tended to be recorded for participants with visual impairment (Fig. 3; Additional file 6: Fig. S2 and S4; refer to Methods for a detailed description of all score metrics). Based on the results from the branch-specific activities, it is also notable that, when asked about the consequences of environmental changes, both sighted people and people with visual impairment could successfully predict evolutionary outcomes in terms of expected phenotypic changes (Fig. 3c; refer to Additional file 4 for a detailed description of the prediction exercises), which shows a general basic understanding of the mechanisms of natural selection and adaptation. The fact that we have a small sample size, together with the substantially high basal knowledge (Additional file 6: Fig. S3) found for both participant groups (Fig. 3a), makes it difficult to assess the true effectiveness of the activity based exclusively on in loco data. An inclusive description of all figures and plots mentioned above is presented in Additional file 7. 'Common ancestor' and 'Natural selection' become more familiar terms, but artificial selection and neutral evolution might be hard to grasp without specific activities At the beginning and end of the activity, participants were read the same list of 33 terms and asked if they instinctively associated them with the concept of 'evolution' (Fig. 4; refer to the methods section for details). This was meant to assess whether terms that are essential to understanding evolution, like adaptation and common ancestor, were clearly present in the participants' minds.
After the activity, the association of terms increased in general. Overall, the terms that increased the most were 'Common ancestor' and 'Natural selection' for the participants with visual impairment, and 'Ecology' and 'Natural selection' for sighted participants (Fig. 4, Additional file 6: Fig. S5). All these terms were heavily used in the scripts provided to the educators. However, for the participants with visual impairment, the association of terms that can lead to misunderstanding evolution, such as perfecting and progress, increased more (Fig. 4, Additional file 6: Fig. S5). All participants associated all neutral terms, such as tree or ramification, with 'evolution' both before and after the activity (Additional file 6: Fig. S5). Interestingly, the term 'artificial selection' decreased in association for both groups. This is not surprising, since there were no artificial selection activities or mentions. This term was not used during the workshop, as the majority of the examples and predictive exercises were based on well-studied responses to natural selection, such as the plating reduction in stickleback fish or the food-availability-driven beak morphology of birds. Also interesting is that while 'mutation' was highly associated with evolution by both groups already before the activity (84% of sighted and 86% of participants with visual impairment), 'chance' was not. Together with the fact that 'Progress' and 'Perfecting' tended to increase, this might signal that a linear view of evolution towards perfection and humanization is rooted in the participants' minds, and it calls for the inclusion of neutral forces in outreach activities to avoid promoting wrong or extremely adaptationist views of evolution. Fig. 3 Participant education data and scores, based on basic knowledge of evolution before and after the activity, and prediction of evolution outcomes. Blue data refer to sighted participants and black data to participants with visual impairment. a Depicts the relationship between participant age and education level. Lines visualize the linear model per participant category and vertical dashed lines mark the average age of each group. b Refers to the scores of true-or-false questions on basic evolution concepts applied before and after the MSToL activities. Lines show the correlation between before and after scores. For sighted participants (blue) and visually impaired participants (black), R-squared values are depicted from a regression analysis with 'score after the activity' dependent on 'score before the activity'. c Visualizes participants' predictive ability in scenarios where environments shape phenotypic responses, following the same color code. Volunteer educators' emotional experience reveals no discomfort in communicating with participants with visual impairment and that through teaching they also learn a lot, while having a lot of fun To assess the benefits of inclusive outreach activities for the educators, we conducted a brief post-activity questionnaire with the volunteer educators. Despite the majority of volunteers (93.3%; 14/15) not having previous experience communicating with people with visual impairment, during the activity 53.3% felt more at ease communicating with people with visual impairment, especially with participants above 60 years old (Fig. 5a). When asked what was their favorite aspect of communicating with participants with visual impairment, the volunteers expressed feelings of empathy and mentioned discovering the world from a new perspective.
Furthermore, the great majority of volunteer educators reported that they learned immensely (Fig. 5b) while educating, especially communication skills, and reported that they had a lot of fun while teaching evolution (Fig. 5c). Discussion The development of the multisensory tree of life and the data collected during the activity provide evidence for three main arguments: (1) there is a clear lack of accessibility to evolutionary biology education for people with visual disability, despite their interest in the matter; (2) the classic examples of evolution can be transformed into multisensory activities; (3) touch and haptic models are essential for people with visual disability and a plus for sighted participants, making haptic activities the most powerful resource for increasing accessibility and inclusion, benefiting everyone, independently of physical impairment. Lessons learned to provide a comfortable and stimulating experience for people with visual impairment As a pioneering activity, we faced many hurdles due to inexperience. With this publication we hope to reduce those for anyone reproducing this activity or creating similar ones. However, our main and most valuable lesson concerns communication. Fig. 4 Terms associated with the concept of evolution before and after the MSToL activity. The metric shown is the percentage of people associating the word with evolution after the activity minus the percentage associating the word with evolution at the beginning. In dark green are terms essential to understanding evolution; in blue, words related but not essential; in grey, neutral terms; and in red, terms that can be misused or promote the misunderstanding of evolution. Notice that neutral-term bars are absent because these terms were equally associated before and after the activity (see Additional file 6: Fig. S5). Vertical dashed lines depict the average difference of each word group, following the same color scheme. As sighted organizers, we needed to make sure that our ideas and our translation of the visual into the haptic were indeed accessible to people with visual impairment, and that the MSToL room was comfortable for everyone. Consulting with people with visual impairment or blindness is absolutely indispensable at every step. Initially, we had designed the activity with braille instructions at the tables, which would greatly diminish the number of volunteer educators needed. However, our blind consultant informed us that in our local community of people with visual impairment, only a small minority knew how to read braille. Another very important insight was the fact that people who have lived with visual impairment for different lengths of their lives will have different sensitivities to more subtle textures, might be more or less comfortable with the volume of sound in the room, and might be more or less experienced in navigating a room with floor textures. Thus, tripping hazards should be avoided when planning the floor phylogeny texture and the disposition of the branches; a flat carpet for the phylogeny and tables close to the walls of the room were our optimal design. If there are participants with motor disabilities or in wheelchairs, the height and shape of the display tables also need to be considered for accessibility. A big part of offering a safe and stimulating environment for people with visual impairment is the acoustics of the room: a lot of echo and noise easily becomes overwhelming.
To avoid this, it is important to control the flow of people in the room and the volume of the sounds within. Following the available guidelines, together with communicating and consulting with the local community of people with visual disability, is crucial to ensure a comfortable learning experience for everyone. Be careful with the 'evolution ladder towards humanization' The aspect that we believe is the most urgent to improve, in this and other activities, is the fact that the public easily retains the misconception of evolution as a linear process towards humanization. Despite the branching phylogeny patterns presented directly countering this idea, the question asking whether 'mammals are more evolved than fish' seemed to hold low scores across all participants, both before and after the activity (refer to Additional file 6: Fig. S3 for individual question scores). In future activities, attention should be paid to the fact that having mammals and, more specifically, the Homo branch 'higher' on the phylogeny display (in our case towards the end of the experience for someone who started at the root; Fig. 2) might promote the incorrect, and quite common, notion of evolution 'progressing' linearly towards humanization and towards taxa that are evolutionarily closer to our species. Thus, as something to improve, the topology display (e.g. display mammals more to the side instead of at the top; do not make Homo the last visited branch) and language use (e.g. never use 'more primitive' or 'basal' for any extant taxon) should be mindful of inadvertently contributing to adaptationist and directional evolution narratives. Towards more effective and inclusive outreach activities Global estimates are clear regarding the under-representation of people with disability in STEM, both in the classroom and in the academic community (Moon et al. 2012). Under this status quo, people with visual impairment are denied access to knowledge and to participation in scientific communities. Consequently, we lose the diversity of thought and experience that could promote more stimulating ways of teaching from which everyone could benefit. In evolutionary biology, besides displaying data and concepts, images are a source of interest and wonder for biodiversity, which fuels curiosity. This being so, the promotion of scientific literacy demands the translation of evolution's patterns to senses other than vision. Although text and audio descriptions of graphical representations are useful, students with visual disability have reported that many important details are left out or misinterpreted by the translator (Shute et al. 2005). Furthermore, as the complexity of visual content intensifies, so does the challenge of presenting it through auditory cues (Shute et al. 2005). In a comparative study of sighted science students, tactile learners retained and understood concepts better, while also enjoying their lessons much more (Pashler et al. 2009). The preference of both MSToL participant groups (with and without visual impairment) for 'touch' as a learning sense, and the overall positive global learning scores reported in our activity, further suggest 'touch' as a generally inclusive and powerful vehicle of information delivery. In fact, evolution concepts can be ideal for tactile learning, because much of the visual content represents descriptions of morphological and environmental variation, easily translated into 3D haptic images.
When these resources are incorporated into science teaching, interest can increase for both sighted and blind students (Hasper et al. 2015). Thus, as shown by growing evidence from the life sciences (Fraser and Maguvhe 2008), communication should be multisensory to increase teaching effectiveness for all students. Conclusion Without inclusive approaches, students with visual impairment often lose motivation due to real or perceived physical barriers to knowledge acquisition (Bell and Silverman 2018). However, when knowledge is made accessible, they can realize their potential just as sighted students do (Sahin and Yorek 2009). Therefore, the inclusion of multisensory activities in outreach, which we have shown to be quite accessible for many branches of the tree of life, can have important academic and social impacts. Just like museums, outreach activities should be 'inclusive and polyphonic spaces that address present social challenges and promote active partnerships with and for diverse communities, contributing to human dignity and social justice, global equality and planetary wellbeing' (Sandahl 2018). Involving complementary senses in future activities will not only promote equity for those with disability, but also move us faster towards an inclusive and diverse scientific community, and towards a public more aware of biodiversity, evolution and our connection to it.
9,776.2
2021-03-11T00:00:00.000
[ "Biology", "Psychology" ]
Beam Emittance Measurement with Laser Wire Scanners in the ILC Beam Delivery System Accurate measurement of the beam phase space is essential for the next generation of electron accelerators. A scheme for beam optics optimization and beam matrix reconstruction algorithms for the diagnostics section of the beam delivery system of the International Linear Collider, based on laser-wire beam profile monitors, are discussed. Possible modes of operation of the laser-wire system together with their corresponding performance are presented. Based on these results, prospects for reconstructing the ILC beam emittance from representative laser-wire beam size measurements are evaluated. I. INTRODUCTION Future electron machines will need accurate determination and monitoring of their transverse phase space in order to meet their challenging performance specifications. In this paper, prospects for transverse emittance measurements at the International Linear Collider (ILC) are presented, with special emphasis on the beam delivery system (BDS). The main parameters of the ILC [1], [2] are presented in Tab. I. The ILC luminosity L is given by [3]

L = (n_b N^2 f_rep / 4 pi sigma*_x sigma*_y) H_D,

where the asterisk denotes the value at the e+e- interaction point (IP), N is the bunch population, n_b the number of bunches per train and f_rep the train repetition frequency. H_D is the disruption enhancement factor due to the mutual attraction of electrons and positrons in the collision and has value H_D ~ 2. Although the beam sizes at the IP depend strongly on the aberrations in the final focus system, accurate measurement of beam parameters upstream of the final focus is required to tune the main linac performance. The information about the transverse beam phase space is gained by placing beam profile scanners at several locations along the beam-line. The vertical beam sizes in the diagnostics section of the beam delivery system of the ILC are of the order of 1 μm, which is too small to measure with a solid wire, so the transverse beam profile measurements will be performed by the laser-wire (LW) system [4], [5]. A similar situation applies to portions of the RTML (ring-to-main-linac) and the main linac sections of the ILC. In contrast to a ring machine, where an individual bunch can be measured many times as it passes around the ring, the emittance measurement in the ILC BDS will need to be performed on a single-pass basis. This will require laser-wire scans that sample across successive bunches within a train, necessarily involving both a projection of any bunch position jitter and an averaging over successive bunches. The analysis described below of the extraction of the emittance from the bunch dimension measurements applies equally to circular and linear machines, although additional allowances for variations between ILC trains may also be necessary, for instance when performing quadrupole scans or measurements of linear dispersion. Throughout this paper, the electron bunch is assumed to be purely Gaussian; an extension of the analysis presented here to more realistic post-linac ILC bunch profiles will be included in a future publication. In section II the beam matrix reconstruction using a series of beam profile monitors is described. This section deals mostly with an analysis of how the emittance measurement error depends on the precision of the beam profile measurement. The general ideas behind the beam matrix reconstruction method described here are well known (see e.g. [6], [7], [8], [9]).
After presenting the framework we discuss numerical algorithms for matrix reconstruction and introduce a numerical criterion allowing beam optics optimisation. Simulations of emittance reconstruction using ILC parameters are presented. The methods described in this section are independent of the type of beam profile monitor and of the beam sizes. In section III the laser-wire (LW) beam profile monitor is introduced and its use in the measurement of transverse beam profiles is described. The LW is useful as a non-invasive device to measure electron beam profiles ranging from a few tens of microns down to the micron scale. Issues of Gaussian beam optics that influence the measurement are discussed and quantitative results presented. We conclude by quoting the requirements on the laser-wire system, plus the associated laser specifications necessary to achieve the desired emittance measurement precision of a few percent. Laser-wire specifications are given for the beam sizes relevant to the 500 GeV beam (1 TeV centre of mass) machine upgrade, which is more challenging because the bunches are smaller. Other issues such as beam matrix reconstruction methods are independent of the beam energy, and the results shown are normally for the 250 GeV beam. II. BEAM MATRIX RECONSTRUCTION In this section we first describe the standard approach to reconstructing the 4d coupled beam matrix with the least-squares fit method [6]. In the presence of coupling the emittance reconstruction precision falls dramatically with the beam size measurement error. The Cholesky decomposition method is analysed as an option to reduce this effect. We further introduce a criterion which allows numerical optimisation of beam-line lattice parameters to minimise the error of the emittance measurement. The contributions to the beam profile scan from effects such as beam jitter are discussed towards the end of the section. The described methods were used to simulate the emittance reconstruction process with the ILC lattice and with the beam size measurement precision predicted in section III. A. Beam matrix reconstruction from measured beam sizes One is generally interested in reconstructing the (x, x', y, y', Δp/p, Δt) beam phase space. To first order it is described by correlations like <x x'>, <x' Δp/p> etc. The transverse coordinates r can be represented as the sum of the betatron oscillations r_β and the dispersive trajectory η Δp/p:

r = r_β + η Δp/p, with r = (x, x', y, y'), r_β = (x_β, x'_β, y_β, y'_β),

where η = (η_x, η'_x, η_y, η'_y) is the dispersion vector. Ideally the dispersion should be zero in the diagnostics section, but in reality some residual dispersion can be present. The dispersion at the beginning of the transfer line is defined as [10]

η_0 = <r δ> / <δ²>,

where δ = Δp/p and σ_δ = sqrt(<δ²>) is the RMS momentum spread. The transverse beam envelope matrix is then defined as

Σ = <r rᵀ> =
[ <x²>   <x x'>  <x y>   <x y'>  ]
[ <x x'> <x'²>   <x' y>  <x' y'> ]
[ <x y>  <x' y>  <y²>    <y y'>  ]
[ <x y'> <x' y'> <y y'>  <y'²>   ].

The equations of motion for the betatron coordinates and the dispersion are [10]

r_β'' + K(s) r_β = 0, η'' + K(s) η = 1/ρ,

where 1/ρ is the orbit curvature. We assume that in the diagnostics section the beam orbit is first aligned sufficiently close to the magnet axes that the additional dispersion created there can be neglected. Then one can write down the coordinates at each scanner location i in terms of transfer matrices [11] as r_i = R_i r_0, and thus

Σ_i = R_i Σ_0 R_iᵀ. (2)

This definition of the beam matrix already includes the effects of dispersion. The phase space occupied by a (generally coupled) beam can be quantified by the intrinsic emittances ε_1,2 [12].
They are recovered from the beam matrix constructed from the betatron coordinates only by bringing it to a diagonal form: the eigenvalues of \sigma_\beta J, where J is the block-diagonal unit antisymmetric (symplectic) matrix, are \pm i\varepsilon_1 and \pm i\varepsilon_2. In what follows, "vertical" emittance usually denotes the smaller of the intrinsic emittances. The projected (vertical) emittance is defined as

\varepsilon_y^2 = \det \begin{pmatrix} \langle y^2 \rangle & \langle y y' \rangle \\ \langle y y' \rangle & \langle y'^2 \rangle \end{pmatrix}.

It will coincide with the intrinsic emittance if the beam is uncoupled, but will be larger if coupling is present. We can rewrite each measured beam matrix in terms of the ten independent elements of \sigma_0 (Eq. 2), and we need at least ten measurements to accomplish the task. At a scanner location in the beam-line it is possible to measure three values, \langle x^2 \rangle, \langle y^2 \rangle and \langle xy \rangle, with the help of a horizontal, a vertical, and a tilted wire scanner. The ten values can be obtained either by changing the optics in a controlled manner at the wire location [8], [6], [7] or by locating the wires at different positions in the beam-line. For the ILC one aims at fast intra-train scanning, for which the former method is not possible. The elements of the beam matrix can be obtained analytically when the coupling elements are neglected and the total number of wire scanners is six (three for each plane, two scanners at each location) that are suitably spaced in betatron phase [9]. For more general cases it is however convenient to have a numerical procedure for the beam matrix reconstruction, which will now be described. Let the measured values of \langle x^2 \rangle, \langle y^2 \rangle and \langle xy \rangle be \hat{\sigma}_i with i = 1 \ldots N_{scanners}. Let \sigma_k be the elements of the beam matrix at the location from which the beam transfer matrices R are evaluated. Assuming that the transport matrices are uncoupled in the diagnostics section (coupling introduced by misalignment errors is neglected), one obtains, by equating coefficients in Eq. 2, three uncoupled sets of linear equations,

M_X\, \sigma_X = \hat{m}_X, \qquad M_Y\, \sigma_Y = \hat{m}_Y, \qquad M_{XY}\, \sigma_{XY} = \hat{m}_{XY}, \qquad (3)

and each set is solved separately by a least squares fit. This may lead to an unphysical result (a non-positive beam matrix) when the measurement is sufficiently noisy. A typical dependence of the fraction of non-positive matrices on the relative measurement error is shown in Fig. 1. Here the 4d diagnostics section with 6 laser-wire stations and nominal ILC parameters were assumed. A common relative measurement error is assumed for both the horizontal and vertical dimensions, for reasons outlined below in Sec. II C 4. A way to avoid non-positive beam matrices is to search for the beam matrix as a Cholesky decomposition [13], [14] \sigma_0 = G G^T, where G \in R^{4\times4} is lower (or upper) triangular with positive diagonal entries. From the point of view of numerical errors the lower triangular representation should be used when the horizontal emittance is smaller than the vertical, and the upper in the opposite case. This procedure introduces a bias into the emittance measurement; however, it seems to be advantageous either when a small number of measurements is available or when a larger measurement error results in a high rejection fraction (fraction of non-positive matrices). In Figs. 2 and 3 examples of emittance fits using simulated beam size measurement data in the 4d ILC emittance measurement section are shown. A relative error of 35% was introduced to the simulated data. Both methods yield a significantly biased mean emittance. However, the Cholesky decomposition method results in an increase of statistics (i.e. physically meaningful fits) by a factor of three. When large statistics are available the straightforward method performs satisfactorily.
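To make the two fitting strategies concrete, the following is a minimal Python sketch restricted for clarity to the uncoupled vertical plane with three unknowns; the coupled 4d case follows the same pattern with ten unknowns and a 4x4 Cholesky factor. The toy lattice, numbers and function names are illustrative assumptions, not the simulation code behind Figs. 1-3.

```python
import numpy as np
from scipy.optimize import least_squares

# Vertical-plane sketch: unknowns are (<y^2>, <yy'>, <y'^2>) at the
# reference point; data are beam sizes <y^2>_i at scanners with known
# 2x2 transfer matrices R_i.

def linear_fit(R_list, y2_meas):
    # <y^2>_i = R11^2 <y^2>_0 + 2 R11 R12 <yy'>_0 + R12^2 <y'^2>_0
    A = np.array([[R[0, 0]**2, 2*R[0, 0]*R[0, 1], R[0, 1]**2] for R in R_list])
    s, *_ = np.linalg.lstsq(A, y2_meas, rcond=None)
    return s        # may be unphysical (negative determinant) if noisy

def cholesky_fit(R_list, y2_meas):
    # Parametrize sigma_0 = G G^T with triangular G, so the result is
    # positive by construction (the Cholesky method described above).
    def residuals(g):
        G = np.array([[g[0], 0.0], [g[1], g[2]]])
        s0 = G @ G.T
        return [(R @ s0 @ R.T)[0, 0] - m for R, m in zip(R_list, y2_meas)]
    g = least_squares(residuals, x0=[1e-6, 0.0, 1e-6]).x
    G = np.array([[g[0], 0.0], [g[1], g[2]]])
    S = G @ G.T
    return np.array([S[0, 0], S[0, 1], S[1, 1]])

def emittance(s):
    return np.sqrt(s[0]*s[2] - s[1]**2)

# Toy lattice: pure phase advances with beta = 50 m at the wires,
# a 1 um spot at beta = 50 m, and 5% noise on the measured sizes.
beta, mus = 50.0, np.radians([30, 60, 90, 120, 150])
R_list = [np.array([[np.cos(m), beta*np.sin(m)],
                    [-np.sin(m)/beta, np.cos(m)]]) for m in mus]
true = np.array([1e-12, 0.0, 4e-16])
sig0 = np.array([[true[0], true[1]], [true[1], true[2]]])
data = [(R @ sig0 @ R.T)[0, 0] * (1 + 0.05*np.random.randn()) for R in R_list]
print(emittance(linear_fit(R_list, np.array(data))),
      emittance(cholesky_fit(R_list, data)))
```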
For the ILC one aims at about 1-5% measurement errors within a bunch train. In this range the choice of algorithm is not important during stable operation; however, the Cholesky decomposition method will be helpful in certain cases, for instance during the measurement tune-up when the errors are large. In the presence of coupling the intrinsic emittance is smaller than the projected one. With measurement errors the centre of the distribution of reconstructed intrinsic emittances is shifted towards smaller values even if the real beam is uncoupled [12] (also seen in Figs. 2 and 3). The distribution of projected emittances does not shift, so care should be taken if the difference between the projected and the intrinsic emittances is used to evaluate the coupling correction. When beam position measurements are available and it is possible to vary the beam energy, the dispersion functions can be measured at the wire locations. Supposing the measurement vector is \{\hat{\eta}_i\}, the initial dispersion is recovered with a least squares fit from

\hat{\eta}_i = R_i\, \eta_0.

The dispersion model used here might not be adequate, since in reality the bunches coming from the linac might have a complicated correlation pattern. Its correction in combination with the emittance measurement is the subject of a separate work. So in what follows we will not analyse the effect of dispersion apart from estimating its influence on the beam profile measurement. B. Beam optics for the diagnostics section Concepts of optics for 2d and 4d diagnostics sections were presented in [15]. For optimal performance the diagnostics section lattice should be designed so that the beam sizes and aspect ratios at the wire locations are optimal for scanning performance and so that the solutions of Eqs. 3 are only weakly sensitive to perturbations of the right hand side. For optics analysis it is convenient to express the R-matrices in terms of Twiss parameters [16]. In the uncoupled case

R = \begin{pmatrix} \sqrt{\beta/\beta_0}\,(\cos\Delta\mu + \alpha_0 \sin\Delta\mu) & \sqrt{\beta \beta_0}\, \sin\Delta\mu \\ -\frac{(1+\alpha_0\alpha)\sin\Delta\mu + (\alpha-\alpha_0)\cos\Delta\mu}{\sqrt{\beta\beta_0}} & \sqrt{\beta_0/\beta}\,(\cos\Delta\mu - \alpha \sin\Delta\mu) \end{pmatrix},

where, assuming periodic optics, \beta and \alpha take the same values at the wire locations. Assuming further that the wire locations are at the maxima of the \beta-functions in order to optimise the spot-size resolution, \alpha vanishes and the matrices M_X, M_Y and M_{XY} have the form

M_X = \begin{pmatrix} \cos^2\Delta\mu_{x,1} & 2\beta_x \cos\Delta\mu_{x,1}\sin\Delta\mu_{x,1} & \beta_x^2 \sin^2\Delta\mu_{x,1} \\ \cos^2\Delta\mu_{x,2} & 2\beta_x \cos\Delta\mu_{x,2}\sin\Delta\mu_{x,2} & \beta_x^2 \sin^2\Delta\mu_{x,2} \\ \cos^2\Delta\mu_{x,3} & 2\beta_x \cos\Delta\mu_{x,3}\sin\Delta\mu_{x,3} & \beta_x^2 \sin^2\Delta\mu_{x,3} \\ \cdots & \cdots & \cdots \end{pmatrix},

with M_Y of the same form in terms of \beta_y and \Delta\mu_{y,i}, and M_{XY} built analogously from the products of the horizontal and vertical R-matrix elements. One can choose the optimality criterion for the lattice to be the condition numbers of the corresponding matrices [13], [14],

\kappa(M) = \|M\|\, \|M^{+}\|,

where the norm of a matrix M is defined as the maximum value of \|Mx\| over all vectors x of unit length. The condition number is used to quantify the solution error of a linear algebraic system [13], [14]. A small condition number corresponds to well-conditioned systems while a large condition number corresponds to ill-conditioned systems. This number is hard to evaluate analytically, but it can be evaluated numerically for any optics design. Apart from the condition number one has to make sure that both x and x' make contributions of the same order of magnitude to the measurements, i.e. (for the 2d case) \langle x^2 \rangle \sim \beta_x^2 \langle x'^2 \rangle. To achieve this one can introduce another optimality criterion to minimise (Eq. 6). For the full beam matrix reconstruction one needs to have ten measurements including coupling terms. Beam optics for such a measurement section should be designed so that the condition numbers of M_X, M_Y and M_{XY} are minimised simultaneously (so-called 4d optics).
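As a rough numerical illustration of this criterion, the sketch below evaluates the condition number of M_Y for equally spaced wires as a function of the cell phase advance; the beta value and the scanner counts are arbitrary assumptions.

```python
import numpy as np

# Condition number of the vertical response matrix M_Y versus FODO cell
# phase advance, for N wires at beta-maxima (alpha = 0).  Illustrative only.

def m_y(dmu, n_scanners, beta=50.0):
    rows = []
    for i in range(1, n_scanners + 1):
        mu = i * dmu
        rows.append([np.cos(mu)**2,
                     2.0*beta*np.cos(mu)*np.sin(mu),
                     beta**2*np.sin(mu)**2])
    return np.array(rows)

for deg in (10, 30, 45, 60, 80, 89):
    print(f"{deg:3d} deg: cond = {np.linalg.cond(m_y(np.radians(deg), 3)):.3g}")
# The condition number blows up towards 0 and 90 degrees and has a minimum
# near 180/N degrees (60 degrees for 3 scanners), as discussed in the text.
```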
In many cases one is interested only in correcting the coupling terms rather than measuring them. To do so, a set of skew quadrupoles is introduced upstream of the measurement section. They are used to minimise the projected emittances [12] and no direct measurement of the coupling terms is necessary. In this case six measurements are required and the condition numbers of only M_X and M_Y need to be minimised (so-called 2d optics). 2d measurement section For the diagnostics section lattice we choose a FODO channel with a constant phase advance per cell, \Delta\mu_1 = \Delta\mu_2 = \Delta\mu. Then \kappa depends on two parameters: \beta and \Delta\mu. In Figs. 4 and 5 these dependencies are shown for different numbers of measurement stations. The condition number is infinite for 0° and 90° phase advance (not seen in the picture). It has a minimum close to 0° and a second minimum that depends on the number of wire scanners used in the fit. This minimum appears to be at \Delta\mu = 180°/N_{scanners} (60° for 3 scanners). Together with Eq. 6 (see also Fig. 6) this gives 180°/N_{scanners} as the optimal phase advance in a FODO cell. In Fig. 7 the simulated emittance reconstruction error is plotted against the cell phase advance (5 wire scanners, 5% error on the beam size measurement), which shows that the error is indeed minimised by following the described optimisation procedure. 4d measurement section For a 4d diagnostics section we can assume that the lattice is constructed from identical cells of phase advances \Delta\mu_1 and \Delta\mu_2. The matrix M depends on 4 parameters, \beta_{1,2} and \Delta\mu_{1,2}. For different values of \beta_{x,y} the behaviour shows a similar pattern, with the matrix being singular for \Delta\mu_1 = \Delta\mu_2 and \Delta\mu_1 = 180° - \Delta\mu_2. Combining the information in plots such as Fig. 8 with that in Figs. 4 and 5, one sees that there is no clear optimum for this problem. However, for good performance one can choose, for instance, a phase advance in one plane of 60° and in the other plane a phase advance close to 90° (for 4 wire stations with 3 wires each). The optics for the ILC diagnostics sections [15] designed for 2d and 4d emittance measurements are shown in Fig. 9. The 2d diagnostics section is sufficient for emittance tuning purposes and is shorter; it is thus currently expected to be used at the ILC [1], [2]. C. Machine Contributions to the Transverse Profile Scans The imperfections in the linac will result in beam jitter, residual dispersion, transverse beam coupling etc. [17], [18]. These may also cause a deviation of the bunch from the Gaussian shape. Estimates of these errors can partially be subtracted from the beam size measurements. Due to imperfections in the diagnostics section the transfer matrices will also not be known precisely, which will introduce additional errors into the reconstruction procedure; this effect is expected to be much smaller than the others and is neglected in this paper. Neglecting all but the dispersion and beam jitter contributions, the value of \sigma_e extracted from the beam profile scan is

\sigma_e^2 = \sigma_{scan}^2 - (\alpha_J\, \sigma_e)^2 - (\eta\, \sigma_\delta)^2, \qquad (7)

where \sigma_{scan} is the laser-wire scan result after deconvolution of laser effects (discussed in Sec. III A 4) and \alpha_J represents the magnitude of the beam orbit jitter, as normalized by the observed beam size; it will be detailed in the next section.
When an effect contributes to the measurement error \delta\sigma_e/\sigma_e, we define E_{effect} as its contribution to the total relative error, adding in quadrature as

\left( \frac{\delta\sigma_e}{\sigma_e} \right)^2 = E_{scan}^2 + E_{jitter}^2 + E_{\eta}^2, \qquad (8)

where E_{scan} is the contribution to the error from the raw laser-wire scan; the contributions to this error are discussed in Sec. III A 4. E_{jitter} is the error remaining after subtracting the electron bunch-to-bunch jitter as discussed in Sec. II C 1. The additional effects of any residual dispersion, E_\eta, could in principle also be subtracted explicitly; the error that remains after such a subtraction is estimated below. The significant machine-related errors are now discussed in turn. Error Contribution from the Jitter of the Beam Location In the following, the jitter of the location of the bunches within the bunch train at the ILC laser-wire IP locations can be written as \sigma_{jitter} = \alpha_J \sigma_e, where \sigma_e is the electron transverse bunch size. The value of \alpha_J will depend on the stability of the ILC site and on the additional vibrations arising from beam-line components, from energy and kicker jitter, and from the performance of train-to-train and intra-train feedback. Preliminary studies [19] suggest that values of \alpha_J may end up ranging from about 0.01 to about 0.7; in the following the resulting error estimates are normalised relative to a value \alpha_J \approx 0.25. The bunch-by-bunch jitter can be determined by local BPMs to within their single-bunch resolution. The performance of cavity BPMs is the subject of ongoing R&D [20], which suggests that single-bunch resolutions of order 20 nm should be obtainable. The error contribution E_{jitter} to \delta\sigma_e/\sigma_e remaining after subtracting the bunch jitter is given by differentiating Eq. 7 to be

E_{jitter} = \alpha_J\, \delta\alpha_J \approx \alpha_J\, \frac{\sigma_{BPM}}{\sigma_e}, \qquad (9)

where \delta\alpha_J is the precision to which the bunch jitter is known; the last approximation assumes that the only bunch jitter measurements are made at the LW location. It is possible that more precise measurements could be made using dedicated machine setups. This equation can be written as

E_{jitter} \approx 0.5\% \left( \frac{\alpha_J}{0.25} \right) \left( \frac{\sigma_{BPM}}{20\ \mathrm{nm}} \right) \left( \frac{1\ \mu\mathrm{m}}{\sigma_e} \right).

Error Contribution from Residual Dispersion If the effects of \eta and \eta' are not subtracted at the location of each LW IP, then the emittance will be overestimated. The error remaining after subtraction of residual vertical dispersion, \eta, at the laser-wire IP is again given by differentiation of Eq. 7 to be

E_\eta = \frac{\eta\, \delta\eta\, \sigma_\delta^2}{\sigma_e^2}, \qquad (10)

where \delta\eta is the precision to which \eta can be determined. If E_\eta is not to dominate a 1% transverse bunch size measurement for a typical ILC beam, then E_\eta should be less than about 0.5%. If \eta is measured to \delta\eta/\eta \approx 1%, say, then for \sigma_{ey} = 1 µm Eq. 10 implies that \eta must be kept below about 0.5 mm. A method that could potentially be used to determine \eta more accurately than 1% is to change the mean energy of the electron beam by a relative amount \Delta E/E (a possible choice would be \Delta E/E \approx 5 \times 10^{-3} so as to remain within the energy acceptance of the ILC BDS) and measure any subsequent shift in position of the beam centroid over the N_{train} bunches in a train. If the single-bunch energy resolution of the ILC beam spectrometers is \sigma_E, then the achievable \delta\eta follows from propagating \sigma_{BPM}/\sqrt{N_{train}} and \sigma_E through this centroid-shift measurement. With additional R&D, it should be possible to achieve \sigma_{BPM} \approx 20 nm and spectrometer resolution \sigma_E \approx 10^{-4} [21]. An alternative to explicit subtraction of the dispersion effects is to include them implicitly in the fits to the measured laser-wire distribution; this is the method described above in Sec. II A.
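A short sketch pulling these machine-error terms together; the numbers are the representative values quoted above (alpha_J = 0.25, sigma_BPM = 20 nm, sigma_e = 1 µm), and the 1% raw-scan error is an assumption anticipating Sec. III.

```python
import numpy as np

# Error budget of Eq. 8: the relative beam-size error is the quadrature
# sum of the raw-scan, jitter-subtraction and residual-dispersion terms.
# All numbers are representative assumptions, not ILC specifications.

def e_jitter(alpha_j, sigma_bpm, sigma_e):
    # Eq. 9 with delta_alpha ~ sigma_BPM / sigma_e (jitter known only
    # from the BPM at the laser-wire location)
    return alpha_j * sigma_bpm / sigma_e

def e_eta(eta, delta_eta, sigma_delta, sigma_e):
    # Eq. 10: error left after subtracting residual dispersion
    return eta * delta_eta * sigma_delta**2 / sigma_e**2

sigma_e = 1e-6            # 1 um vertical spot
E_scan = 0.01             # 1% raw scan error (Sec. III A 4)
E_j = e_jitter(0.25, 20e-9, sigma_e)
E_d = e_eta(0.5e-3, 0.5e-5, 1e-3, sigma_e)   # eta = 0.5 mm known to 1%
total = np.sqrt(E_scan**2 + E_j**2 + E_d**2)
print(f"E_jitter={E_j:.3%}  E_eta={E_d:.3%}  total={total:.3%}")
```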
Systematic beam size variations In general, position shifts of individual bunches within a linear collider bunch train can have arbitrary patterns according to the errors of the injection kickers or effects of the long-range transverse wakefield. Regardless of the shape of the patterns, the individual relative bunch displacements can be subtracted using bunch-by-bunch BPM measurements, and so such effects can be absorbed into the treatment of beam position jitter discussed in Sec. II C 1. Ideally the laser-wire fast scanning system will be able to take account of these patterns (for instance by learning the shape from previous trains) so as to maintain an efficient scanning technique across the length of a train. One possible use of the laser-wire measurements will be to predict the spot-size at the IP and thereby enable a comparison with the spot-size inferred from luminosity measurements via Eq. 7. Systematic variations of the bunch transverse dimension along the train will lead to a bias in such predictions. For example, a distortion of the transverse dimension with maximum value \pm s_{train}\, \sigma_e, which varies linearly along the train, will modify the effective transverse dimension that enters Eq. 1 by

\sigma_e^2 \rightarrow \sigma_e^2 \left( 1 + \frac{s_{train}^2}{3} \right).

In order to correct for such effects, a number of scans will need to be made within a train, requiring ultra-fast scanning methods. Ongoing R&D [22] is aiming at scanning rates of several tens of kHz using electro-optic techniques. The influence of the beam jitter on the measurement procedure depends strongly on the linac tuning procedure and requires further studies. Vertical-Horizontal Coupling One can measure \langle x^2 \rangle = \sigma_x^2 and \langle y^2 \rangle = \sigma_y^2, and extract \langle xy \rangle from an additional measurement of the bunch along the u-axis, \langle u^2 \rangle = \sigma_u^2, which is defined to be at an angle \phi with respect to the x-axis, as shown in Fig. 10. When it comes to the measurement of \sigma_u with a LW, it will be necessary to take into account the size of the perpendicular dimension, because of Rayleigh-range effects discussed below. The coupling \langle xy \rangle is

\langle xy \rangle = \frac{\sigma_u^2 - \sigma_x^2 \cos^2\phi - \sigma_y^2 \sin^2\phi}{2 \sin\phi \cos\phi}. \qquad (13)

The RMS error \delta\langle xy \rangle on the measurement of the coupling term is then given by propagating the errors on \sigma_u, \sigma_x and \sigma_y through Eq. 13:

\delta\langle xy \rangle^2 = \frac{\left(2\sigma_u^2\, \delta\sigma_u/\sigma_u\right)^2 + \left(2\sigma_x^2 \cos^2\phi\, \delta\sigma_x/\sigma_x\right)^2 + \left(2\sigma_y^2 \sin^2\phi\, \delta\sigma_y/\sigma_y\right)^2}{\left(2\sin\phi\cos\phi\right)^2}. \qquad (15)

By substituting Eq. 13 into Eq. 15 and minimising with respect to \phi under the assumption that the relative errors \delta\sigma_y/\sigma_y and \delta\sigma_x/\sigma_x are approximately equal, the optimal value for \phi is given by

\tan\phi = \frac{\sigma_x}{\sigma_y}. \qquad (16)

Substituting this value of \phi into Eq. 15 and using the approximation that \langle xy \rangle is small gives the error on the coupling term as

\delta\langle xy \rangle \approx \sqrt{6}\, \sigma_x \sigma_y\, \frac{\delta\sigma_y}{\sigma_y}. \qquad (17)

Typical values of interest to the ILC BDS are presented below in Tab. VI, where it can be seen that for the optimal value of \phi, \sigma_u \approx \sigma_y (up to a factor \sqrt{2}). With this optimal u-wire angle, the relative errors of the vertical and horizontal measurements enter equally in the coupling term in Eq. 17. This justifies the earlier simplification (Sec. II A), where equal relative errors were assumed for all measurements in the emittance reconstruction simulations. Also, since \sigma_x > \sigma_y for the ILC, the error on the coupling given by Eq. 17 will grow more rapidly than the error on the vertical beam size as the relative measurement errors increase. This explains the fact that the number of unphysical beam matrices grows rapidly with the measurement error: the coupling terms quickly dominate over those of the vertical part of the beam matrix. III. THE LASER-WIRE IN BEAM EMITTANCE MEASUREMENT In this section the laser-wire (LW) beam profile monitor is described and the possible precisions that can be obtained from its use in transverse beam profile measurements are quantified.
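Before turning to the laser-wire itself, the skew-scan algebra of Sec. II C 4 above can be condensed into a few lines of Python; the beam sizes are the representative Tab. VI values, not measured data.

```python
import numpy as np

# Skew-wire coupling extraction (Eq. 13): recover <xy> from horizontal,
# vertical and tilted (u-axis) projected sizes.  Sizes are representative.

def xy_coupling(sx2, sy2, su2, phi):
    """<xy> from projected second moments; u-axis at angle phi to x."""
    return (su2 - sx2*np.cos(phi)**2 - sy2*np.sin(phi)**2) \
           / (2.0*np.sin(phi)*np.cos(phi))

sx, sy = 10e-6, 1e-6                     # sigma_x, sigma_y
phi = np.arctan2(sx, sy)                 # optimal tilt, tan(phi) = sx/sy
su2 = 2*sx**2*sy**2/(sx**2 + sy**2)      # exact sigma_u^2, ~ (sqrt(2) sy)^2
print(xy_coupling(sx**2, sy**2, su2, phi))   # -> 0 for an uncoupled beam
```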
Ideally the LW will be used at the ILC to measure the electron transverse beam profile at several locations within a bunch train (containing 2625 bunches, Tab. I), which will require high power lasers to get sufficient statistics for each laser shot, together with ultra-fast laser scanning systems. The LW is useful for beam profiles ranging from several tens of microns down to the micron scale. Smaller beam profiles have been measured using laser interferometric techniques [5,23], whereas traditional solid wires or screens can be used for larger profiles (although they are disruptive to the electron beams). Very challenging, low f-number laser optics are necessary for the LW in order to achieve the required small laser spot-sizes, and the subsequent performance is evaluated numerically and described in Sec. III A 3. The laser systems necessary to power the LW are also very challenging, and the necessary specifications are derived and discussed in Sec. III D 1. A. The Laser-wire Beam Profile Monitor Traditionally the transverse dimensions of an electron beam have been measured by scanning a tungsten or carbon wire across the beam and measuring the resulting backgrounds as a function of the relative position of the wire. This method has the disadvantage of being highly disruptive to the electron beam, and so it cannot be used during normal luminosity running. At the ILC, the electron beams in the BDS will have vertical transverse sizes of order 1 µm to a few µm; a normal wire scanner would not be able to measure beams of this size, nor would it be able to withstand the energy depositions from such high intensities. To solve these issues, the solid wire can be replaced by a finely-focused beam of laser light; such a system is called a laser-wire (LW). The Compton collisions between laser photons and beam electrons are detected downstream, and the Compton rate as a function of the relative positions of the electron and laser beams provides the measurement of the electron beam transverse profile. This principle is illustrated in Fig. 11. Two distinct methods have been employed to date. Operating the laser in continuous wave mode together with a Fabry-Perot cavity to enhance the power has been used [24] at the ATF at the KEK laboratory to measure the emittance of the damping ring; this technique would also be applicable to the ILC damping rings. In other parts of the machine, including the BDS, the beam is not circulating, so a single-pass method based on high power pulsed lasers is required [25][26][27]. In the following discussion, the latter technique is assumed. Laser-wire Compton Rates The Compton cross section decreases as the electron beam energy increases. For an electron beam energy E_b and laser photon energy k = hc/\lambda, the Compton cross section is given by \sigma_C(\omega) = \sigma_T\, f(\omega), where \sigma_T = 0.665 \times 10^{-28} m^2 is the Thomson cross section, \omega = k E_b / m_e^2 (in natural units), and [5,28]

f(\omega) = \frac{3}{4} \left[ \frac{1+\omega}{\omega^3} \left( \frac{2\omega(1+\omega)}{1+2\omega} - \ln(1+2\omega) \right) + \frac{\ln(1+2\omega)}{2\omega} - \frac{1+3\omega}{(1+2\omega)^2} \right].

Values of f(\omega) for laser wavelengths and beam energies of typical interest at the ILC are presented in Table II. In this section, the Compton rate for a set of laser-wire operating conditions is derived as a function of the relative horizontal and vertical offsets, \Delta_x and \Delta_y respectively, between the centroids of the electron bunch and the laser beam. The number N(\Delta_x, \Delta_y) of Compton photons produced will be proportional to the relevant overlap integral, \Omega(\Delta_x, \Delta_y). In Sec. A 1, \Omega(\Delta_x, \Delta_y) will be evaluated in µm^{-1}. P is the instantaneous laser power at the laser-electron IP, and N_e is the number of electrons in the bunch.
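As a quick numerical check of this energy dependence, the sketch below evaluates sigma_C = sigma_T f(omega), assuming the standard total Klein-Nishina form quoted above; the derivation of the detected rate resumes below.

```python
import numpy as np

# sigma_C = sigma_T * f(omega) with the total Klein-Nishina factor quoted
# above; omega = k E_b / m_e^2 in natural units, following the text.

SIGMA_T = 0.665e-28   # Thomson cross section, m^2

def f_kn(w):
    return 0.75 * ((1 + w)/w**3 * (2*w*(1 + w)/(1 + 2*w) - np.log(1 + 2*w))
                   + np.log(1 + 2*w)/(2*w)
                   - (1 + 3*w)/(1 + 2*w)**2)

E_b = 250e9                      # beam energy, eV
me = 0.511e6                     # electron rest energy, eV
for lam_nm in (532.0, 355.0, 266.0):
    k = 1239.84 / lam_nm         # photon energy in eV from wavelength in nm
    w = k * E_b / me**2
    print(f"{lam_nm:5.0f} nm: omega = {w:.2f}, f = {f_kn(w):.3f}, "
          f"sigma_C = {SIGMA_T*f_kn(w):.2e} m^2")
```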
If \eta_{det} is the detector efficiency then, using realistic numerical values, the number of detected photons is N_{det}(\Delta_x, \Delta_y) \propto \xi\, \Omega(\Delta_x, \Delta_y) (Eq. 19), where the normalisation factor \xi is proportional to the detector efficiency, the instantaneous laser power and the bunch charge, and equals unity for \eta_{det} = 0.05 and nominal laser power and bunch charge (Eq. 20). Conventions for laser-optics In the following, the conventions used define z along the electron beam direction, y along the vertical, and x along the laser-beam direction. The light intensity of the laser has the Gaussian form

I(x, y, z) = \frac{I_0}{2\pi\sigma^2(x)} \exp\!\left( -\frac{y^2 + z^2}{2\sigma^2(x)} \right), \qquad \sigma(x) = \sigma_0 \sqrt{1 + x^2/x_R^2},

where \lambda is the laser wavelength and M^2 is a quality factor for the laser, which effectively increases the wavelength, \lambda \rightarrow M^2 \lambda, compared to the diffraction-limited case; an ideal single-mode laser would have M^2 = 1. x_R is the Rayleigh range of the setup and f_\# is the f-number of the optics, f_\# = F/D, where D is the diameter of the lens and F is its focal length. As in Ref. [29], 99% of the energy in the Gaussian beam profile is required to be contained within the lens aperture. For the TM_{00} mode this requirement leads to a waist size \sigma_0 = M^2 \lambda f_\# / \pi. The opening angle \theta = 1/f_\# between the centre of the diverging Gaussian beam and its e^{-2} intensity cone then follows, and so, for TM_{00} with f1 optics, \sigma_0 = \lambda/\pi and \theta = 1 rad, or 57°. Numerical Results Results for the case of laser M^2 = 1.3 and f1 final focus optics are shown in Fig. 12 for electron bunch transverse Gaussian profiles with (a) \sigma_{ey} = 1 µm, \sigma_{ex} = 10 µm and (b) \sigma_{ey} = 1 µm, \sigma_{ex} = 100 µm; the effect of the Rayleigh range is very apparent for the larger aspect ratio. Results for the case of operating the laser in TM_{01} mode with laser M^2 = 1.3 and f1 final focus optics are shown in Fig. 13 for electron bunch transverse Gaussian profiles with (a) \sigma_{ey} = 1 µm, \sigma_{ex} = 10 µm and (b) \sigma_{ey} = 5 µm, \sigma_{ex} = 50 µm; the potential benefit of the TM_{01} mode is apparent for the larger vertical spot-size. The relative benefits of the TM_{00} and TM_{01} modes are presented in Tab. III, where it can be seen that for \sigma_{ey} > 1-2 µm there is a significant advantage in statistical power from using the TM_{01} mode; this advantage has been demonstrated at the ATF [30]. However, the sensitivity to the laser properties (as parameterized by a simple M^2 in these calculations) is greater for the TM_{01} mode and, for spot-sizes of order 1 µm, the relative statistical power of the TM_{01} to that of the TM_{00} mode decreases rapidly, as illustrated in Fig. 14. In this study, the laser spot-sizes of order 1 µm are of particular importance for the BDS LW system, so in the following the TM_{00} mode is used, while acknowledging that higher-order laser modes may be advantageous in other locations.
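The Rayleigh-range effect behind Figs. 12 and 13 can be reproduced with a small numerical sketch of the TM00 overlap; the waist follows the text's convention, the Rayleigh-range expression is our assumption (chosen so the e^-2 cone half-angle is 1/f#), and the normalisation is schematic rather than the exact Eq. A2.

```python
import numpy as np
from scipy.integrate import quad

# TM00 overlap with Rayleigh-range effects: the laser spot sigma(x) grows
# away from the waist, so a wide (large sigma_ex) bunch samples a thicker
# wire.  Assumed Gaussian forms; parameters are illustrative.

lam, M2, fnum = 532e-9, 1.3, 1.0
sig0 = M2 * lam * fnum / np.pi          # laser waist (text's convention)
x_R  = 2.0 * sig0 * fnum                # assumed Rayleigh range

def sigma_laser(x):
    return sig0 * np.sqrt(1.0 + (x / x_R)**2)

def overlap(dy, sig_ex, sig_ey):
    def integrand(x):
        s2 = sig_ey**2 + sigma_laser(x)**2      # convolved vertical width
        gx = np.exp(-x**2/(2*sig_ex**2)) / (np.sqrt(2*np.pi)*sig_ex)
        return gx * np.exp(-dy**2/(2*s2)) / np.sqrt(2*np.pi*s2)
    return quad(integrand, -7*sig_ex, 7*sig_ex)[0]

# The on-axis signal drops much faster with sigma_ex than 1/sigma_ex alone
# would suggest, because of the Rayleigh divergence (compare Fig. 12a/b):
for sig_ex in (10e-6, 100e-6):
    print(f"sigma_ex = {sig_ex*1e6:5.0f} um: overlap(0) = "
          f"{overlap(0.0, sig_ex, 1e-6):.3e}")
```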
Error Contribution from the Laser-Wire Scan In this section the various contributions to the relative error E_{scan} in Eq. 8 are outlined. If \sigma_{scan} is the electron beam size after subtracting the laser effects discussed below, then E_{scan} receives contributions from the raw fit and from the pointing-jitter subtraction, as follows.

TAB. III: Optimal f-numbers (for E_{stat}) and the corresponding statistical precision for electron bunches with aspect ratio \sigma_{ex}/\sigma_{ey} = 10, \lambda = 532 nm. The laser M^2 = 1.3 for both TM00 and TM01 modes. The minimum practical f-number is taken as 1.0, even where a better statistical precision could in principle be obtained with smaller f-numbers. The factor of 1.15 of Eq. A11 is included in the TM01 calculations. Also shown is the error E_{M^2} if there is a 5% error in the value of the laser M^2.

\sigma_{scan} itself is obtained by subtracting the effects of the laser pointing instability,

\sigma_{scan}^2 = \sigma_{fit}^2 - \sigma_{point}^2, \qquad (27)

where \sigma_{fit} is the raw result of a fit to the laser-wire scan profile, for which the errors are discussed in detail in Sec. III B. \sigma_{point} is the contribution to the scan profile from the RMS laser pointing angular fluctuation \psi; \sigma_{point} = F\psi, where F is the focal length of the lens. As discussed in Sec. III C 1, F \geq 15 mm. After subtracting \sigma_{point} as in Eq. 27, the error contribution can thus be written as

E_{scan}^2 = E_{fit}^2 + E_{point}^2,

where

E_{fit} = \left( \frac{\sigma_{fit}}{\sigma_{scan}} \right)^2 \frac{\delta\sigma_{fit}}{\sigma_{fit}} \quad \text{and} \quad E_{point} = \left( \frac{\sigma_{point}}{\sigma_{scan}} \right)^2 \frac{\delta\sigma_{point}}{\sigma_{point}}.

B. Laser-wire Performance In this section the performance of the ILC laser-wire systems will be examined in detail with a view to quantifying the errors that contribute to \delta\sigma_{fit} of Eq. 29. The contributions to the raw laser-wire scan can be broken down as follows:

\left( \frac{\delta\sigma_{fit}}{\sigma_{fit}} \right)^2 = E_{stat}^2 + E_{\xi}^2 + E_{M^2}^2, \qquad (31)

where E_{stat} is the statistical error of a 19-point fit to the raw scan curve (Sec. III C 1). In general, N_{scan} laser shots could be used in a variety of scanning modes. However, as set examples in this paper, all the scans consist of N_{scan} = 19 equally spaced values of \Delta_y over a range \pm 7\sigma_m (as defined in Eq. A4), with \Delta_x = 0 and \xi = 1.0, where \xi is as defined in Eq. 20. E_\xi is the error arising from the shot-by-shot normalisation fluctuations as introduced in Eq. 37. E_{M^2} is the error on the extraction of \sigma_e introduced by the error on the laser light distribution at the IP; this is characterized here by an error in the M^2 value of the laser and, for a real system, will need to be calculated including the effects of alignment errors etc. in the final-focus optics. As illustrated in Sec. III C 1, E_{M^2} can be estimated by fitting the measured profile for \sigma_{ey} assuming a value of M^2 that is wrong by a factor (1 + \delta_{M^2}). In the following, the laser M^2 is thus assumed to be determined shot by shot to an accuracy of \delta_{M^2}. Naively, without allowing for Rayleigh range effects, the error on the extracted value of \sigma_e from subtraction of the laser spot-size is

E_{M^2} \approx \left( \frac{\sigma}{\sigma_e} \right)^2 \delta_{M^2}.

Inserting the representative values of M^2 = 1.3 and \sigma_e = 1 µm gives the numerical estimate of Eq. 33. A full numerical treatment, using the relations given in Appendix A, is presented in Tables VIII-X and shows that this is a good approximation for small \delta_{M^2} \lesssim 1% but is a slight underestimate for larger values. Fits to TM00 Distributions The overlap integrals necessary to calculate the number of LW Compton photons are presented in Appendix A. A numerical evaluation of Eq. A2 is now used to simulate laser-wire scans for a variety of situations of interest to the ILC. Both E_{stat} and E_{M^2} will depend on the f-number of the laser optics employed. This dependence is illustrated in Fig. 15 for the case of \sigma_{ey} = 1 µm and \sigma_{ex} = 25 µm. For each set of \sigma_{ex}, \sigma_{ey}, there is an optimal f-number that gives the lowest statistical error for given values of \xi and N_{scan}. However, as can be seen in Fig. 15, the minima are often fairly shallow, which must be contrasted with the difficulty of building low f-number optics. The difficulty is not just in achieving a low f-number alone, but in producing a system that can maintain a small laser spot size over approximately \pm 10\sigma off axis, as needed during a scan. For this reason, f-numbers less than 1 are not considered here. In addition to determining the optimal f-number for various electron transverse dimensions, the nominal errors E_{stat} and E_{M^2} are also determined here for f1.5 optics, because this is a likely technological goal for a system that will produce a small spot-size both on the optical axis and over an acceptable scan range of order 1 mrad. The beam-pipe in the ILC BDS has an inner radius of 12 mm and an outer radius of approximately 14 mm.
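Collecting the subtraction chain just described (Eqs. 27 and 31-33) into a small sketch; the input sizes are illustrative assumptions.

```python
import numpy as np

# Laser-size deconvolution chain: sigma_scan^2 = sigma_fit^2 - sigma_point^2,
# then the naive (no Rayleigh-range) electron size
# sigma_e^2 = sigma_scan^2 - sigma_laser^2, with the M^2 uncertainty
# propagated through the subtraction.  All inputs are illustrative.

def deconvolve(sigma_fit, sigma_point, sigma_laser):
    sigma_scan = np.sqrt(sigma_fit**2 - sigma_point**2)
    return np.sqrt(sigma_scan**2 - sigma_laser**2)

def e_m2(sigma_laser, sigma_e, delta_m2):
    # naive M^2 systematic: sigma_laser scales with M^2
    return (sigma_laser / sigma_e)**2 * delta_m2

lam, M2, fnum = 532e-9, 1.3, 1.0
sigma_laser = M2 * lam * fnum / np.pi        # ~0.22 um waist
sigma_e = deconvolve(1.05e-6, 0.1e-6, sigma_laser)
print(sigma_e, e_m2(sigma_laser, sigma_e, 0.01))
```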
In order to ensure sufficient clearance from the beam halo, the minimum focal length is taken here to be F_{min} = 15 mm. Imposing the requirement of being able to scan \pm 7\sigma_m, the scanning system must be able to cover an angular range

\theta_{scan} \geq \frac{7\sigma_m}{F}.

Imposing this condition on the most challenging small spot-size of \sigma_{ey} = 1 µm, \sigma \approx 1 µm (assuming M^2 = 1.3), and so \sigma_m \approx 1.4 µm, gives \theta_{scan} \geq 0.7 mrad. The maximum scanning rate will depend on the scan range required, so, with this in mind, the scan range should be kept as small as practical. In the following the condition \theta_{scan} = 1 mrad is therefore adopted. For larger spot-sizes and for the horizontal (or skew-angle) scans of the electron bunch, the suitable f-number is determined primarily by the angular scan range of the final focus lens plus scanning system. The laser optics for these dimensions will probably use diameter D = 5 cm optics (or similar). In this case, assuming again a scan range of 7\sigma_m, the practical f-number is given by

f_\# = \frac{7\sigma_m}{\theta_{scan}\, D}. \qquad (36)

The errors obtained from a laser-wire scan using an f_\# given by Eq. 36 for the horizontal dimension \sigma_{ex} are given in Tab. XI. In practice, for very large scan ranges, it may be preferable to use a stepping-motor system to move the final focus lens as opposed to scanning using optical ray deflection, which would enable smaller f-numbers to be employed. In that case, the scan would have to be very slow compared to the machine repetition rate. Normalisation Fluctuations The error on the normalisation factor \xi is given by

\left( \frac{\delta\xi}{\xi} \right)^2 = \left( \frac{\delta P}{P} \right)^2 + \left( \frac{\delta N_e}{N_e} \right)^2 + E_t^2, \qquad (37)

where \delta P is the resolution of the measured laser power, \delta N_e is the bunch charge measurement resolution, and the final term E_t is the contribution from time jitter (or phase error) between the laser and electron beams. The laser power can be measured bunch-by-bunch by a fast photo-diode, presumably to a level of order 10^{-2}, and the bunch charge to a similar level by using dedicated current monitors or by combining measurements from a set of BPMs. The contribution from time jitter is now addressed. As indicated in Tab. I, the ILC bunch train consists of N_{train} = 2625 bunches with repetition frequency f_{rep} = 5 Hz. Each bunch has length L_b = 300 µm, with corresponding bunch Gaussian time width \sigma_{tb} = L_b/c \approx 1 ps. If T is the laser pulse length and \tau is the RMS time jitter between the laser and electron pulses, then the RMS contribution of this temporal fluctuation to the relative value of the overlap integral of Gaussian laser and electron profiles is given by Eq. 38, which falls rapidly as the laser pulse is made long compared to the jitter; Eq. 39 evaluates it for typical ILC values. The number of Compton photons (Eq. 18) produced by each laser shot is proportional to the instantaneous value of \xi, and so any shot-by-shot \xi-fluctuations will contribute to the error on the fit to the raw LW scan (Eq. 40). Performing the fit with this additional error factor enables its contribution to the total error (Eq. 31) to be determined according to

E_\xi = a_\xi\, \frac{\delta\xi}{\xi}, \qquad (41)

where values of the coefficient a_\xi are given in Tab. IV for a range of electron-bunch vertical spot-sizes and aspect ratios \sigma_{ex}/\sigma_{ey}. Combining all the terms, the normalisation error given by Eq. 41 and Eq. 37 becomes

E_\xi = a_\xi \sqrt{ \left( \frac{\delta P}{P} \right)^2 + \left( \frac{\delta N_e}{N_e} \right)^2 + E_t^2 }. \qquad (42)
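The pieces of Eqs. 37-42 can be sketched numerically; the Gaussian time-overlap model used for the jitter term below is our assumption standing in for Eq. 38, and all numbers are illustrative.

```python
import numpy as np

# Normalisation-error sketch (Eqs. 37-42).  The time-jitter term assumes a
# Gaussian laser pulse of RMS length T and a bunch of RMS time width
# sigma_tb, for which a timing offset tau rescales the temporal overlap;
# this specific form is an assumption, not the paper's Eq. 38.

def xi_error(dP_P, dNe_Ne, tau, T, sigma_tb):
    # temporal overlap ~ exp(-tau^2 / (2 (T^2 + sigma_tb^2))); RMS relative
    # fluctuation for small tau:
    jitter = tau**2 / (2.0 * (T**2 + sigma_tb**2))
    return np.sqrt(dP_P**2 + dNe_Ne**2 + jitter**2)

sigma_tb = 300e-6 / 3e8          # ~1 ps bunch time width (L_b = 300 um)
print(xi_error(0.01, 0.01, tau=2e-12, T=10e-12, sigma_tb=sigma_tb))
```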
Alternative Scanning Mode An alternative laser-wire scanning mode can be considered where the laser is kept fixed in space relative to the centre of an accurate BPM. The relative offset \Delta_y between the laser and the electron bunch can then be measured by the BPM on a bunch-by-bunch basis to an accuracy given by the BPM resolution \sigma_{BPM}. If the bunch charge N_e and laser power P are also measured on a bunch-by-bunch basis, then the factor \xi defined in Eq. 20 is known to an accuracy given by Eq. 37. If it is assumed that the electron transverse charge distribution is a pure Gaussian, then Eq. A2 can be inverted bunch-by-bunch. Given that \Delta_y is generated by the bunch jitter with RMS value \alpha_J \sigma_{ey}, and assuming \sigma_{ey} \approx \sigma \approx \sigma_m/\sqrt{2} \approx 1 µm and \alpha_J \approx 0.25, typical values of N_{det} (Eq. 19) are of order 332 (Eq. 43), giving a statistical error on each measured value of N of order 5%. In addition there will be a contribution to the error from \xi due to the measurements of bunch charge and laser power and from the laser trigger (phase) jitter. In the following, the total error on \xi is taken to be 2%. The expected measurement error on the transverse size \sigma_{ey} of the electron bunch was then evaluated for a nominal electron spot-size of \sigma_{ey} = 1 µm by \sigma_{ex} = 10 µm by inverting the full overlap integral of Eq. A2. The resulting percentage error on a single-shot measurement of \sigma_{ey} is presented in Fig. 16 as a function of (a) BPM resolution and (b) RMS beam jitter \alpha_J. From these plots it can be deduced that the bunch-by-bunch error on \sigma_{ey} is of order 15% for the chosen realistic parameters. Making such measurements over a whole train would then yield an error on the average spot-size of the train of 0.15/\sqrt{2625} \approx 2.9 \times 10^{-3}. D. Summary of Laser-wire Requirements A LW system for the ILC BDS will involve many sophisticated elements, including high power mode-locked lasers, high quality laser final-focus optics, and integrated BPM systems. In this section, the desired performance specifications of the key sub-systems are discussed and the resulting errors on the measured electron transverse spot-size are estimated. Laser Requirements The above analysis has discussed various laser requirements that must be met if fast intra-train emittance measurements are to be performed at the ILC. These requirements are gathered here to provide a benchmark for R&D requirements [31] and to act as a base-line for subsequent discussion of the other laser-wire component specifications. The laser supplying the light to the laser-wire IP will need to match the ILC bunch structure, providing laser pulses each of instantaneous power P \approx 10 MW with sufficient pulse-length to overlap fully with the electron bunch so as to minimise any synchronisation issues. The pulse pattern and synchronisation can be obtained by mode-locking a master laser oscillator to a subharmonic of the machine RF and then by pulse-picking for high-power amplification. Assuming that only the required pulses are amplified, the average power of the laser will be dominated by those amplified pulses alone, and so can be estimated as

P_{av} \approx P\, T\, N_{pulse}\, f_{rep}, \qquad (44)

where f_{rep} is the rep-rate of the machine (Tab. I), T is the laser pulse length and N_{pulse} is the number of amplified pulses per train. If no pulse-picking for the high-power pulses were applied, then P_{av} would be 108 W, assuming the ILC nominal bunch spacing of 369 ns (Tab. I), which would make the laser very much more challenging. An extra factor of 2 has been included in the laser power requirements for P, because at the ILC the light transport between the laser and the IP may be as long as several hundred metres and so will involve substantial power losses en route.

TAB. V: Requirements on the laser system for intra-train laser-wire scans at the ILC, with reference to the equation that sets each goal value. Note, an extra factor of 2 has been applied to allow for a 50% loss of power due to light transport between the laser and the laser-wire IP, which may involve distances of several hundred metres in the ILC.
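Returning briefly to the fixed-laser alternative scanning mode: a sketch of the bunch-by-bunch inversion, using the infinite-Rayleigh-range Gaussian overlap as a stand-in for the full Eq. A2; the amplitude and sizes are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

# Fixed-laser scanning mode: with the offset dy known from a BPM, invert
# the Gaussian overlap for sigma_ey bunch by bunch.  The simple form below
# (infinite Rayleigh range, sigma_m^2 = sigma_ey^2 + sigma_laser^2) is an
# assumption standing in for the full Eq. A2 inversion.

def n_det(sigma_ey, dy, sigma_laser, amp):
    sm2 = sigma_ey**2 + sigma_laser**2
    return amp/np.sqrt(2*np.pi*sm2) * np.exp(-dy**2/(2.0*sm2))

def invert(n_meas, dy, sigma_laser, amp):
    f = lambda s: n_det(s, dy, sigma_laser, amp) - n_meas
    return brentq(f, 0.05e-6, 20e-6)   # n_det is monotonic in sigma_ey here

sigma_laser = 1.0e-6
amp = 332 * np.sqrt(2*np.pi*2e-12)                       # ~332 counts on axis
n_obs = n_det(1.0e-6, 0.25e-6, sigma_laser, amp)         # noiseless "data"
print(invert(n_obs, 0.25e-6, sigma_laser, amp))          # recovers ~1e-6 m
```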
Error Summary The most challenging laser-wire measurements at the ILC occur in the BDS, and some representative values of the bunch dimensions of interest are given in Tab. VI. For illustration in this table, the precisions obtainable on the matrix element \langle xy \rangle are listed, assuming that each dimension \sigma_x, \sigma_y and \sigma_u can be measured to a nominal 1% (which means \sigma_x^2 and \sigma_y^2 are measured to 2%). Some examples of vertical emittance reconstructions for 4d optics using these assumptions are presented in Figs. 17 and 18.

FIG. 18: Distribution of reconstructed vertical emittance with 1% (a) and 5% (b) random errors on the beam size measurement for a 4d diagnostics section, assuming 50% random mismatch of the initial optical functions (statistics corresponding to the train length). The true emittance is 0.079 µm·µrad.

In order to summarise the ILC laser-wire requirements and to describe the various contributions to the measurement errors, a beam with representative transverse dimensions \sigma_{ex} = 10 µm \times \sigma_{ey} = 1 µm is now used as a specific example to illustrate the key points. This beam is somewhat more challenging than that expected at the nominal ILC, but points to where additional R&D may be required if specific operating conditions give rise to smaller spot-sizes. The following laser, optical and BPM specifications are non-trivial and are themselves subjects of R&D; however, it is probable that suitable solutions to them will be found. The assumptions are: • the laser requirements of Tab. V can be met, • the bunch-by-bunch charge and the laser instantaneous power can both be measured to 10^{-2}, • the laser profile at the IP (characterised in this paper by a simple M^2 value) is known to the equivalent of \delta_{M^2} = 0.01, • the BPM resolution is \sigma_{BPM} = 20 nm, • f1.5 optics can be made to work, together with the scanning system. All the errors discussed above are summarised in Tab. VII for the most challenging nominal transverse dimensions of interest to the ILC, namely \sigma_{ey} = 1 µm, \sigma_{ex} = 10 µm, with corresponding skew scans of dimensions \sigma_u = 1.41 µm and \sigma_v = 9.95 µm.

TAB. VII: Error terms for \sigma_{ey} for an electron bunch whose transverse dimensions are \sigma_{ey} = 1 µm and \sigma_{ex} = 10 µm, giving \sigma_u = 1.41 µm and \sigma_v = 9.95 µm. The values were obtained assuming the performance goals of Tab. V, laser \delta_{M^2} = 0.01, \sigma_{BPM} = 20 nm, \alpha_J = 0.25. The electron bunch charge and laser power are assumed each to be known to 1% and the pointing jitter to 10%. The measurement statistical errors are for a full train (i.e. N_{scan} \approx 140). No subtraction of residual dispersion has been made for these measurements; instead they are input into the global fit to extract the emittance and dispersion terms together.

Note that in order to reduce the error from the laser spot-size uncertainty for the 1 µm vertical spot-size, UV laser light (\lambda = 266 nm) has been used. In this case, additional laser power (perhaps by a factor of approximately 1.5-2) may be required to compensate for the inefficiency of the second laser frequency doubling. Most of this factor has already been included in the extra contingency factor of 2 discussed in Sec. III D 1. IV. CONCLUSION The measurement of emittance at the ILC will be essential to maintain the high luminosity performance of the machine. A fast non-invasive scheme to do this has been presented, which involves advanced laser-wire systems and dedicated machine optics.
It was shown that the efficiency of emittance determination falls dramatically if the transverse electron-bunch measurements provide an accuracy worse than about 30%. The accuracy of the resulting emittance measurement is directly related to that of the transverse electron-bunch measurements; it is therefore preferable to achieve accurate profile measurements of order 1-5%. A set of methods for emittance reconstruction was presented to improve the reconstruction efficiency in the event of degraded precision, and a general scheme for the optics of the emittance measurement section was discussed. The most challenging vertical spot-sizes in the ILC BDS will eventually be of order 1 µm. The required LW performance was discussed in detail, and a plausible route to obtaining a transverse spot-size measurement with a relative precision of order 1.3% using green laser light was presented, together with prospects of achieving modestly improved measurements using ultra-violet light. It can be concluded that, while percent-scale measurements on a train-by-train basis seem possible, many sub-systems need significant improvements over the current state-of-the-art; an ongoing programme of R&D is currently addressing these challenges. APPENDIX A: LASER-WIRE OVERLAP INTEGRALS In the following, the electron beam is assumed to have a simple Gaussian charge profile, with \sigma_{ex} and \sigma_{ey} being the horizontal and vertical electron spot-sizes respectively. The bunch length \sigma_z is assumed long compared to the laser spot-size, so the overlap integral in z integrates out trivially. Scans Using the Laser TM00 Mode In this section, the full overlap integrals of the TM_{00} and TM_{01} laser modes with a Gaussian electron bunch are presented, building on previous studies [32] by including the full effects of the Rayleigh range and a detailed analysis of the laser final focus optics. The results of the numerical integrals for a range of parameters of interest to ILC laser-wires are presented in Tables VIII-X. For the laser TM_{00} mode, after performing the z-integral, the remaining transverse overlap integral is a two-dimensional integral over the bunch and laser profiles (Eq. A1). Performing the y-integral gives

\Omega(\Delta_x, \Delta_y) = \frac{1}{2\pi\sigma_{ex}} \int \frac{dx}{\sigma_s(x, \Delta_x)} \exp\!\left( -\frac{(x-\Delta_x)^2}{2\sigma_{ex}^2} \right) \exp\!\left( -\frac{\Delta_y^2}{2\sigma_s^2(x, \Delta_x)} \right), \qquad (A2)

where \sigma_s^2(x, \Delta_x) = \sigma_{ey}^2 + \sigma^2(x - \Delta_x). \qquad (A3) In the approximation of an infinite Rayleigh range the equations reduce to the more familiar form, with [5]

\sigma_m = \sqrt{\sigma_e^2 + \sigma^2} \qquad (A4)

and

\Omega(\Delta_y) = \frac{1}{\sqrt{2\pi}\, \sigma_m} \exp\!\left( -\frac{\Delta_y^2}{2\sigma_m^2} \right). \qquad (A5)

a. Scans Using the Laser TM01 Mode Using the same conventions as in Sec. III A 2, the light intensity of the laser TM_{01} mode has the form

I(x, y, z) = \frac{I_0}{2\pi\sigma^2(x)}\, \frac{y^2}{\sigma^2(x)}\, \exp\!\left( -\frac{y^2 + z^2}{2\sigma^2(x)} \right), \qquad (A6)

up to normalisation. The necessary overlap integral is now the TM_{01} analogue of Eq. A1 (Eq. A7), where, as before, \Delta_x and \Delta_y are the horizontal and vertical relative displacements of the electron and laser beams. Performing the y-integral gives an expression of the same form as Eq. A2, with the Gaussian in \Delta_y multiplied by a TM_{01} weight involving the ratio \sigma_{ey}/\sigma_s(x, \Delta_x), where \sigma_s(x, \Delta_x) is as defined in Eq. A3. For the TM_{01} mode, the condition that 99% of the light energy is contained within the lens aperture is correspondingly modified; the resulting factor of 1.15 (Eq. A11) is included in the TM_{01} calculations.

TAB. VIII: Optimal f-numbers (for E_{stat}) for the measurement of \sigma_{ey} and the corresponding statistical precision for \sigma_{ey} = 1 µm, \lambda = 532 nm and M^2 = 1.3. The minimum practical f-number is taken as 1.0, even where a better statistical precision could in principle be obtained with smaller f-numbers. The numbers in brackets are the corresponding statistical errors using f1.5 optics; the systematic errors due to \delta_{M^2} = 0.01, 0.05, 0.10 are then 1.09%, 5.68% and 12% respectively and do not depend significantly on the electron bunch aspect ratio.
TAB. IX: Optimal f-numbers (for E_{stat}) for the measurement of \sigma_{ey} and the corresponding statistical precision for \sigma_{ey} = 1 µm, \lambda = 355 nm and M^2 = 1.3. The minimum practical f-number is taken as 1.0, even where a better statistical precision could in principle be obtained with smaller f-numbers. The numbers in brackets are the corresponding statistical errors using f1.5 optics; the systematic errors due to \delta_{M^2} = 0.01, 0.05, 0.10 are then 0.48%, 2.48% and 5.16% respectively and do not depend significantly on the electron bunch aspect ratio.

TAB. X: Optimal f-numbers (for E_{stat}) for the measurement of \sigma_{ey} and the corresponding statistical precision for \sigma_{ey} = 1 µm, \lambda = 266 nm and M^2 = 1.3. The minimum practical f-number is taken as 1.0, even where a better statistical precision could in principle be obtained with smaller f-numbers. The numbers in brackets are the corresponding statistical errors using f1.5 optics; the systematic errors due to \delta_{M^2} = 0.01, 0.05, 0.10 are then 0.27%, 1.39% and 2.87% respectively and do not depend significantly on the electron bunch aspect ratio.
Open data In the EU, the public sector is one of the most data-intensive sectors. Public sector bodies produce, collect and pay for vast amounts of data, known as public sector information (PSI), or government data. Examples include geographical information, statistics, weather data, data from publicly funded research projects, and digitised books from libraries. 'Open' public data refers to PSI that can be readily and widely accessed and reused, sometimes under non-restrictive conditions. The EU open data market is a key building block of the overall EU data economy. According to the study supporting the impact assessment carried out to provide input to the review of the PSI Directive, the total direct economic value of PSI is expected to increase from a baseline of €52 billion in 2018 for the EU27 + UK to €194 billion in 2030. Open data policy is linked with open research data policy, since both address data that is publicly funded or results from public funding; this data should therefore be openly accessible and reusable. Allowing public sector data to be reused for other purposes, including commercial ones, can: stimulate economic growth and spur innovation, as public data has significant potential for reuse in new products and services; help address societal challenges through the development of innovative solutions, such as in healthcare or transport; enhance evidence-based policymaking and increase efficiency in public administrations; become a critical asset for the development of new technologies, such as artificial intelligence (AI), which require the processing of vast amounts of high-quality data; and foster the participation of citizens in political and social life and increase the transparency of government. In 2003, the European Commission set up a legal framework to allow the reuse of public sector information through the 'PSI Directive' (Directive 2003/98/EC), subsequently revised by Directive 2013/37/EU. This Directive is built around two pillars of the internal market: transparency and fair competition. It focuses on the economic aspects of the reuse of information. The European Commission performed a review of the PSI Directive on the basis of a public online consultation, fulfilling the periodic review obligation foreseen in the Directive. Building on the results of this consultation, together with an extensive evaluation of the Directive and an impact assessment, the European Commission proposed a revision of the PSI Directive as part of a package of measures aiming to facilitate the creation of a common data space in the EU. The new Directive introduces the concept of high-value datasets, defined as data whose reuse is associated with important benefits for society and the economy. High-value datasets are subject to a separate set of rules ensuring their availability free of charge, in machine-readable formats, provided via application programming interfaces (APIs) and, where relevant, as bulk downloads. The thematic scope of high-value datasets is provided in an annex to the Directive. Within these limits, and with the assistance of a committee composed of representatives of EU countries, the Commission will adopt a list of specific high-value datasets by way of an implementing act, following an impact assessment. In addition to the Open Data Directive, a number of non-legislative measures support the opening up of public sector information, such as the Public Sector Information expert group (PSI Group).
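To illustrate what "machine-readable, via APIs" means in practice, here is a minimal sketch that queries an open data portal's search API; the endpoint, parameters and response layout are assumptions for illustration only and should be checked against the portal's own API documentation.

```python
import requests

# Illustrative only: query a portal search API for open datasets and list
# their machine-readable distribution formats.  The endpoint and the JSON
# layout below are assumptions, not a documented data.europa.eu contract.

SEARCH_API = "https://data.europa.eu/api/hub/search/search"   # assumed

resp = requests.get(SEARCH_API, params={"q": "air quality", "limit": 5},
                    timeout=30)
resp.raise_for_status()
for ds in resp.json().get("result", {}).get("results", []):
    title = ds.get("title", {}).get("en", "<untitled>")
    fmts = {d.get("format", {}).get("id", "?")
            for d in ds.get("distributions", [])}
    print(title, "-", ", ".join(sorted(fmts)) or "no distributions listed")
```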
Since 2015, the European Commission has funded the European data portal through the Connecting Europe Facility. The data portal is a pan-European repository of public sector information open for reuse in the EU. The portal also features a training centre on how to reuse open data and a database of success stories from European and international re-users. PRESS RELEASE | 16 July 2021 New rules on open data and reuse of public sector information start to apply Tomorrow, 17 July 2021, will mark the deadline for Member States to transpose the revised Directive on open data and reuse of public sector information into national law. The updated rules will stimulate the development of innovative solutions such as mobility apps, increase transparency by opening access to publicly funded research data, and support new technologies, including artificial intelligence. PRESS RELEASE | 03 June 2021 Commission seeks views on data sharing The European Commission has launched an open public consultation on the announced Data Act, aiming to create a fair data economy by ensuring access to and use of data. The consultation seeks to gather views from citizens, businesses, online platforms, academics, civil society, administrations and all interested parties. It is open until 3 September. PRESS RELEASE | 23 January 2019 EU negotiators agree on new rules for sharing of public sector data Negotiators from the European Parliament, the Council of the EU and the Commission have reached an agreement on a revised directive that will facilitate the availability and re-use of public sector data.
Target Inference on Evaluation of Angle Oriented Cluster In general, data in any field contain unnecessary records. Several algorithms exist to remove unwanted data, but none of them fully settles the problem, and researchers are still working to complete this task. For instance, face recognition systems have suffered from the in-depth pose verification problem over the last few decades. To solve this problem we use an angle orientation technique: various angles of input images (the same person facing different directions) are compared with the database image. Removing needless data, i.e. unsupervised images, is the best way to recognise a target inference, and with this idea we attempt a small approach for this kind of application. In this paper, we introduce a ternary cluster relation on angle-oriented images. The various angles of images are formed into three nested clusters in the clockwise and/or anti-clockwise directions. We use a multivariate analysis technique to improve the quality of the clusters through cluster evaluation, together with the statistical approaches of an outlier-tackling detection methodology and the bootstrapping technique, to find the target inference. Experimental results are produced on angle-oriented cluster images to increase the performance, using an analysis of variance test. Introduction In any engineering approach, verification and validation techniques are important before introducing any reliable detection system. At present, in many reliable systems the loaded object may be duplicated, which is a common problem in many applications such as recognition systems. In 2011, Jagan Mohan et al. proposed a novel approach based on the angle orientation method in Increasing the Reliability of Angle Oriented Recognition System [7,8]. In this system, various directions (i.e., clockwise and anti-clockwise) of the angles of input images (different directions of the same person) are compared with the database pose of the same person. The difficulty of this system is that comparing the database pose with the training set images involves many iterations of comparisons between images at various angles. To increase the performance of the system we need to apply quality metrics to reduce the iterations and comparisons. To solve this problem we have to use cluster evaluation techniques (image and cluster quality metrics) and optimize the iterations to recognize the target pose (redundancy). Selecting the most suitable target pose by comparison with the database pose requires preprocessing techniques. The evaluation of image quality is necessary in image coding for minimizing misrepresentation. There are two approaches, namely subjective and objective evaluation. Subjective evaluation is divided into two categories: rating scale and comparison methods. In the rating scale method, the subject views a sequence of images under comfortable conditions and assigns each image to one of several categories. This subjective evaluation method may be used with an impairment scale. In the comparison method, the scale is based on a comparison with a set of images and is used as a comparison scale. Several authors have devised subjective evaluation tests, which are complex and time consuming; there is no constructive method for performance improvement, and it is difficult to use them as part of the design process.
So, we have attempted to find an objective measure to evaluate the quality of the processed images. Earlier works along these lines have discussed image quality; for example, Ismail et al. proposed the idea of image quality metrics in Statistical Evaluation of Image Quality Measures, Journal of Electronic Imaging, 2002 [5]. This paper is organized as follows: Section I deals with the ternary approach for angle orientation. The evaluation of clusters is presented in Section II. In Section III, we deal with a taxonomy of multivariate analysis for poses of images. Section IV discusses the map-reduce algorithm. Experimental results are shown in Section V, and the conclusion is in Section VI. Ternary Approach In mathematics, a ternary approach deals with three parts and the relationships between them. The three parts may also be referred to as three dimensions, and the relation is a finitary relation. A cluster is a collection of data points that are similar to one another within the same cluster and/or dissimilar to the objects in other clusters. Clusters are formed based on the data; no predefined class (cluster) is available, so unsupervised images must be formed into clusters. There are three major methods in clustering, namely partitioning, hierarchical and model-based clustering. 1. Partitioning clustering: form arbitrary partitions and then iteratively refine them by some condition. 2. Hierarchical clustering: create an ordered decomposition of the set of data (or objects) using some condition. 3. Model-based clustering: a model is hypothesized for each of the clusters, and the aim is to find the best fit of the data to the model; for example, a fuzzy decision tree. Building on these, we introduce a novel method, namely the ternary cluster, based on the idea of a ternary relation; it is an amalgamation of the properties of the above three clustering methods applied to angle orientation clusters, as shown in Figure A (a code sketch of the assignment follows below). In the ternary cluster, the entire training image set is denoted as the root node I, and it is divided into two groups of broadly similar objects called clusters I1 and I2 (using partitioning clustering), which are internal nodes. Jagan Mohan et al. discussed the classification of clustered database images using a decision tree in Efficient K-Means Cluster Reliability on Ternary Face Recognition using Angle Oriented Approach, 2012 [9]. The groups of similar image objects rotated in the clockwise direction belong to the internal node, i.e. cluster I1; similarly, I2 contains the groups of similar image objects rotated in the anti-clockwise direction. Again, cluster I1 is grouped into three terminal nodes (segments), I11, I12 and I13, called nested clusters (using hierarchical clustering). Likewise, I21, I22 and I23 are the nested clusters forming the three terminal nodes of cluster I2. If θ is the angle of rotation in each cluster, the nested cluster I11 consists of images with angles 0° to 30°, I12 comprises the images between 31° and 60°, and I13 holds 61° to 90°. The same type of segmentation is followed in the internal node I2: the images are segmented at intervals of 30°, starting at 0° and ending at 90°, in the anti-clockwise direction (using fuzzy logic). Images are clustered based on their visual content. Now we have to form the ternary cluster with unsupervised images. Fuzzy sets and their corresponding membership functions have been defined by domain experts. Each of the fuzzy sets can be treated as a [0, 1]-valued attribute, called a fuzzy attribute.
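Concretely, the crisp angle-band routing described above might look like the following sketch; the Image record is a stand-in for whatever feature object the recognition system actually uses.

```python
from dataclasses import dataclass

# Ternary cluster assignment: route by rotation direction (clockwise /
# anti-clockwise), then into one of three nested clusters by 30-degree bands.

@dataclass
class Image:
    name: str
    angle: float        # degrees, 0..90
    clockwise: bool

def assign(img: Image) -> str:
    band = min(int(img.angle // 30), 2)      # 0-30, 31-60, 61-90 -> 0, 1, 2
    parent = "I1" if img.clockwise else "I2"
    return f"{parent}{band + 1}"             # e.g. I11, I12, I13, I21, ...

faces = [Image("a", 12, True), Image("b", 45, True), Image("c", 75, False)]
for f in faces:
    print(f.name, "->", assign(f))           # a -> I11, b -> I12, c -> I23
```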
In this respect, we suggest that the variable θ is classified into three fuzzy sets, each represented by a membership function over θ. The θ values vary between 0° and 90°, and by construction the fuzzy cluster membership values always lie between 0 and 1. Evaluation of Clusters Evaluation of clusters improves their quality with respect to the clustering objective. According to this objective, instances in the same cluster must be closer to each other than to instances in other clusters. The most challenging issue in clustering is to check whether a cluster is good or not. In 1964, Bonner suggested the notion of evaluation criteria. There are two types of evaluation criteria, namely internal and external [4,10]. Internal Criteria of a Cluster By using similarity measures, we can measure the internal quality of a cluster. Here we use the Mean Square Error [5] and the Sum of Squared Errors [6]. These are used to check whether the various angled pose images in a cluster fit one input image or not. Mean Square Error:

MSE = \frac{1}{n} \sum_{i=1}^{n} (x_i - \hat{x}_i)^2

Sum of Squared Errors (SSE):

SSE = \sum_{j} \sum_{x \in C_j} \| x - \mu_j \|^2,

where \mu_j is the centroid of cluster C_j. External Criteria of a Cluster This is used to examine whether the formed clusters match the given instances or not. Here we use one of the simplest and most transparent measures, called the purity measure, to test whether the structure of the cluster matches the given input pose or not. Bootstrap Principle After examining the cluster quality, it is time to take a cluster for finding the target inference, so here we use the statistical bootstrap principle suggested by Efron and Tibshirani in An Introduction to the Bootstrap (Chapman & Hall, 1993) and by Davison and Hinkley (1997) [1,3]. It is useful when the size of the population is high. A bootstrap data set is one created by randomly selecting n points from the training set D, with replacement. We use this technique to take a sample data set consisting of images at various angles, and by using this principle we randomly pick any one of the clusters to get the target inference. Taxonomy of Multivariate Analysis To understand and find a target inference, it is very important to examine more than one face simultaneously from a set of poses of an image, so more than one variable must be considered, i.e., more than one image from the various angled images. Multivariate analysis is a graphical tool mainly used to find patterns and relationships between several variables simultaneously. Kapoor et al. have discussed the multivariate analysis technique, which allows the analysis of more than two variables at once [14]. Multivariate Normal Distribution The multivariate normal distribution is a probability distribution used in multivariate analysis. A random n-vector X with mean µ and variance-covariance matrix Σ is denoted X ∈ N(µ, Σ), and its density is given by

f(x) = (2\pi)^{-n/2}\, |\Sigma|^{-1/2} \exp\!\left( -\tfrac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right).

The following is a very special property of the multivariate normal distribution, which is used to test the independence of the random variables. Independence Let X be a normal random vector. The components are independent iff they are uncorrelated; i.e., if Cov(X_i, X_j) = 0 then the two components X_i and X_j are independent. In this paper we use this extraordinary property in the following two cases (a code sketch follows after the list): 1. We compare all angles of images and check whether they all belong to one input image or not. In this case, if they are not uncorrelated, i.e. Cov(X_i, X_j) ≠ 0 with X_i, X_j ∈ C (X_i, X_j being pose images), then they are not independent, there is a relation between these angles or poses, and all poses belong to one particular image. 2. Having succeeded in step 1, among all the angled images we test which angle or pose matches the input image. In this case we test the independence property between the database image and the angled or posed images, i.e. whether Cov(X_i, X_j) ≠ 0; if any one of the angles or poses is found not to be independent of the input image, it is the target inference for the input image.
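A compact sketch of the pipeline just described: the internal SSE check, a bootstrap pick, and the covariance-based independence test of cases 1 and 2. Random feature vectors and the threshold are illustrative stand-ins for real pose images.

```python
import numpy as np

# Pipeline sketch: SSE quality check, bootstrap resampling, and the
# covariance test behind cases 1 and 2 above.

rng = np.random.default_rng(0)

def sse(cluster):
    """Sum of squared errors of a cluster about its centroid."""
    c = np.asarray(cluster)
    return float(((c - c.mean(axis=0))**2).sum())

def bootstrap_sample(data):
    """n draws from the training set, with replacement."""
    return [data[i] for i in rng.integers(0, len(data), size=len(data))]

def related(x, y, tol=0.1):
    """Case 1/2 test: |Cov(X_i, X_j)| above tol -> poses are related."""
    return abs(np.cov(x, y)[0, 1]) > tol

db = rng.normal(size=4096)                       # database pose features
cluster = [db + rng.normal(scale=0.2, size=4096) for _ in range(5)]
cluster.append(rng.normal(size=4096))            # one unrelated image

print("cluster SSE:", round(sse(cluster), 1))
picked = bootstrap_sample(cluster)               # bootstrap-picked sample
matches = [i for i, p in enumerate(cluster) if related(db, p)]
print("poses related to the database image:", matches)   # excludes the last
```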
That is, Cov(X_i, X_j) ≠ 0 with X_i, X_j ∈ C (X_i, X_j are taken from poses of images), which means that they are not independent and there is a relation between these angles or poses. 2. After step 1 succeeds, among all angles of images we have to test which angle or pose matches the input image. In this case we test the independence property between the database image and the angles or poses of images, i.e., Cov(X_i, X_j) ≠ 0. Here, if we find that any one of the angles or poses is not independent of the input image, it is the target inference for the input image. After step 2, there is a possibility of getting more than one image as a target inference, so here we have to use tackle-outlier detection. This is explained in the following section. Tackle Outlier Detection in Multivariate Analysis In pose orientation analysis, an object can be viewed from many directions. To analyze such data and find the target object among the various recorded or sampled angles of image data, one of the steps is obtaining the target object by observation or detection. It is very difficult to find the best output pose among various images, so the image which most nearly matches the database pose is taken, using the tackle outlier. The tackle outlier, introduced by Michael Thompson, 2009 [15], is mainly useful when there is more than one result: it chooses the best among them, the one that matches the target object. Map Reduction Jagan Mohan et al. proposed this method in "Similarity based query optimization on map reduce using Euler angle oriented approach", 2012 [2]. It uses the two interfaces Map and Reduce, which are used for processing and generating clusters. In the map phase, we processed the clusters in order to analyze those consisting of various angles or poses of images. Mapping is used to load the images and transform them into key-value pairs in order to store all these angles of images [11,12,13]. In Sections II and III above we mentioned that there is a possibility of getting more than one target inference, so here the reduce phase is needed. After mapping is finished, the key-value files are shuffled, and new key-value pairs are obtained using the Tanimoto distance, an extension of the Jaccard method. This is shown in the figure below. Experimental Results For angle-orientation images we performed an ANOVA test for the quality of the cluster, the independence test of multivariate analysis for finding the target inference, and finally a software reliability test to increase the performance on angle-orientation images. These are shown in the sections below. Analysis of Variance ANOVA is a statistical tool used for measuring quality. It is used to test whether a hypothesis is acceptable or not. Here the various angles of images are taken as the hypotheses, and we check whether all the poses of images are attributable to one object or not. Testing the Target Inference The following is the result after carrying out all of the above procedures from Sections III and IV. Software Reliability Test on Angle Oriented Images A cluster reliability test is conducted on angle-oriented images, i.e., anti-clockwise and clockwise images. The probability of failure is calculated for angle-oriented images by testing a random sample of all available input images. Conclusion This paper mainly deals with finding the target inference among various angles of poses. Here we grouped the images into clusters based on a ternary relation, introduced the ternary cluster, and then evaluated the clusters. We have various poses of images.
So we need to examine more than one image, for which we used multivariate analysis. For finding the target inference we used the bootstrap principle, which chooses the cluster, and tested the independence between the various angles of images using the independence property of the multivariate normal distribution. Where more than one object matched the input image, we used the tackle outlier and the map-reduction algorithm. Experimental results are produced with the help of ANOVA.
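To make the pipeline concrete, the following self-contained Python sketch pulls together the quantitative ingredients used above: MSE/SSE as internal cluster criteria, bootstrap resampling, a covariance-based stand-in for the independence test, and the Tanimoto similarity from the reduce phase. All names, toy data and thresholds are our own illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sse(cluster, centroid):
    """Sum of squared errors of the cluster members about the centroid."""
    cluster = np.asarray(cluster, dtype=float)
    return float(np.sum((cluster - centroid) ** 2))

def mse(cluster, centroid):
    """Mean squared error: the SSE averaged over the cluster members."""
    return sse(cluster, centroid) / len(cluster)

def bootstrap_sample(data, seed=None):
    """Draw n points from the training set D with replacement."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    return data[rng.integers(0, len(data), size=len(data))]

def tanimoto(u, v):
    """Tanimoto similarity, an extension of Jaccard to real vectors:
    T(u, v) = u.v / (|u|^2 + |v|^2 - u.v)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    dot = u @ v
    return dot / (u @ u + v @ v - dot)

def related_to_input(pose, reference, threshold=1e-8):
    """Crude stand-in for the Cov(X_i, X_j) != 0 test: declare a pose
    related to the input image when the sample covariance of their
    feature vectors is non-negligible."""
    cov = np.cov(pose, reference)[0, 1]
    return abs(cov) > threshold

poses = np.array([[0.10, 0.20], [0.15, 0.25], [0.90, 0.80]])  # toy features
centroid = poses.mean(axis=0)
print(mse(poses, centroid), sse(poses, centroid))
print(bootstrap_sample(poses, seed=0))
print(tanimoto([1, 1, 0], [1, 0, 0]))                 # 0.5
print(related_to_input([1, 2, 3], [2.0, 4.1, 6.0]))   # True
```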
Quantum error-correcting codes and their geometries This is an expository article aiming to introduce the reader to the underlying mathematics and geometry of quantum error correction. Information stored on quantum particles is subject to noise and interference from the environment. Quantum error-correcting codes allow the negation of these effects in order to successfully restore the original quantum information. We briefly describe the necessary quantum mechanical background to be able to understand how quantum error-correction works. We go on to construct quantum codes: firstly qubit stabilizer codes, then qubit non-stabilizer codes, and finally codes with a higher local dimension. We will delve into the geometry of these codes. This allows one to deduce the parameters of the code efficiently, deduce the inequivalence between codes that have the same parameters, and presents a useful tool in deducing the feasibility of certain parameters. We also include sections on quantum maximum distance separable codes and the quantum MacWilliams identities. We have used various sources in the preparation of this article, principally Gottesman [6,7], Glynn et al [5] and Ketkar et al [12]. The most original parts of these notes are Section 4 and Section 6. Section 5 is based on Ketkar et al [12] but massaged so that it appears as a straightforward generalisation of the qubit case of Section 2. Although the main results of Section 3 are from Glynn et al [5], in a deviation from their approach we have chosen to prove these results without using the F_4 trick, which we do not consider until later in Section 5.5. The interested reader is referred to the books by Sakurai [16] and Nielsen & Chuang [13] for standard treatments of quantum mechanics and quantum information theory, to the book by Haroche & Raimond [9] for a thorough treatment of current experiments in quantum mechanics, and to the book by Aaronson [1] for further connections to mathematics, computer science, physics, and philosophy. For those uninitiated in quantum mechanics or quantum computing, we strongly recommend the delightful mnemonic essay on quantum computing by Matuschak and Nielsen at https://quantum.country/qcvc. 1 Quantum codes Introduction A qubit is a two-state or two-level quantum-mechanical system. For example, the intrinsic angular momentum (spin) of an electron is such a system. It can only take two values when measured in an arbitrary spatial direction, say by measuring the electron's deflection when passing through an inhomogeneous magnetic field. The two corresponding spin-states are commonly referred to as "spin up" and "spin down" states with respect to that direction. Another example is the polarization of light. Here the two states can be taken to be vertically and horizontally polarized light; another choice is light that is left circularly and right circularly polarized. In general, a continuum of different photon polarizations is possible. Yet only two distinct states are observed when e.g. putting beamsplitters or polarization filters in the path of a light beam. This raises the question: why are only ever two discrete values, corresponding to two discrete states, observed, if electrons and photons can take on a continuum of possible spin-directions or polarizations? The answer lies with what measurements on quantum systems reveal. It turns out that for a two-state quantum-mechanical system, any individual measurement can only ever reveal the answer to a binary question.
In other words, the measurement indicates in which of two mutually exclusive states the qubit can be found after the measurement. Thus while qubits can take on a continuum of states and a continuum of measurements can be performed, only two-valued results can ever be obtained. Thus the notion of a qubit as a quantum bit. We will not dwell on the strangeness of quantum mechanics further; the interested reader is referred to discussions of the Stern-Gerlach and double-slit experiments such as found in the books by Sakurai [16] and Haroche & Raimond [9]. In mathematical terms a qubit is represented by a unit vector in C^2. The spin up and spin down states (or any other choice of a pair of physically completely distinguishable states) are represented by an orthonormal basis |0⟩ and |1⟩. The notation |0⟩ is a shorthand for the column vector (1, 0)^t and |1⟩ stands for (0, 1)^t. The two kets |0⟩ and |1⟩ are also known as the computational basis vectors. Consider now the state |ψ⟩ = (|0⟩ + |1⟩)/√2. While |ψ⟩ ∈ C^2 represents a physically unique state, it is, upon measurement in the spin-up/spin-down direction, found in either of these two directions with equal probability. Sometimes this situation is referred to as the system being "in two states simultaneously". A more accurate description is that the system is "in superposition of spin-up and spin-down", or in other words, the system is correctly described as a linear combination of spin-up and spin-down. In general, a qubit is in a state |α⟩ = α_0|0⟩ + α_1|1⟩ with α_0, α_1 ∈ C. As usual, z̄ denotes the complex conjugate of the complex number z. When measured, the qubit is with probability ᾱ_0α_0 found in state |0⟩ ("spin-up") and with probability ᾱ_1α_1 found in state |1⟩ ("spin down"). Since the sum of these two probabilities must be one, we have that for a qubit ᾱ_0α_0 + ᾱ_1α_1 = 1. (2) The "ket" notation |α⟩ is used for a column vector, whilst the "bra" notation ⟨α| is used for a row vector whose coordinates are the complex conjugates of the coordinates of |α⟩. Thus, the "bra" ⟨α| is a linear form. The inner product or "bra-ket" on C^2 is defined as ⟨α|β⟩ = ᾱ_0β_0 + ᾱ_1β_1. The normalisation condition in Eq. (2) then reads as ⟨α|α⟩ = 1, and qubits are represented by complex vectors in C^2 of unit length. A unitary transformation is one given by a matrix U satisfying U†U = 1. Note that {|0⟩, |1⟩} is an orthonormal basis. In matrix terms, the trace is equal to the sum of the elements on the principal diagonal. The Pauli matrices, σ_0 = [[1, 0], [0, 1]], σ_x = [[0, 1], [1, 0]], σ_y = [[0, −i], [i, 0]], σ_z = [[1, 0], [0, −1]], are unitary linear transformations of C^2 which form a basis for the space of 2 × 2 matrices. In general, any error (also those which are not unitary) affecting a single qubit can be written as a linear combination of the Pauli matrices. We sometimes denote σ_0, σ_x, σ_y, σ_z simply as I, X, Y, Z respectively. Note that the Pauli matrices are both unitary and Hermitian. They are also mutually orthogonal under the Hilbert-Schmidt inner product ⟨A, B⟩ = tr(A†B). A measurement or observable is represented by a Hermitian operator. For example, the spin-up/spin-down measurement is represented by the Pauli matrix σ_z. The outcome of an individual measurement can only take two values. These correspond to the eigenvalues of σ_z, which are +1 and −1. After the measurement, the state is then found in the corresponding eigenstate: in |0⟩ if the outcome +1 was obtained, and in |1⟩ if the outcome −1 was obtained. These occur with probabilities |⟨0|α⟩|^2 and |⟨1|α⟩|^2 respectively. The above treatment can be generalised. Denote by Â an observable which is represented by a Hermitian matrix A. Let m_i and |m_i⟩ be its eigenvalues and corresponding eigenvectors.
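The following short numpy sketch illustrates the objects defined so far: the computational basis, a superposition, the Born-rule probabilities for the σ_z measurement, and the stated properties of the Pauli matrices. It is a plain illustration of the definitions above, not code from the article:

```python
import numpy as np

# Pauli matrices: a basis for the 2x2 complex matrices.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# The equal superposition |psi> = (|0> + |1>)/sqrt(2).
psi = (ket0 + ket1) / np.sqrt(2)

# Probabilities of the sigma_z measurement outcomes +1 and -1.
p_up = abs(np.vdot(ket0, psi)) ** 2     # |<0|psi>|^2
p_down = abs(np.vdot(ket1, psi)) ** 2   # |<1|psi>|^2
print(p_up, p_down)                      # 0.5 0.5

# The Pauli matrices are unitary, Hermitian and mutually orthogonal
# under the Hilbert-Schmidt inner product tr(A^dagger B).
for A in (X, Y, Z):
    assert np.allclose(A @ A.conj().T, I)        # unitary
    assert np.allclose(A, A.conj().T)            # Hermitian
assert abs(np.trace(X.conj().T @ Z)) < 1e-12     # orthogonal
```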
Measuring an observable on a quantum state |α⟩ yields the value m_i with probability p_i = |⟨α|m_i⟩|^2. The state is found in the corresponding eigenstate afterwards. The description of multiple quantum systems takes place in the tensor product space of the individual Hilbert spaces. Thus a system of n qubits is described in the n-fold tensor product space of the one-qubit spaces. One arrives at the 2^n-dimensional Hilbert space (C^2)^{⊗n} = C^2 ⊗ · · · ⊗ C^2 (n times). A density matrix is used to describe a classical probability distribution (also called a statistical mixture or statistical ensemble) over quantum states. Suppose that some source emits the quantum state |φ_i⟩ with probability p_i. One requires that p_i ≥ 0 and Σ_i p_i = 1. From the discussion in the previous section, it is clear that the measurement of an observable A must yield an expectation value of Σ_i p_i ⟨φ_i| A |φ_i⟩. By linearity, this can be rewritten as tr(Aρ), where ρ = Σ_i p_i |φ_i⟩⟨φ_i|. Indeed the operator ρ captures all there is to know about a quantum system, and ρ is known as the density matrix describing it. For a complex matrix ρ to represent a quantum state, one requires ρ = ρ†, ⟨ψ| ρ |ψ⟩ ≥ 0 for all |ψ⟩ (positive-semidefinite) and tr(ρ) = 1. Comparing with classical probability theory, this corresponds to a real-valued, non-negative, and normalized probability distribution. The density matrix formalism can indeed be seen as a generalization of classical probability theory, and quantum mechanics can be taken to be the study of the cone formed by complex positive-semidefinite matrices, and transformations thereof. This is in analogy to the probability simplex encountered in classical probability theory. Now we can state what we left out in the preceding discussion about measurements: consider the case when some eigenvalues of the measurement operator A = Σ_i m_i |m_i⟩⟨m_i| are equal, i.e. the spectrum of A is degenerate. What is the probability for obtaining outcome i and what is the post-measurement state? Let P_j be the projector onto the eigenspace with eigenvalue m_j of A. Then a measurement yields outcome m_j with probability p_j = tr(P_j ρ) and the density operator immediately after the measurement reads P_j ρ P_j / tr(P_j ρ). The time evolution of an isolated qubit is given by a unitary operator in SU(2). On a closed quantum system of n qubits, the time evolution is given by unitary operators on H_system = (C^2)^{⊗n}. In the case of a quantum system interacting with its environment, such unitaries can also act on a larger system H_system ⊗ H_environment. A unitary on such a larger system can be represented on H_system in the (non-unique) operator-sum decomposition |ψ⟩⟨ψ| ↦ Σ_k A_k |ψ⟩⟨ψ| A_k†. Throughout, 1 will denote the identity map. More generally, this reads for a density matrix as ρ ↦ Σ_k A_k ρ A_k†, with Σ_k A_k† A_k = 1. The above map is also known as a quantum channel or completely positive map and represents the most general form of physical change a quantum state can undergo. In the case of a classical (conventional) bit, an error is represented by the bit-flip 0 ↔ 1. For qubits, we regard any non-identity unitary transformation or non-identity quantum channel as an error. We can decompose any unitary or quantum channel in terms of a matrix basis. A good choice is the Pauli group: it is generated by all possible tensor products of the 4 Pauli matrices, together with phases ±1 or ±i. Observe that σ_x, σ_z and σ_y pairwise anti-commute; that is, σ_x σ_z = −σ_z σ_x, and similarly for the other pairs. Thus, the Pauli group P_n is a non-abelian group consisting of the 4^n tensor products of σ_0, σ_x, σ_z and σ_y, which together with the four phases is a group of size 4^{n+1}.
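A small numpy sketch of the density-matrix formalism just described: a statistical mixture, the outcome probabilities tr(P_j ρ), and the post-measurement state P_j ρ P_j / tr(P_j ρ). The chosen mixture is an arbitrary illustrative example:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# A statistical mixture: |0> with probability 3/4, |1> with probability 1/4.
rho = 0.75 * np.outer(ket0, ket0.conj()) + 0.25 * np.outer(ket1, ket1.conj())
assert np.allclose(rho, rho.conj().T) and np.isclose(np.trace(rho).real, 1)

# Projective measurement of sigma_z: P0, P1 project onto the eigenspaces.
P0 = np.outer(ket0, ket0.conj())
P1 = np.outer(ket1, ket1.conj())
for P in (P0, P1):
    p = np.trace(P @ rho).real      # outcome probability tr(P rho)
    post = P @ rho @ P / p          # post-measurement state
    print(p, np.round(post, 3))
```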
A quantum error-correcting code is a linear subspace Q of (C^2)^{⊗n} into which a number of logical qubits can be encoded such that all errors of a certain type can be detected and/or corrected. The question we ask is thus: given a noisy channel E, does there exist a recovery channel R, such that every density matrix ρ, for which the image of ρ is contained in Q, can be recovered? In other words, for all density matrices ρ with spectral decomposition ρ = Σ_i p_i |φ_i⟩⟨φ_i|, where |φ_i⟩ ∈ Q, we require that R(E(ρ)) = ρ. A 1-qubit error-correcting quantum code A classical code is a subset of A^n, where A is a finite set called the alphabet and n is the length of the code. The repetition code is the simplest type of code, in which each element a ∈ A is encoded as (a, a, . . . , a), an n-tuple of a's. However, we could try the following repetition-type code: α_0|0⟩ + α_1|1⟩ ↦ α_0|000⟩ + α_1|111⟩. Above and from now on, we simplify the notation |0⟩ ⊗ |0⟩ as |00⟩, etc. Suppose now a "bit-flip" σ_x happens on the second position. This gives α_0|010⟩ + α_1|101⟩. One can correct such an error by majority decision, |010⟩ ↦ |000⟩ and |101⟩ ↦ |111⟩. One needs a measurement that indicates exactly where the bit-flip has occurred. This can be done, as will be explained in Example 2.8. However, we cannot correct a single σ_z error, as α_0|000⟩ − α_1|111⟩ is also a possible state of our code. Shor [17] was the first to introduce a quantum code which can correct any single-qubit error. He circumvented this apparent problem by introducing a majority decision on the signs to correct a σ_z error. Example 1.2 (Shor code) The coding space for the Shor code is (C^2)^{⊗9} and a qubit is encoded as |0⟩ ↦ 2^{−3/2}(|000⟩ + |111⟩)(|000⟩ + |111⟩)(|000⟩ + |111⟩) and |1⟩ ↦ 2^{−3/2}(|000⟩ − |111⟩)(|000⟩ − |111⟩)(|000⟩ − |111⟩). Hence, by linearity, α_0|0⟩ + α_1|1⟩ is encoded as the corresponding linear combination of these two states. Suppose that we have a σ_x error (bit-flip) occurring on the 4-th qubit. Then the α_0 term would change to 2^{−3/2}(|000⟩ + |111⟩)(|100⟩ + |011⟩)(|000⟩ + |111⟩), which we would detect and correct by taking the majority decision as with the classical error-correcting code, so we decode |100⟩ ↦ |000⟩ and |011⟩ ↦ |111⟩. Now suppose we have a σ_z error (phase error) occurring on the 7-th qubit. Then the α_0 term would be 2^{−3/2}(|000⟩ + |111⟩)(|000⟩ + |111⟩)(|000⟩ − |111⟩), which we would detect and correct by taking the majority decision on the signs. Since σ_y = iσ_x σ_z, we can also correct σ_y errors, since the two decisions we made above are independent of each other. Note that the scalar i does not play a role in the decoding. The orthogonal projection onto a subspace Let Q be a subspace of (C^2)^{⊗n} and let Q^⊥ be its orthogonal subspace with respect to the standard inner product defined on (C^2)^{⊗n} ≅ C^{2^n}. Any vector |ψ⟩ can be written (uniquely) as the sum of a vector P|ψ⟩ ∈ Q and P^⊥|ψ⟩ ∈ Q^⊥. The map |ψ⟩ ↦ P|ψ⟩ is a linear map, called the orthogonal projection onto Q. Proof. For any j ≠ k, the corresponding basis vectors of Q are orthogonal. Furthermore, clearly, by definition, P^2 = P. By Lemma 1.3, P is Hermitian, since it is the sum of Hermitian operators. The following lemma implies that this is enough to characterise P. Lemma 1.4 If P is a linear Hermitian operator for which P^2 = P and whose image is Q then P is the orthogonal projection onto Q. The eigenspace with eigenvalue 0 is im(P)^⊥. Thus, P is the orthogonal projection onto im(P). Error-detection and correction For the reliable transmission of an (unknown) quantum system over a noisy channel, we are now faced with three major challenges. 1. Measurement disturbance. As explained in Section 1.1, measurements induce an "update" of the state that is measured. Thus, when obtaining error syndromes in order to understand what error has occurred, the underlying quantum state may be altered. 2. Continuous set of errors. The set of errors is continuous and not discrete. How can we distinguish and correct for an error set this large? 3. No-cloning.
Unknown quantum states cannot be copied. Thus an approach of adding redundancy, as done for a classical repetition code, is bound to fail. How can these challenges be overcome? First, the syndrome measurements are chosen such that they stabilise the set of quantum states that constitute the code. In this way, all code states remain unchanged when extracting the syndromes, while erroneous states are changed in a reversible fashion. Second, the linearity of quantum mechanics implies that when some discrete set of errors can be corrected, then so can errors which lie in their span. We shall not show a proof of this here, but one can be found in [6, Theorem 2] and [4]. Lastly, the encoded quantum information is distributed amongst many systems and thus "hidden" from any noisy channel. In this way the state does not have to be copied and no redundancy is added. This not only gives rise to the Knill-Laflamme conditions on error correction below, but also provides an information-theoretic interpretation of quantum error-correction. In quantum error-correction one is faced with the following task. Let N be a quantum channel. Given the channel N, for what set of states Q does there exist a recovery channel R such that R ∘ N(ρ) = ρ for all ρ whose image is contained in Q? It turns out that the sets of correctable states form subspaces. The following theorem gives a necessary and sufficient condition for a recovery channel to exist. Error weights We define the weight wt(M) of an operator M in the Pauli group P_n to be the number of tensor factors which are not equal to σ_0. For example, an operator with exactly three non-identity tensor factors, such as σ_x ⊗ σ_z ⊗ σ_y ⊗ σ_0, has weight three. In classical codes the distance between any two elements of A^n is the number of coordinates in which they differ. If the minimum distance of a code C is at least 2t + 1 then C is a t-error correcting code (i.e. we can correct errors in which up to t coordinates of a codeword change). In quantum codes the same holds: if a quantum code can detect all errors of weight less than 2t + 1 then it is a t-error correcting code. Definition and examples Most quantum codes presently known are stabilizer codes, and their usefulness lies partially in the fact that their connection with classical codes allows for them to be described in an efficient way. Here, we will mainly deal with stabilizer codes, although we will also see examples of quantum codes in Section 4 which are not stabilizer codes. A qubit stabilizer code Q(S) is the joint eigenspace with eigenvalue 1 of the elements of an abelian subgroup S of P_n not containing −1. The subgroup S is also known as the stabilizer. We will often define S as being generated by a set of n − k commuting independent generators M_1, . . . , M_{n−k} of P_n. By independent, we mean that M_1, . . . , M_{n−k} generate S, while any smaller subset does not. Thus, the set of M_i's are called generators. It is important to note that we require −1 ∉ S, since otherwise Q(S) = {0}. We also assume that there is no coordinate in which every element of S has a σ_0, as we could simply delete this coordinate and this would not affect the error-correcting capabilities of the code. Note that the phase of any element in S is ±1, since if λM ∈ S with λ = ±i and M a tensor product of Pauli matrices (so that M^2 = 1), then (λM)^2 = −1 ∈ S, which, as mentioned above, implies that Q(S) = {0}. Example 2.1 Suppose n = 2 and S is generated by a single Pauli operator M = σ_x ⊗ σ_z. We note that the dimension of Q(S) is 2. We often use the short-hand notation σ_0 = I, σ_x = X, etc., so in the previous example we might write M = XZ.
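Before moving on, we return briefly to the repetition-type code introduced above: the bit-flip correction by majority decision is easy to simulate. The sketch below is entirely our own illustration (the helper names are hypothetical); it encodes a qubit into the three-qubit code, applies σ_x to the second qubit, and decodes each basis state in the support by majority vote:

```python
import numpy as np

def ket(bits):
    """Computational basis ket of (C^2)^{tensor 3}, indexed by a bit string."""
    v = np.zeros(8, dtype=complex)
    v[int(bits, 2)] = 1
    return v

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def on_qubit(op, pos):
    """op acting on qubit `pos` of three, tensored with identities elsewhere."""
    ops = [I, I, I]
    ops[pos] = op
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

# Encode a|0> + b|1> as a|000> + b|111>, flip the 2nd qubit, then decode
# by majority vote over the computational-basis support.
a, b = 0.6, 0.8
encoded = a * ket("000") + b * ket("111")
corrupted = on_qubit(X, 1) @ encoded      # a|010> + b|101>

for idx in np.flatnonzero(np.abs(corrupted) > 1e-12):
    bits = format(idx, "03b")
    majority = "1" if bits.count("1") >= 2 else "0"
    print(bits, "-> decoded logical", majority)
```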
In the shorthand notation we would write the generators of S as strings in I, X, Y, Z. Observe that M_i M_j = M_j M_i for all i and j ∈ {1, 2, 3}. This can be checked quickly by verifying that, for i ≠ j, the positions in which M_i and M_j have different non-identity Pauli matrices from {σ_x, σ_y, σ_z} occur an even number of times. To find a basis for the stabilizer code, suppose that |α⟩ = Σ_{jkl} α_{jkl}|jkl⟩ is in the code space, i.e. that |α⟩ is in the +1-eigenspace of all the M_i. We have that |α⟩ is in the +1-eigenspace Im(1 + M_1) of M_1 if and only if α_{j00} = α_{j10} and α_{j01} = −α_{j11}. Similarly, |α⟩ is in the +1-eigenspace of M_2 if and only if iα_{j00} = α_{j11} and α_{j01} = −iα_{j10}. Finally, Q(S) is the one-dimensional subspace spanned by the vector determined by these conditions. In fact, we seldom actually calculate a basis for Q(S), as it is not necessary in practice. We have only calculated this previous example so one gets a feel for how laborious this is even for small parameters. From a practical point of view it is enough to know the orthogonal projection P onto the subspace Q. The dimension and minimum distance of a stabilizer code Let S be an abelian subgroup of P_n. Let Q(S) be the subspace defined as the joint eigenspace of eigenvalue 1 of the elements of S. Let P = P(S) be the orthogonal projection onto the subspace Q(S). Lemma 2.3 states that P(S) = (1/|S|) Σ_{M ∈ S} M. Indeed, write P for this sum: since M† = M for all M ∈ S, we have that P† = P. Moreover, P^2 = P. By Lemma 1.4, P = P(S). Theorem 2.4 The stabilizer code Q(S), which is the joint +1-eigenspace of an abelian subgroup S generated by n − k independent elements, has dimension 2^k. Proof. By Lemma 2.3, the orthogonal projection onto Q(S) is P = (1/|S|) Σ_{M ∈ S} M. The image of P is its eigenspace of eigenvalue one, which is also Q(S). The operator P is Hermitian and thus diagonalisable. Since P^2 = P, its eigenvalues are 0 and 1. The trace of P is equal to the sum of its eigenvalues, which in the case of P is the dimension of the eigenspace of eigenvalue one. Therefore, the dimension of Q(S) is equal to the trace of P(S). It only remains to note that tr(M) = 0 for all M ∈ P_n with the exception of M = ±1, and tr(1) = 2^n. Thus, dim Q = 2^n/|S| = 2^k. Having ascertained the dimension of a stabilizer code, we go on to determine its minimum distance. Let Centraliser(S) denote the set of elements of P_n that commute with all elements of S, i.e. the centraliser of S in the group P_n. Lemma 2.5 An error E ∈ P_n is undetectable if and only if E ∈ Centraliser(S) \ S. Proof. We proceed by contradiction. (⇒) Suppose that E is undetectable but that E ∉ Centraliser(S) \ S. Since any two elements of P_n either commute or anti-commute, if E ∉ Centraliser(S) there is an M ∈ S such that EM = −ME. Take any |ψ⟩, |φ⟩ ∈ Q(S) with ⟨ψ|φ⟩ = 0. Then ⟨ψ| E |φ⟩ = ⟨ψ| EM |φ⟩ = −⟨ψ| ME |φ⟩ = −⟨ψ| E |φ⟩, which implies ⟨ψ| E |φ⟩ = 0. Hence, by Theorem 1.5, E is detectable, a contradiction. (⇐) Suppose that E ∈ Centraliser(S) \ S and that E is detectable with constant λ_E. Then ME|ψ⟩ = EM|ψ⟩ = E|ψ⟩ holds for all M ∈ S, which implies that E|ψ⟩ ∈ Q. The subgroup generated by S and λ_E^{−1}E defines a smaller stabilizer code, so there is a |ψ⟩ ∈ Q such that λ_E^{−1}E|ψ⟩ ≠ |ψ⟩, contradicting the above. Hence, E is not detectable. In the case that k = 0, we have that Q(S) is a 1-dimensional subspace so cannot be used to store quantum information, and all errors are correctable according to the definition. However, we do not rule out considering such codes, since for any proper subgroup S′ of S, the code Q(S′) will be of interest. Since the elements of S \ S′ will be in Centraliser(S′) \ S′, Theorem 2.6 indicates that it makes sense to define the minimum distance of Q(S) to be equal to the minimum weight of the non-identity elements of S. These codes are called self-dual, for reasons that will become clear; see Theorem 2.12.
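Lemma 2.3 and Theorem 2.4 are easy to verify numerically for Example 2.1. The sketch below builds the projector P = (1/|S|) Σ_{M ∈ S} M for S = ⟨σ_x ⊗ σ_z⟩ and checks that its trace, and hence dim Q(S), equals 2^n/|S| = 2:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Example 2.1: n = 2, S generated by the single operator M = X (x) Z.
M = np.kron(X, Z)
S = [np.eye(4, dtype=complex), M]          # the group S = {1, M}

# Orthogonal projection onto Q(S): P = (1/|S|) sum_{N in S} N (Lemma 2.3).
P = sum(S) / len(S)
assert np.allclose(P @ P, P) and np.allclose(P, P.conj().T)

# dim Q(S) = tr(P) = 2^n / |S| = 2^k (Theorem 2.4); here 4/2 = 2.
print(np.trace(P).real)    # 2.0
```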
Theorem 2.6 If k ≥ 1 then the minimum distance of the 2^k-dimensional stabilizer code Q(S) with stabilizer group S is equal to the minimum weight of the errors in Centraliser(S) \ S. Proof. By Lemma 2.5, Q(S) can detect all errors which are not elements of Centraliser(S) \ S. In particular, it can also detect all errors of weight less than the minimum weight of an error in Centraliser(S) \ S. If there are elements of S whose weight is less than the minimum distance of Q(S) then the code is called impure. If this is not the case then the code is called pure. We use the shorthand notation ((n, K, d)) to denote a quantum code of (C^2)^{⊗n} of dimension K and minimum distance d. The notation [[n, k, d]] denotes a quantum code of dimension 2^k. If it is a stabilizer code Q(S) then d is equal to the minimum weight of the elements in Centraliser(S) \ S. We now rewrite the Shor code from Example 1.2 as a stabilizer code (the [[9, 1, 3]] code). Let S be the subgroup generated by the following elements of P_9. In shorthand notation this would be written in the following way. Suppose that E is an error of weight at most 2. We want to prove that E ∈ S or E does not commute with some M_i. We proceed with a case-by-case analysis. If E has weight one and a single X or Y then it does not commute with one of M_1, . . . , M_6. If E has weight one and a single Z then it does not commute with one of M_7, M_8. If E has weight two, both of which are X, then, without loss of generality, suppose there is an X in the first system. Then E must have an X or Y in the second system so that it commutes with M_1. But then it must also have an X or Z in the third system so that it commutes with M_2, contradicting the fact that it has weight two. We leave the rest of the case-by-case analysis as an exercise, but conclude that the only errors of weight two which commute with all the M_i are precisely those which are in S. We will prove that the minimum distance of this code is 3 in a very simple manner once we have determined its geometry. An important observation here is that the Shor code is impure, since S contains errors of weight 2, whereas the minimum distance is 3. We can store the same amount of information on fewer qubits with the following code (the [[5, 1, 3]] code). Let S be the subgroup generated by the following elements of P_5. This matrix makes the task of checking that M_i M_j = M_j M_i fairly quick. We will prove that the minimum distance is 3 by considering its geometry in Example 3.15. Let us see how we can use this example to correct errors of weight one. We perform the measurements M_i on E|φ⟩. Each will return a value ±1 (the eigenvalues of M_i). This gives us a "syndrome", a 4-tuple of signs for each error E. These are given in the following tables. Since each syndrome is distinct, we can use this look-up table to identify the error and correct it. An important observation here is that when we perform the measurement M_i, only the sign of the state can possibly change. Since E|φ⟩ is an eigenvector of M_i, after measuring we will be in the state ±E|φ⟩. Thus, we can measure consecutively each measurement M_i, for i = 1, . . . , n − k. Qubit stabilizer codes as binary linear codes In this section we introduce a connection between qubit stabilizer codes and classical binary linear codes. We will go on to exploit this connection to construct qubit quantum codes and then to realise a more general connection between stabilizer codes and classical codes. Let F_q denote the finite field with q elements.
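The syndrome look-up idea just described can be emulated symbolically: the sign returned by measuring M_i on E|φ⟩ is determined by whether E commutes or anti-commutes with M_i. The sketch below uses one common choice of generators for the [[5, 1, 3]] code; since the paper's own table is not reproduced in this text, these generators are an assumption:

```python
# One standard choice of generators for the [[5,1,3]] code (assumed here).
GENS = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]

def anticommutes(p, q):
    """Single-qubit Paulis anti-commute iff both are non-identity and differ."""
    return p != "I" and q != "I" and p != q

def syndrome(error, gens=GENS):
    """Sign pattern of measuring each generator on E|phi>: '-' where the
    error anti-commutes with the generator (odd number of anti-commuting
    positions), '+' otherwise."""
    return "".join(
        "-" if sum(anticommutes(e, g) for e, g in zip(error, gen)) % 2 else "+"
        for gen in gens
    )

# Distinct syndromes identify each weight-one error:
for err in ("XIIII", "IZIII", "IIYII"):
    print(err, syndrome(err))
```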
Consider the map τ defined by the following table: τ(σ_0) = (0 | 0), τ(σ_x) = (1 | 0), τ(σ_z) = (0 | 1), τ(σ_y) = (1 | 1). We extend the map τ to P_n by applying τ to an element of P_n coordinatewise, where the image of the j-th position of M is the j-th and (j + n)-th coordinate of τ(M). For example, τ(σ_x ⊗ σ_z) = (1, 0 | 0, 1). We draw the line between the n-th and (n + 1)-st coordinate for readability's sake. We ignore the phase, so τ(λM) = τ(M) for all λ ∈ {±1, ±i}. Effectively, this defines the domain of the map τ as P_n/{±1, ±i}. Lemma 2.9 The map τ is an isomorphism from P_n/{±1, ±i} to the additive group F_2^{2n}. Proof. Observe that the multiplicative structure up to a phase factor (for example, we ignore the i in σ_y = iσ_x σ_z) is isomorphic to the additive structure of F_2^2. We have established a bijection between the elements of P_n/{±1, ±i} and F_2^{2n}. The above lemma implies that a subgroup S of P_n is in bijective correspondence with a subspace of F_2^{2n}. We now wish to ascertain what property this subspace has if S is a subgroup generated by commuting elements of P_n. To this end, we define an alternating form for u, w ∈ F_2^{2n}: (u, w)_a = Σ_{j=1}^{n} (u_j w_{n+j} − u_{n+j} w_j). Lemma 2.10 Two elements M, N ∈ P_n commute if and only if (τ(M), τ(N))_a = 0. The symplectic weight of a vector v ∈ F_2^{2n} is defined as the number of indices j ∈ {1, . . . , n} for which (v_j, v_{n+j}) ≠ (0, 0). Lemma 2.11 The weight of M ∈ P_n is equal to the symplectic weight of τ(M). Proof. We have that n − wt(M) is equal to the number of σ_0's in M, which is equal to n minus the symplectic weight of τ(M). For a subspace C ≤ F_2^{2n}, we define C^{⊥a} = {u ∈ F_2^{2n} : (u, w)_a = 0 for all w ∈ C}. Theorem 2.12 S is a subgroup of P_n generated by n − k independent mutually commuting elements if and only if C = τ(S) is an (n − k)-dimensional subspace of F_2^{2n} for which C ⊆ C^{⊥a}. If k ≠ 0 then the minimum distance of Q(S) is equal to the minimum symplectic weight of the elements of C^{⊥a} \ C. If k = 0 then the minimum distance of Q(S) is equal to the minimum symplectic weight of the non-zero elements of C = C^{⊥a}. Proof. The fact that C = τ(S) is contained in C^{⊥a} follows from Lemma 2.9 and Lemma 2.10. By Theorem 2.6, for k ≠ 0, the minimum distance is equal to the minimum weight of the images under τ of the elements of Centraliser(S) which are not elements of the image of S. Since C = τ(S) and C^{⊥a} = τ(Centraliser(S)), the theorem follows for k ≠ 0. For k = 0, by definition, the minimum distance is equal to the minimum weight of the images of the elements of S under τ, which are the non-zero elements of C. We can construct a generator matrix G(S) for C = τ(S) by taking the (n − k) × 2n matrix whose i-th row is τ(M_i). The following table makes for a useful reference.
P_n: the Pauli group, given by n-fold tensor products of the Pauli matrices σ_0, σ_x, σ_y, σ_z with phases {±i, ±1}.
M_1, . . . , M_{n−k}: the generators, a set of independent elements of P_n that generate S.
S: the stabilizer, an abelian subgroup of P_n.
Q(S): the quantum code obtained as the joint intersection of the eigenspaces of eigenvalue 1 of the operators in S.
Let S be the subgroup of P_5 generated by the following pairwise commuting elements. The matrix G(S) for this code has the form (I | A). One can check directly that (u, v)_a = 0 for any two rows u, v of G(S). Alternatively, it is enough to observe that A is symmetric. We will prove in Example 3.15 that the minimum distance of Q(S) is 3. Observe that any symmetric n × n matrix A gives a matrix G = (I | A) with this property. The difficulty lies in choosing A so that the symplectic weight of the code generated by G (and hence d) is large. 3 The geometry of additive, linear and stabilizer codes Additive and linear codes over a finite field We recall that a code of length n is a subset C of A^n, where A is a finite set called the alphabet. An element of C is called a codeword.
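Both the map τ and the alternating form are straightforward to implement; the sketch below checks commutation of Pauli strings via Lemma 2.10 (the string representation and function names are ours):

```python
# tau sends a Pauli string to (x | z) in F_2^{2n}:
TAU = {"I": (0, 0), "X": (1, 0), "Z": (0, 1), "Y": (1, 1)}

def tau(pauli_string):
    """Image of a Pauli string under tau, as a list over F_2 (x-part | z-part)."""
    xs = [TAU[p][0] for p in pauli_string]
    zs = [TAU[p][1] for p in pauli_string]
    return xs + zs

def alternating_form(u, w):
    """(u, w)_a = sum_j (u_j w_{n+j} - u_{n+j} w_j); over F_2 minus equals plus."""
    n = len(u) // 2
    return sum(u[j] * w[n + j] + u[n + j] * w[j] for j in range(n)) % 2

# Lemma 2.10: two Pauli operators commute iff the form vanishes on their images.
print(alternating_form(tau("XI"), tau("ZI")))   # 1 -> X(x)I and Z(x)I anti-commute
print(alternating_form(tau("XZ"), tau("ZX")))   # 0 -> XZ and ZX commute
```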
The distance between any two elements of A^n is the number of coordinates in which they differ. The minimum distance of C is the minimum distance between any two codewords of C. Suppose A is a finite abelian group with identity element 0. If u + v ∈ C for all u, v ∈ C then we say that C is additive. The weight of an element (codeword) u of an additive code is the number of non-zero coordinates that it has. Lemma 3.1 If C is an additive code over an alphabet which is a finite abelian group then the minimum distance d of C is equal to the minimum non-zero weight w. Proof. Suppose that u is a codeword of minimum weight w. Then, since 0 ∈ C, we have w ≥ d. Suppose that u and v are two codewords which differ in exactly d coordinates. Then u − v is a codeword in C of weight d, and so d ≥ w. Suppose that A = F_q, the finite field with q = p^h elements, p prime. If C is additive then λu ∈ C for all λ ∈ F_p, so C is a subspace over F_p. If C has the additional property that λu ∈ C for all λ ∈ F_q then we say C is linear. A linear code of length n is a subspace of F_q^n. We use the notation (n, K, d)_q code to denote a code over an alphabet of size q of length n, size K and minimum distance d. The notation [n, k, d]_q code denotes a k-dimensional linear code over F_q of length n and minimum distance d. The geometry of linear codes We will begin our geometrical study of codes by considering linear codes over F_q. Let G be a k × n matrix. We recall that when a^t is a row vector in F_q^k, the expression a^t G yields a linear combination of the rows of G. Likewise, when b is a column vector in F_q^n, the expression Gb yields a linear combination of the columns of G. Let C be a k-dimensional linear code over F_q of length n; in other words, C is a k-dimensional subspace of F_q^n. We describe C by a k × n matrix G whose row space is C, i.e. the rows of G are a basis for C. Thus, for each u ∈ C, there is an a ∈ F_q^k such that u = a^t G. In other words, the generator matrix G acts as a linear encoding matrix for the message a, yielding the codeword u ready to be sent over a noisy channel. The geometry of C is seen by considering the set of columns of the generator matrix G. Let X be the set of columns of G, so X is a (possibly multi-)set of n vectors of F_q^k. The codeword u = a^t G has a zero in its i-th coordinate if and only if a · z = 0, where z is the i-th column of G. This property is unaffected if we replace z by a non-zero scalar multiple of z, so it is natural to consider X as a (possibly multi-)set of n points of PG(k − 1, q). The projective space PG(k − 1, q) is obtained from the vector space F_q^k by identifying the vectors which are scalar multiples of each other. In this way, the points of PG(k − 1, q) are the one-dimensional subspaces of F_q^k and, more generally, the (i − 1)-dimensional subspaces of PG(k − 1, q) are the i-dimensional subspaces of F_q^k. The lines, planes and hyperplanes of PG(k − 1, q) are the 1-dimensional, 2-dimensional and co-dimension 1 subspaces, respectively. Note that in PG(k − 1, q) familiar geometric properties hold. For example, two points are joined by a line; the intersection of two planes in a three-dimensional subspace is a line. If a point x is contained in a subspace π, we say that x is incident with π. If two subspaces π_1 and π_2 have an empty intersection (i.e. their corresponding subspaces in F_q^k intersect in the zero vector), then we say that they are skew. A set of points x_1, . . . , x_r of a projective space are independent if they span an (r − 1)-dimensional (projective) subspace. If they are not independent then they are dependent.
The number of r-tuples of linearly independent vectors of F_q^k is (q^k − 1)(q^k − q) · · · (q^k − q^{r−1}). Hence, the number of r-dimensional subspaces of F_q^k is (q^k − 1)(q^k − q) · · · (q^k − q^{r−1}) / ((q^r − 1)(q^r − q) · · · (q^r − q^{r−1})). Thus, the number of points of PG(k − 1, q) is (q^k − 1)/(q − 1). There is a natural duality between the points of PG(k − 1, q) and the hyperplanes of PG(k − 1, q). A point (a_1, . . . , a_k) is mapped to the hyperplane defined as the kernel of the linear form a_1X_1 + · · · + a_kX_k. Thus, the number of hyperplanes of PG(k − 1, q) is also (q^k − 1)/(q − 1), which can be checked directly by counting the (k − 1)-dimensional subspaces of F_q^k. The number of points in PG(k − 1, 2) is 2^k − 1, and the number of lines of PG(k − 1, 2) is (2^k − 1)(2^{k−1} − 1)/3. Thus, the lemma holds, taking into account the dimension shift when considering the projective space. The following theorem explains what the minimum distance d of a linear code implies for the set of points X. Theorem 3.3 An [n, k, d] linear code over F_q is equivalent to a (possibly multi-)set of points X in PG(k − 1, q) in which every hyperplane of PG(k − 1, q) contains at most n − d points of X and some hyperplane contains exactly n − d points of X. Proof. Let G be a k × n matrix whose row space is an [n, k, d] linear code C. Let X be the set of columns of G viewed as points of PG(k − 1, q). Recall that the codeword u = a^t G has a zero in its i-th coordinate if and only if a_1z_1 + · · · + a_kz_k = 0, where z is the i-th column of G. The kernel of the linear form a_1X_1 + · · · + a_kX_k defines a hyperplane π_a of PG(k − 1, q). The codeword u = a^t G has weight w if and only if u has exactly n − w zero coordinates. This is the case if and only if π_a is incident with n − w points of X. By Lemma 3.1, the minimum distance of a linear code is equal to its minimum weight. Hence, the maximum number of points of X on a hyperplane of PG(k − 1, q) is n − d. The geometry of additive codes An additive code C over F_q is linear over F_p, where q = p^h for some prime p. Therefore, |C| = p^r for some r. The following theorem is the additive version of Theorem 3.3; the set of points X is replaced by a set of subspaces. Theorem 3.4 An (n, p^r, d) additive code over F_q with q = p^h is equivalent to a (possibly multi-)set X of (h − 1)-dimensional subspaces in PG(r − 1, p) in which every hyperplane of PG(r − 1, p) contains at most n − d subspaces of X and some hyperplane contains exactly n − d subspaces of X. Proof. Let G be an r × n matrix whose rows form a basis for C over F_p. As in the case of linear codes, we consider the (possibly multi-)set X of columns of G. However, we shouldn't consider the elements of X as points of PG(r − 1, q), since we obtain C from G by taking the row span over F_p and not over F_q. Thus, we consider the elements of X as subspaces of PG(r − 1, p). Suppose that e ∈ F_q is such that {1, e, e^2, . . . , e^{h−1}} is a basis for F_q over F_p. Then, up to a scalar factor, we can write x ∈ X as x = x_0 + ex_1 + · · · + e^{h−1}x_{h−1}, where x_0, . . . , x_{h−1} ∈ F_p^r. We associate x with the subspace spanned by x_0, . . . , x_{h−1} in PG(r − 1, p), which we denote by ⟨x⟩. The subspace ⟨x⟩ has dimension at most h − 1. Suppose that x is the i-th column of G, so x ∈ X. The non-zero codeword u = a^t G, where a ∈ F_p^r, has a zero in its i-th coordinate if and only if the hyperplane of PG(r − 1, p) which is the kernel of the linear form a · X contains the subspace ⟨x⟩. Observe that a linear code over F_q necessarily has size q^k, so if we wish to obtain an additive code with the same parameters as a linear code, then r = kh for some k. The geometry of qubit quantum codes For the moment, we restrict to the case q = 2 and consider the geometrical consequences of Theorem 2.12, which describes the connection between stabilizer codes and binary linear codes.
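The counting formulas above are simple to evaluate; the following sketch computes the number of points of PG(k − 1, q) and the Gaussian-binomial count of r-dimensional subspaces of F_q^k:

```python
def num_points_pg(k_minus_1, q):
    """Number of points of PG(k-1, q): (q^k - 1)/(q - 1)."""
    k = k_minus_1 + 1
    return (q**k - 1) // (q - 1)

def gaussian_binomial(k, r, q):
    """Number of r-dimensional subspaces of F_q^k."""
    num = den = 1
    for i in range(r):
        num *= q**k - q**i      # r-tuples of independent vectors in F_q^k
        den *= q**r - q**i      # bases of a fixed r-dimensional subspace
    return num // den

print(num_points_pg(2, 2))          # 7 points in the Fano plane PG(2,2)
print(gaussian_binomial(4, 2, 2))   # 35 lines in PG(3,2)
```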
A qubit stabilizer code Q(S) is equivalent to a binary linear code C = τ(S) of length 2n which is contained in its alternating dual C^{⊥a}. According to Theorem 2.12, the minimum distance of Q(S) is the minimum symplectic weight of C^{⊥a} \ C. Consider once again the Shor code from Example 1.2. Since there are two columns which are linearly dependent, there are elements of C^{⊥a} of symplectic weight two; these are images under τ of Pauli operators of Centraliser(S) of weight two. To see this, recall that the alternating form is defined as (u, w)_a = Σ_{j=1}^{n} (u_j w_{n+j} − u_{n+j} w_j), so the dependency of the first two columns implies that (0, 0, 0, 0, 0, 0, 0, 0, 0 | 1, 1, 0, 0, 0, 0, 0, 0, 0) is an element of C^{⊥a}. However, this element is an element of C, since it is the first row of the matrix. Recall that the minimum distance is equal to the minimum symplectic weight of C^{⊥a} \ C. Therefore, although C^{⊥a} contains elements of symplectic weight 2, the minimum symplectic weight of C^{⊥a} \ C is in fact 3. We will prove this in Example 3.9. Given a subgroup S, generated by n − k commuting elements M_1, . . . , M_{n−k} of P_n, we obtain a set X of n lines (or possibly points) in PG(n − k − 1, 2) in the following way. For each i ∈ {1, . . . , n}, we get a line (or a point) by considering the span of the i-th and (i + n)-th columns of the generator matrix G(S). Vice versa, given a set of n lines in PG(n − k − 1, 2), we construct an (n − k) × 2n matrix, from which we obtain M_1, . . . , M_{n−k} by applying τ^{−1} to the rows of the matrix. At first sight it may seem that there is a certain amount of freedom when we reconstruct the code from a given quantum set of lines. Each line is incident with three points and we can choose which pair of points on the line to use to construct the i-th and the (i + n)-th column of G. This choice is equivalent to invoking a permutation of {σ_x, σ_y, σ_z} on the i-th position of each of the M_1, . . . , M_{n−k}. This does not affect the property that these elements pairwise commute, so we define all quantum codes that can be obtained from each other in this way to be equivalent. For example, in Example 2.14, invoking the permutation σ which takes X → Z → Y → X on the M_i in the first, second and fourth positions gives another set of generators; comparing with Example 2.14, we see that the set of lines X remains unchanged. There is also a choice of the scalar factor of M when we apply τ^{−1} to a row of the matrix G. We will always assume this factor to be 1. However, changing the sign of some of the generators of a subgroup S can be useful, as we shall see in Section 4. We would like to give a geometrical interpretation of the fact that the code C = τ(S) is contained in C^{⊥a}. Recall that we say two subspaces of PG(k − 1, q) are skew if they do not intersect. Theorem 3.7 The following are equivalent. 1. There is a [[n, k, d]] stabilizer code Q(S), where S is a subgroup generated by n − k independent commuting elements of P_n and whose centraliser contains no element of weight one. 2. There is a set of n lines X spanning PG(n − k − 1, 2) with the property that every co-dimension 2 subspace is skew to an even number of the lines of X. Proof. Let X be the set of n lines obtained for i = 1, . . . , n as the span of the i-th and (i + n)-th columns of G(S). Let u, w ∈ C, so u = (a_1, . . . , a_{n−k})G and w = (b_1, . . . , b_{n−k})G for some a = (a_1, . . . , a_{n−k}) and b = (b_1, . . . , b_{n−k}) in F_2^{n−k}. One has C ⊆ C^{⊥a} if and only if (u, w)_a = 0 for all u, w ∈ C. We want to deduce the geometrical meaning of (u, w)_a = 0. Consider a single term in the sum first.
Let x and y be the j-th and the (n + j)-th columns of G respectively. Then u_j w_{n+j} − u_{n+j} w_j = (a · x)(b · y) − (a · y)(b · x). The right-hand side is zero if and only if the matrix [[a · x, a · y], [b · x, b · y]] has zero determinant, i.e. it has rank at most 1. Recall that we define π_a as the hyperplane which is the kernel of the linear form a · X = a_1X_1 + · · · + a_{n−k}X_{n−k}. We can thus rewrite the above condition as the requirement that some point λx + µy is contained in both π_a and π_b. In other words, there is a point on the line ℓ, spanned by x and y, which is incident with the intersection of the two hyperplanes π_a and π_b. Returning to the condition (u, w)_a = 0, we must therefore get an even number of ones in the sum Σ_{j=1}^{n} (u_j w_{n+j} − u_{n+j} w_j). All lines of X that are skew to π_a ∩ π_b = ker(a · X) ∩ ker(b · X) contribute; for any given a and b there must in total be an even number of such lines. We note that every co-dimension 2 subspace of PG(n − k − 1, 2) can be realised in this way (as the intersection of some a · X = 0 and b · X = 0). This proves the forward implication. Figure 1: A point λx + µy on the intersection of the hyperplanes π_a and π_b. (1 ⇐ 2) Let X be a set of lines spanning PG(n − k − 1, 2) with the property that every co-dimension 2 subspace of PG(n − k − 1, 2) is skew to an even number of lines of X. Let G be the matrix whose i-th and (i + n)-th columns are points which span the i-th line of X. Let C be the code generated by G. Since X spans PG(n − k − 1, 2), the code C is (n − k)-dimensional. As we proved in the forward implication, the property that every co-dimension 2 subspace is skew to an even number of lines of X implies that for any two codewords u and w of C, (u, w)_a = 0 holds. By Lemma 2.10, the image under τ^{−1} of C is an abelian subgroup S of P_n and, by Lemma 2.13, it is generated by n − k pairwise commuting elements of P_n. Let X be a set of lines and let Θ(X) be the space spanned by the lines of X. We say that X is a quantum set of lines if it has the property that every co-dimension 2 subspace of Θ(X) is skew to an even number of lines of X. To deduce the minimum distance of the corresponding stabilizer code, we introduce the parameter d(X). Recall that r points are independent if they span an (r − 1)-dimensional subspace; they are dependent otherwise. Consider first the case in which dim Θ(X) ≠ |X| − 1. By Theorem 3.7, X will give a quantum [[n, k, d]] code with k ≠ 0. We define the parameter d(X) as the minimum number of dependent points that can be found on distinct lines of X, not including the dependencies for which there is a hyperplane of Θ(X) which both a) contains all the lines of X which do not contain the dependent points, and b) contains all the dependent points. Thus, d(X) = r, where r is minimal such that there exists a set of dependent points {x_1, . . . , x_r}, where each x_i is incident with a line ℓ_i ∈ X and the lines ℓ_1, . . . , ℓ_r are distinct, but for which there is no hyperplane containing the lines X \ {ℓ_1, . . . , ℓ_r} and the points {x_1, . . . , x_r}. In the case in which dim Θ(X) = |X| − 1, Theorem 3.7 implies that X will give a quantum [[n, k, d]] code with k = 0. We define the parameter d(X) as the minimum d for which there is a hyperplane of Θ(X) containing |X| − d lines of X. Equivalently, it is the minimum number of dependent points that can be found on distinct lines of X. This definition and the equivalence will be justified in the proof of Theorem 3.8.
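The defining condition of a quantum set of lines can be checked by brute force for small parameters. In the sketch below (entirely our own illustration), lines are given by spanning pairs of points of F_2^m, every co-dimension 2 subspace is realised as ker(a · X) ∩ ker(b · X), and skewness is tested via the determinant criterion from the proof of Theorem 3.7; as a sanity check, a planar pencil in PG(2, 2) passes:

```python
from itertools import product, combinations
import numpy as np

def is_quantum_set(lines, m):
    """Check that every co-dimension 2 subspace ker(a.X) ^ ker(b.X) of
    PG(m-1, 2) is skew to an even number of the given lines, each line
    encoded by a spanning pair of points.  A line <x, y> is skew to the
    subspace iff the 2x2 matrix of values of a, b on x, y is invertible
    over F_2, i.e. has odd determinant."""
    forms = [np.array(v) for v in product([0, 1], repeat=m) if any(v)]
    for a, b in combinations(forms, 2):   # distinct nonzero => independent
        skew = 0
        for x, y in lines:
            det = (a @ x % 2) * (b @ y % 2) + (a @ y % 2) * (b @ x % 2)
            skew += det % 2
        if skew % 2:
            return False
    return True

# A planar pencil: the three lines of PG(2,2) through the point (1,1,1).
c = np.array([1, 1, 1])
e1, e2, e3 = map(np.array, ([1, 0, 0], [0, 1, 0], [0, 0, 1]))
print(is_quantum_set([(c, e1), (c, e2), (c, e3)], m=3))   # True
```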
From now on we assume that the centraliser of the stabilizer S contains no elements of weight one. By Lemma 3.6, this assumption guarantees that there is a quantum set of lines associated with the stabilizer code. As mentioned before, this is equivalent to assuming that the minimum distance is at least 2 in the case of pure codes. Theorem 3.8 There is a [[n, k, d]] stabilizer code Q(S), whose centraliser contains no element of weight one, if and only if there is a quantum set of n lines X with Θ(X) = PG(n − k − 1, 2) and d = d(X). Proof. We only have to prove the part about the minimum distance, since Theorem 3.7 covers the rest. As in the proof of Theorem 3.7, let G = G(S) be the (n − k) × 2n generator matrix with entries from F_2 whose row space forms the code C. Define a set of lines X = {ℓ_1, . . . , ℓ_n}, where ℓ_j is the line that corresponds to the span of the j-th and (j + n)-th columns of G. Consider the case k ≠ 0. By Theorem 2.12, the parameter d is the minimum symplectic weight of C^{⊥a} \ C. Suppose now that v ∈ C^{⊥a} has symplectic weight w and let W denote the set of positions that contribute to the weight, W = {j ∈ {1, . . . , n} : (v_j, v_{n+j}) ≠ (0, 0)}. Clearly, |W| = w. Denote by x_j the j-th column of G. Since v ∈ C^{⊥a}, we have Σ_{j ∈ W} (v_{n+j}x_j + v_jx_{n+j}) = 0, and each summand corresponds to some point of ℓ_j. Thus, there are w = |W| points on the distinct lines {ℓ_j | j ∈ W} which are dependent. However, since the minimum distance d is the minimum symplectic weight of C^{⊥a} \ C, we have to disregard this dependency if v ∈ C. A vector v is in C if and only if v = aG for some a ∈ F_2^{n−k}. First, consider those positions j of v that do not contribute to its symplectic weight, that is, j ∉ W. For each j ∉ W, one has that v_j = a · x_j = 0 and v_{n+j} = a · x_{n+j} = 0 if and only if the line ℓ_j is contained in the hyperplane π_a described by a · X = 0. So the lines of {ℓ_j | j ∈ {1, . . . , n} \ W} are contained in π_a. Second, consider those positions j of v that contribute to its symplectic weight, j ∈ W. Then a · (v_{n+j}x_j + v_jx_{n+j}) = v_{n+j}v_j + v_jv_{n+j} = 0, since v_j = a · x_j and v_{n+j} = a · x_{n+j}. Hence, the dependent points are also contained in the hyperplane a · X = 0. This exactly coincides with our definition of d(X). Now, consider the case k = 0. By Theorem 2.12, the parameter d is the minimum non-zero symplectic weight of C. Let v ∈ C be of minimum non-zero symplectic weight. Since v ∈ C, v = aG for some a ∈ F_2^{n−k}. Thus, v_j = a · x_j for all j = 1, . . . , 2n. Let W denote the set of positions that contribute to the symplectic weight of v, i.e. W = {j : (v_j, v_{n+j}) ≠ (0, 0)}. Then, for j ∉ W, a · x_j = a · x_{n+j} = 0, which is equivalent to the line ℓ_j ∈ X being contained in the hyperplane a · X = 0. Therefore, there is a hyperplane of Θ(X) containing |X| − d lines of X, which coincides with our definition of d(X) in this case. Alternatively, since C = C^{⊥a}, the parameter d is the minimum non-zero symplectic weight of C^{⊥a}. As in the case k ≠ 0, a vector v = (v_1, . . . , v_{2n}) ∈ C^{⊥a} of symplectic weight d will give a dependency of d points of X, which coincides with our alternative definition of d(X) in this case. For the converse, let G = G(S) be the (n − k) × 2n generator matrix for a code C whose i-th and (i + n)-th columns span the i-th line of X. Let S = τ^{−1}(C) and let Q(S) be the stabiliser code. By Theorem 3.7 and the fact that Θ(X) = PG(n − k − 1, 2), Q(S) is a [[n, k, d]] stabilizer code for some d. The fact that d = d(X) follows from the same arguments as in the forward implication, observing that a line ℓ_j is not contained in the hyperplane π_a precisely when (a · x_j, a · x_{n+j}) ≠ (0, 0), in which case v_j = a · x_j and v_{n+j} = a · x_{n+j} determine a point of ℓ_j. Example 3.9 (Shor code) As we saw in Example 3.5, the Shor code has the generator matrix which is drawn in Figure 2. Here, ⟨e_i, e_j⟩ denotes the line spanned by the points e_i and e_j.
Note that the point e_7 is on the two lines ⟨e_1, e_7⟩ and ⟨e_1 + e_2, e_7⟩, and thus e_7 is "dependent with itself". So at first sight it seems that d(X) = 2. However, the remaining seven lines span a six-dimensional subspace, since the two planes ⟨e_3, e_4, e_7 + e_8⟩ and ⟨e_5, e_6, e_8⟩ span a five-dimensional subspace, while the line ⟨e_2, e_7⟩ extends this to a six-dimensional subspace that also contains the point e_7 (i.e. contains all the dependent points). Following Theorem 3.8, we do not count this dependency and conclude that d(X) ≥ 3. The dependency of e_7 with itself implies that the Shor code is impure. The dependent points {e_1, e_2, e_1 + e_2} imply that d(X) = 3. Although the six lines not containing these points are contained in a hyperplane, there is no hyperplane containing the six lines and the dependent points, thus we do not disregard this dependency. Thus, we see that condition b) is essential in the definition of d(X). Let us generalize one feature of the Shor code further: a planar pencil of lines in a projective space is a set of lines which are all contained in some plane and are all the lines incident with a point in that plane. As illustrated in Figure 2, the Shor code is the union of three planar pencils. Observe that a planar pencil of lines is itself a quantum set of lines. Our aim is to show that a quantum set of lines is nothing more than the union modulo two of planar pencils of lines. We first prove a few lemmas. Lemma 3.10 The union modulo two of two quantum sets of lines is a quantum set of lines. Proof. Let X and Y be two quantum sets of lines. Recall that Θ(X), Θ(Y), and Θ(X ∪ Y) are the spaces spanned by X, Y, and both sets of lines respectively. A co-dimension 2 subspace π intersects Θ(X) in either a co-dimension 2 subspace, in a hyperplane, or in Θ(X). In the first case it is skew to an even number of the lines of X; in the latter two cases it is skew to none (which is even). Let X′ be the subset of X of lines skew to π. Likewise, let Y′ be the subset of Y of lines skew to π. Then π is skew to |X′| + |Y′| − 2|X′ ∩ Y′| lines of the union modulo two of X and Y. Since both |X′| and |Y′| are even, every co-dimension 2 subspace is skew to an even number of lines of the union modulo two of X and Y. This proves the lemma. An r-sputnik is a set of (r + 1) concurrent lines (they are all incident with some point) in an r-dimensional subspace π with the property that any r of them span π. In Figure 3 a 3-sputnik is illustrated. Our aim will be to prove that a quantum set of lines is the union modulo two of planar pencils of lines. Firstly we will prove that this claim is true for an r-sputnik. Lemma 3.11 An r-sputnik is a quantum set of lines, and it is the union modulo two of planar pencils of lines. Proof. Let X be an r-sputnik and take any two lines ℓ and ℓ′ ∈ X. The r − 1 lines of X \ {ℓ, ℓ′} span an (r − 1)-dimensional subspace which intersects the plane spanned by ℓ and ℓ′ in a line ℓ″. The line ℓ″ is the third line in the planar pencil of lines spanned by ℓ and ℓ′. Thus, adding (modulo 2) this pencil of lines to X we get an (r − 1)-sputnik. Now continue adding planar pencils of lines in this way until we get a 2-sputnik. Since a 2-sputnik is a planar pencil of lines, it is a quantum set of lines. We can then reverse the process, adding planar pencils of lines to recover the r-sputnik which, by Lemma 3.10, is also a quantum set of lines. Lemma 3.12 Let X be a quantum set of lines. There is a set D of dependent points such that each point of D is incident with a different line of X. Proof. Let π = Θ(X) be the subspace spanned by the lines of X and let ℓ ∈ X. Let π′ = Θ(X \ {ℓ}) be the subspace spanned by the lines of X \ {ℓ}.
The subspace π′ is either a co-dimension 2 subspace of π, a hyperplane of π, or π itself. The first case is ruled out since X is a quantum set of lines and, by definition, any co-dimension 2 subspace is skew to an even number of lines of X. Therefore, there is a point x of ℓ incident with π′. Any point of π′ is the sum of points incident with the lines of X \ {ℓ}. Thus, we obtain a set of dependent points, each incident with a line of X. If in this set there are two points y and z incident with the same line ℓ′ of X, then we can replace y and z by the third point of ℓ′, namely ℓ′ \ {y, z}. Hence, we obtain a set of dependent points each incident with a distinct line of X. Lemma 3.13 A quantum set of three lines is a planar pencil of lines. Proof. Suppose that the quantum set of three lines X = {ℓ_1, ℓ_2, ℓ_3} spans PG(4, 2) or PG(5, 2) respectively. Then there is a point x ∈ ℓ_2 such that the co-dimension 2 subspace spanned by ℓ_1 and x (resp. ℓ_1 and ℓ_2) is skew to ℓ_3. This contradicts the definition of a quantum set of lines. Suppose that the quantum set of three lines X = {ℓ_1, ℓ_2, ℓ_3} spans PG(3, 2). If ℓ_1 and ℓ_2 intersect then the co-dimension 2 subspace ℓ_1 (and also ℓ_2) must also intersect ℓ_3. Since they span PG(3, 2), the three lines must be concurrent (and not co-planar). Taking the union modulo 2 with the planar pencil of lines spanned by ℓ_2 and ℓ_3 we obtain, by Lemma 3.10, a quantum set of two lines, which does not exist. Thus we have three pairwise skew lines ℓ_1, ℓ_2, ℓ_3 with the property that any line incident with two of them is incident with the third. This implies there are nine lines which are all incident with exactly one point of each of ℓ_1, ℓ_2, ℓ_3; however, through each of the three points of ℓ_3 there is only one line meeting both ℓ_1 and ℓ_2, a contradiction. Therefore, the quantum set of three lines spans a PG(2, 2). A co-dimension 2 subspace is just a point, so a quantum set of lines must be incident with every point of the plane. Hence, X is a planar pencil of lines. The following theorem is due to Glynn, Gulliver, Maks and Gupta [5]. It is important to note that if the qubit stabilizer code has minimum distance 2 then it is possible that the quantum set of lines X contains repeated lines. This occurs, for example, in the [[5, 2, 2]] code. Theorem 3.14 A qubit stabilizer code with minimum distance at least three is equivalent to a quantum set of lines which is generated by the union modulo two of planar pencils of lines. Proof. Let X be a quantum set of lines. We will prove that there is an r-sputnik X′ such that the union modulo 2 of X, X′ and r − 1 planar pencils of lines is a quantum set of |X| − 1 lines. Since, by Lemma 3.11, X′ is the union modulo 2 of planar pencils of lines, this implies that, by iteration, we can take the union modulo 2 of X and some planar pencils of lines and obtain a quantum set of three lines, by Lemma 3.10. By Lemma 3.13, this set of three lines is a planar pencil of lines and we are done. By Lemma 3.12, there is a set x_1, . . . , x_{r+1} of minimally dependent points incident with the lines ℓ_1, . . . , ℓ_{r+1} of X, respectively. Let x ∈ ℓ_{r+1} \ {x_{r+1}}. Let ℓ′_j be the line spanned by the points x and x_j, for j = 1, . . . , r. Let X′ be the r-sputnik consisting of the lines ℓ′_1, . . . , ℓ′_r and ℓ_{r+1}. Let L_j be the planar pencil of lines spanned by ℓ_j and ℓ′_j. In Figure 6, where r = 5, the lines ℓ_j are the thick lines, the ℓ′_j are the medium-thickness lines, and the thin lines are the third lines in the planar pencils of lines spanned by ℓ_j and ℓ′_j. By Lemma 3.10, the union modulo two of (∪_{j=1}^{r} L_j) ∪ X ∪ X′ is a quantum set of lines and, on inspection, it is a set of |X| − 1 lines.
Figure 6: The thick lines are in X, the medium-thick lines are in X′ and the thin lines make up the planar pencils at each of the points x_1, . . . , x_r. Example 3.15 The [[5, 1, 3]] code corresponds to a quantum set of five lines in PG(3, 2). Then, since any two of the thick lines are pairwise skew, we have that the minimum distance is 3. Research Problem 1 The parameters [[14, 3, 5]] are the smallest for which it is unknown whether there exists a qubit stabilizer code or not [8]. To construct such a code one should look for a union modulo two of planar pencils of lines that gives 14 lines in PG(10, 2), such that for any four dependent points on 4 of the 14 lines that also lie on a common plane, the remaining 10 lines are contained in a hyperplane which also contains those four dependent points. Theorem 3.14 can also be used to rule out the existence of quantum codes with certain parameter sets. For example, were a [[4, 0, 3]] stabilizer code to exist, then X would be a set of four skew lines in PG(3, 2) with the property that any line is skew to an even number of lines of X. However, the lines of X themselves are skew to the other three lines of X, which is an odd number. A more interesting exercise is to prove that a [[7, 0, 4]] code does not exist. To prove this, show that there are at least five three-dimensional subspaces which intersect all of the 7 lines of PG(6, 2) in the quantum set of lines, and prove that these pairwise intersect in a point. Direct sum of stabilizer codes As discussed in the previous sections, a stabilizer code is defined as the common (+1)-eigenspace of an independent set of pairwise commuting Pauli operators M_1, . . . , M_{n−k}; these are the generators of the code. In other words, these codes are completely characterized by an abelian subgroup S = ⟨M_1, . . . , M_{n−k}⟩ ⊂ P_n. The aim of this section is to construct quantum codes that are the direct sum of stabilizer codes. Technically speaking, any subspace can be regarded as a quantum code, and naturally we want to make sure to obtain a large minimum distance when taking this direct sum of subspaces. Thus, we seek some additional structure amongst them. While each individual subspace will again be defined by a set of generators M_1, . . . , M_{n−k}, we will now not simply take the joint eigenspace with eigenvalue 1 as our code space. We have already observed that to avoid constructing a trivial code, one restricts the stabilizer not to contain a non-trivial multiple of the identity, −1 ∉ S. This implies that each generator can only have an overall phase of +1 or −1, and they are of the form M_j = ±σ_1 ⊗ · · · ⊗ σ_n for some σ_1, . . . , σ_n ∈ P_1. Now observe that when M_1, . . . , M_{n−k} commute, then so do ±M_1, . . . , ±M_{n−k}. Thus for all t = (t_1, . . . , t_{n−k}) ∈ {0, 1}^{n−k}, one can define a corresponding stabilizer code Q(S_t) as the joint (+1)-eigenspace of (−1)^{t_1}M_1, . . . , (−1)^{t_{n−k}}M_{n−k}. For distinct t and t′ ∈ T, there is a j such that t_j ≠ t′_j. Without loss of generality, suppose that t_j = 1. For all |v⟩ ∈ Q(S_t) and |w⟩ ∈ Q(S_{t′}), one has ⟨v|w⟩ = ⟨v|M_j w⟩ = ⟨M_j v|w⟩ = −⟨v|w⟩ = 0. Consequently, Q(S_t) and Q(S_{t′}) are orthogonal. For any T ⊆ {0, 1}^{n−k}, we define a direct sum stabilizer code (confusingly also known as a union stabilizer code) as Q(S_T) = ⊕_{t ∈ T} Q(S_t). To be able to determine the minimum distance of this quantum code, we first determine the errors which are not detectable. In the following lemma, the set of coordinates where two (n − k)-tuples t and t′ differ is denoted by supp(t + t′). Lemma 4.1 Suppose Q(S_T) is unable to detect an error E. Then there is a pair t, t′ ∈ T such that E commutes with M_j for all j ∈
, n − k} \ supp(t + t ). Proof. Suppose there is no such pair. Then, for all t, t ∈ T , there is a j ∈ {1, . . . , n − k} \ supp(t + t ) for which E anti-commutes with M j . For any u ∈ Q(S t ) and u ∈ Q(S t ), we have either in the case that both u and u are eigenvectors of M j with eigenvalue 1 and in the case that both u and u are eigenvectors of M j with eigenvalue −1. Similarly, for any u, u ∈ Q(S t ), since E anti-commutes with M j , u| E |u = 0. Suppose that B t is an orthogonal basis for Q(S t ). Since Q(S T ) is a direct sum of orthogonal subspaces, is an orthogonal basis for Q(S T ). Suppose that w, w ∈ Q(S T ) and that w|w = 0. Writing out w and w with respect to the basis B T we have This implies that E is detectable, a contradiction. This ends the proof. Thus, according to Lemma 4.1, we only need concern ourselves with the errors which are in Centraliser(S t+t ) for any t, t ∈ T . This motivates the definition where d t+t is the minimum distance of Q(S t+t ). Theorem 4.2 The subspace Q(S T ) is an ((n, |T |2 k , d T )) quantum code. Proof. If E is undetectable then it is an element of Centraliser(S t+t ) for some t, t ∈ T . Thus, the minimum distance of Q(S T ) will be the minimum of the minimum distances of Q(S t+t ). The Rains, Hardin, Shor, Sloane non-additive quantum code This code first appeared in [15], although the geometric observation given here appears to be new. Observe that deleting any two rows of this matrix we obtain a 3 × 10 matrix whose 5 pairs of columns define a quantum set of lines in PG (2,2). This quantum set of lines defines a stabilizer code whose minimum distance is 2. Therefore, if we set then, by Theorem 4.2, Q(S T ) is a ((5, 6, 2)) quantum code. The geometry of direct sum stabilizer codes Suppose that we restrict our choice of elements of T to singleton subsets and the empty set, as in Example 4.3. Let X be the quantum set of lines of PG(n − k − 1, 2) associated with the [[n, k, d]] quantum stabilizer code Q(S), where S is the subgroup generated by M 1 , . . . , M n−k . Let P = {e 1 , . . . , e r } be a set of linearly independent points of PG(n−k −1, 2), chosen so that the projection from any two points e i , e j ∈ P of the lines of X is a set of lines of PG(n − k − 3, 2). If this projection is a set of lines then it is necessarily a quantum set of lines, which we denote by X ij . If we choose a basis so that e j ∈ P is the j-th element in the basis then the projection from e i and e j gives a stabilizer code generated by The parameter d(X ij ) is then the minimum distance of the stabilizer code Q(S e i +e j ). Thus, the definition in (4) will be Hence, we have a purely geometric way to construct direct sum stabilizer codes with parameters ((n, (r + 1)2 k , d T )), for some r n − k. Research Problem 2 Find quantum sets of lines X for which there are points with the property that the projection of the lines of X from any pair is onto a quantum set of lines X with relatively large d(X ). It should be possible to make direct sum stabilizer codes with good parameters from this geometrical construction. It would be of great interest if one could construct codes with parameters for which stabilizer codes could feasibly exist but none are known to exist. The higher-dimensional Pauli group When a quantum system has D levels we speak of a quDit. In this section, we will consider quantum codes over such larger subsystems. Consequently, these codes are subspaces of the Hilbert space (C D ) ⊗n . 
We will consider (C q ) ⊗n , where q = p h , is the power of a prime p. The restriction to prime powers allows us to use the structure of the finite field for their construction. In the case when D is not a prime power, one can use the ring Z/DZ, but then most of the constructions that we will consider here will not work. We label the coordinates of C q with elements of F q , where F q denotes the finite field with q elements. In this way, a basis for the space of endomorphisms of C q can be indexed by the elements of F q × F q . For each a ∈ F q , we define a q × q matrix X(a) to be matrix obtained from from the linear map which permutes the coordinates of C q by adding a to the index. In other words, with basis {|x | x ∈ F q } of C, For each b ∈ F q , we define a q × q matrix Z(b) to be the diagonal matrix whose i-th diagonal entry is w tr(ib) . Here, w = e 2πi/p is a primitive p-th root of unity and tr is the trace map from F q to its prime subfield F p , As in the previous case, if we take say q = 3 then where ω is a primitive complex third root of unity. Recall, that the rows and columns of the matrix are indexed by elements of F q , so i ∈ F q . Thus, We define the Pauli group for q odd as and for q even, that is when p = 2, as The reason that we accommodate this slightly larger group for q even is due to Lemma 5.2 below. One can check that this definition coincides with our definition of the Pauli group for q = 2. The size of P n is pq 2n for q odd and 4q 2n for q even. The following lemma implies that non-identity elements of the Pauli group have order p, for q odd. Note that for q even this is not the case; there are elements of order four. However, we extend the Pauli group as above (defining σ y = iσ x σ z ) and in this way we introduce more elements of order two. We do this so that we have more options for M i in our set of pairwise commuting operators which will generate the abelian subgroup S. 4 Lemma 5.2 For all a, b ∈ F n q and r ∈ N, Proof. By induction on r, we have By Lemma 5.1, this is equal to As in the case of qubit codes, we will again be looking to construct stabilizer codes and for this reason it will be of interest to know when elements M, N ∈ P n commute or not. For this reason the following lemma is fundamental. Proof. X(a) and X(a ) commute, likewise Z(b) and Z(b ), so the lemma follows from Lemma 5.1. Error detection and correction As in the case of qubit codes it suffices to consider errors from the group P n of Pauli-errors which are unitary operators of the form Let Q be a quantum error correcting code of (C q ) ⊗n , i.e. a subspace of (C q ) ⊗n . Then again, as in the case of qubit codes, Q detects an error E ∈ P if for all |φ , |ψ ∈ Q with φ|ψ = 0, we have that for some constant c E which depends only on E. A quantum code Q has minimum distance d if one can detect Pauli-errors with up to d − 1 non-identity matrices and correct Pauli-errors with up to d−1 2 non-identity matrices. We say that a quantum code of (C q ) ⊗n of dimension K and minimum distance d is a ((n, K, d)) q code. If the code has dimension K = q k then we say that the code is a [[n, K, d]] q code. Note that some authors reserve the latter notation [[n, K, d]] q for stabilizer codes only. Stabilizer codes A stabilizer code is the intersection of the eigenspaces with eigenvalue one of the elements of an abelian subgroup S of P n . As before, we denote the code by Q(S). We insist that λ1 ∈ S whenever λ = 1, since otherwise Q(S) is trivial. 
As in the qubit case, a stabilizer code Q(S) with stabilizer S can detect all Paulierrors that are scalar multiples of elements in S or that do not commute with some element of S. We denote by Centraliser(S), the elements of P n that commute with all elements of S. A non-detectable Pauli-error must be in Centraliser(S). Commuting elements are characterised as follows. is zero. As in the case for qubit codes, we introduce the map τ which maps elements of P n to F 2n q by τ (X(a)Z(b)) = (a|b). For elements u, w ∈ F 2n q , the trace symplectic form is (u, w) a = n j=1 tr(u j w j+n − w j u j+n ). Then with u = (a|b) and w = (a |b ), this is the trace symplectic form (5). Stabiliser codes as additive codes over F q Let τ be the map that maps cX(a)Z(b) to (a|b) ∈ F 2n q . The group S is mapped to an additive code C = τ (S). The symplectic weight of (a|b) ∈ F 2n q is the number of i ∈ {1, . . . , n} such that (a i , b i ) = (0, 0). Thus, an element cX(a)Z(b) of weight w is mapped to a vector of symplectic weight w. The elements of Centraliser(S) are mapped to the dual code of C, namely Here the dual ⊥ a is taken with respect to the trace symplectic form (6). We have the following important theorem. Theorem 5.4 An ((n, K, d)) q stabilizer code exists if and only if there exists an additive code C F 2n q of size |C| = q n /K such that C C ⊥a . If K = 1 then d is the minimum symplectic weight of an element of C ⊥a \ C, otherwise d is the minimum symplectic weight of an element of C ⊥a = C. Proof. Let S be an abelian subgroup of P n not containing non-trivial multiples of the identity. Let Q(S) be the corresponding ((n, K, d)) q stabilizer code and let Thus, since P is Hermitian and P 2 = P , the dimension of its image Q(S) is equal to the trace of P . Since tr(M ) = 0 for all M ∈ P n , M = 1 and tr(1) = q n , one has tr(P ) = q n /|S| and so |S| = q n /K, since dim Q(S) = K. We note that C = τ (S) is an additive code since S is an abelian subgroup and has size |S| = q n /K. Since τ (Centraliser(S)) = C ⊥a , we have C C ⊥a . For K = 1, the minimum symplectic weight of any element of C ⊥a \ C is d, since the minimum distance of Q(S) is the minimum weight of the Pauli operators in Centraliser(S)\S. As in the qubit case, if K = 1 then we define the minimum distance of Q(S) to be the minimum weight of the Pauli operators in Centraliser(S) = S, which is equal to the minimum symplectic weight of any element of C ⊥a = C The backwards implication is similar. Let S = τ −1 (C) and define the stabilizer code to be Q(S). Then the dimension follows as above. If K = 1 then the minimum distance of Q(S) corresponds as above to the minimum symplectic weight of an element of C ⊥a \ C, since Centraliser(S) is equal to τ −1 (C ⊥a ) up to a scalar factor. If K = 1 then the minimum distance of Q(S) corresponds to the minimum non-zero symplectic weight of the elements of C ⊥a = C. Constructions The following theorem is known as the Calderbank-Shor-Steane construction. The ⊥ refers to the standard inner product on F n q given by Theorem 5.5 Suppose there are linear codes C 1 and C 2 with parameters [n, k 1 , d 1 ] q and [n, k 2 , d 2 ] q , with the property that C ⊥ Then C is a linear code over F q and for all v = (v 1 |v 2 ) and w = (w 1 |w 2 ) in C, In the above the first term vanishes since v 1 ∈ C ⊥ 1 C 2 and w 2 ∈ C ⊥ 2 . Likewise, the second term vanishes since v 2 ∈ C ⊥ 2 and w 1 ∈ C ⊥ 1 C 2 . Hence, C C ⊥a and Theorem 5.4 applies. 
To determine the minimum distance first note that C ⊥a The dimension of C 2 ×C 1 is k 1 +k 2 and the dimension of C ⊥a is 2n−(n−k 1 )−(n−k 2 ) = k 1 + k 2 , so Thus, by Theorem 5.4, if k 1 + k 2 = n then the minimum distance of the stabilizer code τ −1 (C) is the minimum weight of the elements in (C 1 \ C ⊥ 2 ) ∪ (C 2 \ C ⊥ 1 ). If k 1 + k 2 = n then the minimum distance of the stabilizer code τ −1 (C) is the minimum non-zero weight of the elements in C 2 × C 1 = C ⊥ 1 × C ⊥ 2 , which is equal to the minimum non-zero weight of the elements in Example 5.6 The ternary extended Golay code C 1 is a [12, 6, 6] 3 code for which The 12 Pauli operators generating the stabilizer group S are The next construction is called the F q 2 trick (for qubit codes this is the F 4 trick). It's not really a trick at all but it is a quick and effective way to construct quantum codes. These codes are a very special type of stabilizer code in which we impose more structure on the additive code C. For any two vectors u, v in F n q 2 , we define the Hermitian form and for a F q 2 -linear code E we define Proof. The code D ⊥ h is a [n, k, d ] q 2 code for some d . Fix a basis {e, e q } for F q 2 over F q , where e 2q = e 2 . Let θ be the map from F n q 2 to F 2n q defined by θ((a 1 e + b 1 e q , . . . , a n e + b n e q )) = (a 1 , . . . a n |b 1 , . . . , b n ) Let C = θ(D ⊥ h ), a 2k-dimensional linear code over F q of length 2n. For u ∈ D ⊥ h and u ∈ D, This implies Applying the x → x q map, we get Subtracting the last two equations, Hence, (θ(u), θ(u )) a = 0, and so θ(D) C ⊥a . Since |D| = |C ⊥a | = q 2(n−k) , we have that θ(D) = C ⊥a . Moreover, C = θ(D ⊥ h ) and D ⊥ h D, so C C ⊥a . The symplectic weight of an element of θ(u) is equal to the weight of u, so the minimum symplectic weight of The theorem follows from Theorem 5.4. We will use the construction of Theorem 5.7 to obtain quantum MDS codes in the next section. Research Problem 3 If k is small enough one can multiply the columns of a generator matrix for D ⊥ h with non-zero scalars to obtain an equivalent code for which D ⊥ h D holds. It would be interesting to calculate the combinatorial threshold for codes when this can always be done and then deduce properties of codes which surpass this threshold. The geometry of quqit codes In the case q = p h , Theorem 5.4 implies that the existence of a ((n, q n /p r , d)) q stabilizer code Q(S) is equivalent to the existence of an additive code C C ⊥a of length 2n, such that C is generated by r vectors of F 2n q that are linearly independent over F p . Thus, the code C is generated by a r × 2n matrix G(S) over F p and its columns are vectors in F r q . We have seen in Section 3.3 that when h > 1, we should consider those columns as subspaces of PG(r − 1, p) and not as points of PG(r − 1, q). Let x i be the i-th column of the matrix G(S) and let e be an element of F q with the property that {1, e, e 2 , . . . , e h−1 } is a basis for F q over F p . Then there are vector x i,j ∈ F r p such that Let i be the subspace as a subspace of PG(r − 1, p). The following lemma can be considered as a generalisation of Lemma 3.6 Thus, by Lemma 5.8, the geometry of the stabilizer code Q(S) for which the minimum non-zero weight of Centraliser(S) is at least two, is given by a set X of (2h − 1)dimensional subspaces of PG(r − 1, p) of size n. The following lemma allows us to deduce the minimum distance of Q(S), at least in the case that Q(S) is pure. Proof. Suppose that there is an element in Centraliser(S) of weight w. 
Then the image under τ of this element is a vector v ∈ C ⊥a with symplectic weight w. Let D be the support of v restricted to the first n coordinates. As before, let x i be the i-th column of the matrix G(S) and define x ij as in (8). Since The summand is a point of the subspace i and there are |D| = w such points. This proves the backwards implication. Suppose there are w dependent points incident with distinct subspaces of X . Then there is a subset D ⊆ {1, . . . , n} of size w and λ i,j , λ i+n,j ∈ F p , such that Recall that Since i is a (2h − 1)-dimensional subspace, the points x j , x p j , . . . , x p h−1 j are h linearly independent points, which implies there are µ i,r ∈ F q such that Since x i,j ∈ F r p , we have that µ i,r = µ p r i , for some µ i . Substituting in the above gives, The property that defines X as a quantum set of lines for p = 2 does not carry over to the case p 3. This is because we can scale any column of G by an element of F q \ {0, 1} and not alter the set of lines X . This will alter the value of (u, v) a , so the geometric interpretation of C C ⊥a will not be so clean as in the qubit case. Moreover, it is difficult to deduce the pureness of the code directly from the geometry. To see this, suppose that v ∈ C ⊥a has symplectic support D and for simplicity sake assume that q is prime. Then Now, v ∈ C if and only if there is an a ∈ F r p such that v i = a · x i . This implies that the lines not incident with the dependent points are once again contained in a hyperplane, but we cannot deduce that the points of the dependencies are contained in the hyperplane a · X = 0. Indeed, the fact that for some non-zero scalar λ i ∈ F q . Since this λ i depends on i, we cannot deduce that v i = a · x i for all i = 1, . . . , 2n. However, this also means that when p 3 we have some flexibility in choosing a basis for i and this choice will affect whether C C ⊥a . Consider the set of n (2h − 1)dimensional subspaces of PG(4n − 1, p) associated with a pure [[n, n − 4, 3]] q stabilizer code. By Lemma 5.9, these subspaces are pairwise skew. In geometrical language this is called a partial spread. To construct such a code, according to Theorem 5.7, it suffices to construct a [n, n − 2, 3] q 2 linear code D for which D ⊥ h D. Such a code is has a generator matrix x 1 x 2 . . . x n y 1 y 2 . . . y n , For any n q 2 + 1 such a matrix can be found by scaling the first three columns so that the equation in (9) are satisfied. Research Problem 4 The Glynn et al [5] manuscript developed the geometry of qubit stabilizer codes, introducing the concept of a quantum set of lines. This led them to prove Theorem 3.14, which gives a beautiful geometric classification of qubit stabilizer codes. Here, we have generalised the concept of quantum set of lines to non-qubit stabilizer codes. Although we have seen that the existence of non-identity non-zero scalars means we cannot hope for such a clean geometric classification, one can certainly expect some geometric classification for larger q. 6 Quantum MDS codes Stabiliser MDS codes Let C be a code of length n and minimum distance d over an alphabet of size q. If we consider any n − (d − 1) coordinates then any two codewords must be different on these coordinates (if not the distance between them is at most d − 1), so there are at most q n−d+1 codewords in the code. This is the Singleton bound |C| q n−d+1 . A code which attains the Singleton bound is called a maximum distance separable code or simply an MDS code. 
Recall that if C is an additive code over F q , where q = p h for some prime p, then C is linear over F p and so necessarily |C| = p r for some r, see Section 3.3. Thus, if C is also an MDS code then h divides r and |C| = q k , where k = n − d + 1. Theorem 5.4 states that an [[n, k, d]] q stabilizer code exists if and only if there exists an additive code C F 2n q of size |C| = q n−k such that C C ⊥a and the minimum symplectic weight of an element of C ⊥a \ C is d. Considering C ⊥a as a code over the alphabet F q × F q , then C ⊥a has minimum weight d, so |C ⊥a | q 2n−2d+2 . Since |C| = q n−k we have that |C ⊥a | = q n+k , which implies that for a [[n, k, d]] q stabilizer code to exist, we must have the condition k n − 2(d − 1). Compare this with the Singleton bound above k n − (d − 1), for codes of size q k . What is perhaps surprising is that this bound holds for all [[n, k, d]] q quantum codes. The quantum Singleton bound states that n k + 2(d − 1) . Consequently, codes reaching equality are called quantum maximum distance separable codes or QMDS codes for short. We will prove this bound in Section 6.3. Reed-Solomon codes The classical example of an MDS code is the following linear code over F q . Denote by {a 1 , . . . , a q } the elements of F q . The Reed-Solomon code is where f k−1 denotes the coefficient of X k−1 in f (X). If k q then each polynomial f defines a different codeword, so the dimension of C is k. A non-zero codeword has weight at least n − k + 1, since a polynomial of degree at most k − 1 has at most k − 1 zeros. Lemma 3.1 then implies that the minimum distance d = n − k + 1 and so the code is MDS. We can use Theorem 5.7 to construct quantum stabilizer codes from Reed-Solomon codes over F q 2 , but only if we can scale the coordinates of C so that C C ⊥ h . Then D = C ⊥ h is a [n, n − k, k + 1] q 2 linear MDS code with the property that D ⊥ h D. Observe that replacing the i-th coordinate f (a i ) by λ i f (a i ) does not alter the parameters of the code. Such a code is then called a generalised Reed-Solomon code. This can only be done for k q, in which case we obtain a [[q 2 + 1, q 2 + 1 − 2k, k + 1]] q stabilizer code. For case k = q, one can check that the Reed-Solomon code is contained in its Hermitian dual, so there is no need to scale in this case. Quantum Singleton bound To prove the quantum Singleton bound we will need some technical tools. 1. Bloch decomposition. Let {e i } be a basis for the space of complex D × D matrices such that tr e † i e j = Dδ ij . For qubits, take for example the Pauli matrices. Every one-quDit density matrix can then be expanded as where we recall that the trace of a matrix is given by the sum of its diagonal elements, tr(M ) = i m ii for any square matrix M = (m ij ). Consider now an n-partite system in the space (C D ) ⊗n . Denote by {E α }, with a multi-index α = (α 1 , . . . , α n ), the matrix basis formed by tensor-products of the e i 's For tensor products, such as say E ⊗ F , one has tr(E ⊗ F ) = tr(E) · tr(F ). In other words, the trace of a tensor product factorizes. Consequently tr E † α E β = D n δ αβ , and the matrix basis formed by {E α } is orthogonal. Denote by wt(E α ) the number of non-identity terms in the tensor-decomposition, and by supp(E α ) the collection of sites where the non-identity terms act on. Naturally, wt(E α ) = | supp(E α )|. We can expand an n-partite state as As above, we from now on omit the index α for readability. This is the Bloch decomposition of ρ. 
The function tr j is called the partial trace and its action can be understood as that of removing the j-th tensor component. The partial trace does not depend on the basis. Its coordinate-free definition is the following: Let V and W be two vector spaces and denote by I W the identity matrix on W . The partial trace tr W is the unique operator, which for all M acting on V ⊗ W and N acting on V satisfies tr(M · (N ⊗ I W )) = tr(tr W (M ) · N ) . Considering the Hilbert-Schmidt inner product M, N = tr M † N , the partial trace can be seen as the adjoint to the map V → V ⊗ I W . Note that partial traces over different subsystems commute, tr j tr i = tr i tr j and one has that tr(M 1 ⊗ M 2 ⊗ . . . ⊗ M n ) = tr(M 1 ) tr(M 2 ) · · · tr(M n ) . Purification. A density matrix ρ on H A can always be diagonalized as where {|λ i A } is its set of eigenvectors and {λ i } is its set of corresponding eigenvalues. It can be checked that tr B (|φ φ|) = ρ and the state |φ is known as a purification of ρ. 4. Von Neumann entropy. Consider a classical probability distribution represented by a set of probabilities p i ≥ 0 with i p i = 1. Its Shannon entropy is We can introduce a similar quantity for quantum states. Given a density matrix ρ, its von Neumann entropy is defined as Such matrix functions of hermitian operators can be evaluated on their eigenvalues {λ i }. Then the von Neumann entropy evaluates as Let us now write S A = S(tr B [ρ AB ]) and so on. For a state ρ on H A with purification |φ ∈ H A ⊗ H B , we have that S A = S B . The von Neumann entropy satisfies subadditivity and strong subadditivity, We are now in position to prove the Quantum Singleton bound. Proof. The distance must be bounded by 2(d − 1) < n, as otherwise n − (d − 1) < (d − 1) and we could recover the encoded state from two disjoint subsystems, violating the no-cloning theorem. Let Π Q = q k i=1 |v i v i | be the projector onto the code space. A purification with a reference system R leads to where |i R is any orthonormal basis for R. Let us partition the code into the three subsystems A, B, C, such that |A| = |B| = d − 1 and |C| = n − 2(d − 1). Then S R = log q k . As the code has distance d, any subsystem of size strictly smaller than d cannot reveal anything about the reference system R: indeed the condition of RA = R ⊗ A is known to be a necessary and sufficient condition for the subsystem A to be correctable [13]; this is also equivalent to S RA = S R + S A . With the subadditivity of the von Neumann entropy this leads to where we used that the entropies of complementary subsystems are equal for a pure state. The combination of the above two inequalities yields (b) For every subset S ⊂ {1, . . . , n} with |S| ≤ n+k 2 , we have that tr S c (P ) ∝ 1, where P is the orthogonal projection onto the quantum MDS code. Let us discuss these properties: a) states that QMDS codes form families of codes where n + k is constant. Within each family, only the member with the highest distance has to be determined, as its descendants can be obtained by a partial trace: tracing out over a single particle, one has n → n − 1, k → k + 1, d → d − 1. This works because QMDS codes are pure codes, that is, all their (d − 1)-party marginals are maximally mixed. For general quantum codes, this method of making new codes from old is not necessarily possible. Property (b) states that for all pure states |v in the code, the marginals of size less than d are maximally mixed. 
This implies that every vector in the code space shows maximal bipartite entanglement across any bipartition of d − 1 vs. n − d + 1 parties. Thus QMDS codes form subspaces that show high bipartite entanglent. We This should be compared to the "trivial" upper bound for MDS codes. If there is a (n, q k , n − k + 1) q MDS code then n q + k − 1. The MDS conjecture states that if 4 k q and there is a (n, q k , n − k + 1) q MDS code then n q + 1. This is known to hold for linear codes if q is a prime, see [3]. Let G be a k × n generator matrix for C and let X be the set or multi-set of columns of G, viewed as points of PG(k − 1, q). In Section 3.2, we saw that a non-zero codeword u = aG corresponds to a hyperplane π a of PG(k − 1, q) and that π a = π λa for any λ ∈ F q . The number of points of X incident with the hyperplane π a is n minus the weight of the codeword u. Thus, for i = 0, there are A i /(q − 1) hyperplanes which are incident with n − i points of X . MacWilliams identity for quantum codes As for classical codes, weight enumerators can be defined for quantum codes, which again are useful to deduce the error-correcting properties of codes and to obtain bounds on their existence. Let Q be a quantum code and let P be the orthogonal projection onto Q. The weights of the primary and secondary Shor-Laflamme enumerators are Hence, if E ∈ S, tr(EP ) tr E † P = 0 and if E ∈ S then tr(EP ) tr E † P = q 2n /|S| 2 . Thus, A j is q 2n /|S| 2 times the number of elements in the stabilizer subgroup S that have weight j. We leave the result for B j as an exercise. The geometrical interpretation of A j for stabilizer codes is as follows. Suppose that X is a quantum set of lines in PG(n − k − 1, q). Then A j is (q − 1) times number of hyperplanes containing n − j lines of X . Before proving the quantum MacWilliams identity, consider the following example. where e 2 = e + 1. One can prove that the minimum distance is 4 by checking that all 3 × 3 submatrices are non-singular. By verifying that the hermitian inner product (7) between any two rows is zero, one quickly concludes that D = D ⊥ h . Theorem 5.7 implies that we can construct a [[6, 0, 4]] 2 stabilizer code Q(S) from D. By writing out the entries in the matrix over F 2 and considering the F 2 span we obtain the matrix G(S) for this quantum code. Thus, the stabilizer subgroup has generators By Lemma 5.9, the quantum set of six lines X we get from the matrix G(S) has the property that any three lines of X span the whole space PG(5, 2). Therefore, any two span a three-dimensional subspace which is contained in three hyperplanes which contain no further line of X . Thus, there are 45 hyperplanes which contain exactly two lines of X . Let be a line of X . There are 15 hyperplanes containing , so counting pairs ( , π) where ∈ X and π is a hyperplane containing , we conclude that any hyperplane containing a line of X contains two lines of X . Thus, we work out the weight distribution. For codes with k = 0 (that is, pure states), both weight distributions coincide; this can be checked from the definition. From before, we have that A j is the (q − 1) times number of hyperplanes containing n − j lines of X . Thus, we have proved that the weight distribution for the quantum hexacode is (A 0 , . . . , A 6 ) = (1, 0, 0, 0, 45, 0, 18). Research Problem 9 For stabilizer codes, A j and B j count the number of terms in the stabilizer S and its normaliser N (S) respectively; there is no such combinatorial interpretation for general quantum codes. 
Although A j can interpreted as the Hilbert-Schmidt norms of the j-body correlations that appear in the code, we would like to determine what object B j is counting for non-stabilizer codes. We return to the proof of the quantum MacWilliams identity. Proof. [Quantum MacWilliams identity] We will only state a proof sketch; the rather tedious combinatorial details can be found in [11,14]. Let S be a collection of subsystems and denote by tr S the partial trace the systems in S. Denote by S c the complement of S in {1, . . . , n}. Consider now how the partial trace tr S followed by a "padding" with the identity acts on an operator P . tr S (P ) ⊗ 1 S = tr S 1 q n E tr E † P E ⊗ 1 S = 1 q n−|S| supp(E)⊆S c tr E † P E . (10) It can be shown (c.f. Appendix A in Ref. [11]) that this can also be written as tr S (P ) ⊗ 1 S = U (q n ) s.t. supp(U )⊆S where the integration is over the unitarily invariant Haar measure of unitary matrices that act trivially on the subsystem S c . The second equality follows from the fact that any complete orthonormal matrix basis {E α } containing the identity forms a unitary 1-design 5 . The quantum MacWilliams identity now essentially follows from equating Eqs. (10) and (11), summing over all subsystems of size |S| = m, multiplying by P , and taking the trace. This yields terms of the form tr E † P tr(EP ) and tr E † P EP , corresponding to the two types of weights A j and B j . Using generating functions, in other words the weight enumerator polynomials A(x, y) and B(x, y), and Krawtchouk polynomials, this yields the MacWilliams identity q n B(x, y) = A(x + (q 2 − 1)y, x − y). 5 t-designs replace the integration over some compact group by a finite sum. A unitary t-design is a set of unitaries Ui, i = 1, . . . , K acting on C q , such that U (D) Pt,t(U )dU = 1 K K i=1 Pt,t(Ui) holds for every homogeneous polynomial Pt,t that has degree t in the matrix elements of U and degree t in the matrix elements of U * . This ends the proof sketch. The enumerators and their weights have a couple of interesting properties: Let K = dim(imP ). a) The weights A j and B j are invariant under the local choice of basis and are so-called local unitary invariants (LU-invariants). That is, A j (P ) = A j (P ) and B j (P ) = B j (P ) , where P = (U 1 ⊗ . . . ⊗ U n )P (U † 1 ⊗ . . . ⊗ U † n ) and U 1 , . . . , U n are unitary q × q matrices. b) A 0 = dim(P ) and KB j ≥ A j ≥ 0. c) A projection operator P with K = dim(im(P )) is a code of distance d, if and only if it satisfies KB j = A j for 0 ≤ j < d. d) One can check that for codes with K = 1, the enumerator polynomial is invariant under the quantum MacWilliams transform, and one has B(x, y) = A(x, y). When such a code is of stabilizer type, it corresponds to a classical self-dual code. Some comments are in order. The weights must be LU-invariant -the properties of the code should not depend on the way one sets up the local coordinate system for each spin particle. The last two properties are useful to obtain weights of hypothetical codes and to apply the machinery of linear programming bounds [2]. That is, one sets up a system of linear equalities and inequalities in the variables A 0 , . . . , A n making use of a), b), and the quantum MacWilliams identity. We refer to the tables by M. Grassl [8] for more existence results.
25,834.2
2020-07-12T00:00:00.000
[ "Computer Science", "Physics" ]
Deletion of Metabotropic Glutamate Receptors 2 and 3 (mGlu2 & mGlu3) in Mice Disrupts Sleep and Wheel-Running Activity, and Increases the Sensitivity of the Circadian System to Light Sleep and/or circadian rhythm disruption (SCRD) is seen in up to 80% of schizophrenia patients. The co-morbidity of schizophrenia and SCRD may in part stem from dysfunction in common brain mechanisms, which include the glutamate system, and in particular, the group II metabotropic glutamate receptors mGlu2 and mGlu3 (encoded by the genes Grm2 and Grm3). These receptors are relevant to the pathophysiology and potential treatment of schizophrenia, and have also been implicated in sleep and circadian function. In the present study, we characterised the sleep and circadian rhythms of Grm2/3 double knockout (Grm2/3 -/-) mice, to provide further evidence for the involvement of group II metabotropic glutamate receptors in the regulation of sleep and circadian rhythms. We report several novel findings. Firstly, Grm2/3 -/- mice demonstrated a decrease in immobility-determined sleep time and an increase in immobility-determined sleep fragmentation. Secondly, Grm2/3 -/- mice showed heightened sensitivity to the circadian effects of light, manifested as increased period lengthening in constant light, and greater phase delays in response to nocturnal light pulses. Greater light-induced phase delays were also exhibited by wildtype C57Bl/6J mice following administration of the mGlu2/3 negative allosteric modulator RO4432717. These results confirm the involvement of group II metabotropic glutamate receptors in photic entrainment and sleep regulation pathways. Finally, the diurnal wheel-running rhythms of Grm2/3 -/- mice were perturbed under a standard light/dark cycle, but their diurnal rest-activity rhythms were unaltered in cages lacking running wheels, as determined with passive infrared motion detectors. Hence, when assessing the diurnal rest-activity rhythms of mice, the choice of assay can have a major bearing on the results obtained. Introduction Schizophrenia is a debilitating psychiatric disorder that affects 0.3-0.7% of the world's population [1], while sleep and/or circadian rhythm disruption (SCRD) is observed in 30-80% of schizophrenia patients [2]. Sleep disturbances in schizophrenia include increases in sleep latency and sleep fragmentation, and reductions in total sleep time, sleep efficiency, rapid eye movement (REM) sleep latency, REM sleep density and slow-wave sleep duration [2][3][4]. Typical circadian abnormalities in the disorder include the abnormal phasing, instability and fragmentation of rest-activity rhythms [5][6][7][8][9]. Some of these abnormalities may be secondary to the disorder and its drug treatment, but the co-morbidity of schizophrenia and SCRD may also stem from dysfunction in common brain mechanisms (e.g. specific neurotransmitter systems) [10,11]. The glutamate system is one such candidate mechanism. There is strong evidence of glutamatergic dysfunction in schizophrenia [12,13], while glutamate is the principal neurotransmitter within the retinohypothalamic tract (RHT), which conveys photic entrainment cues from the retina to the circadian timing system in the suprachiasmatic nucleus (SCN) [14][15][16]. Glutamatergic signalling is also important for bilateral communication between the left and right SCN [17], while the SCN stimulates the paraventricular and ventrolateral preoptic nuclei via glutamatergic efferents [18][19][20][21][22]. 
Glutamate release in the prefrontal cortex displays rhythmic fluctuations during the sleep-wake cycle; it increases during wakefulness and REM sleep episodes, but decreases during non-REM sleep [23][24][25]. At the receptor level, group II metabotropic glutamate receptors may be particularly relevant to the pathophysiology of schizophrenia. mGlu2 and mGlu3, encoded by the genes Grm2 and Grm3, are presynaptic inhibitory autoreceptors that are widely expressed in both rodent and human brain [26][27][28][29][30][31]. Grm2 expression is predominantly neuronal, whereas Grm3 expression is primarily glial [28]. In a recent meta-analysis of genome-wide association studies, a strong association was reported between the Grm3 locus and schizophrenia [32], extending prior evidence that Grm3 is involved in the genetic predisposition to psychosis [33][34][35][36]. Polymorphic variation in Grm3 has also been associated with cognitive performance, prefrontal glutamatergic neurotransmission and hippocampal pathophysiology [33,37]. In addition, there is clinical and preclinical evidence that mGlu2 & 3 may be therapeutic targets for the disorder, although no drug in this category has yet been licensed for use [38]. In the present study, we characterised the sleep and circadian rhythms of Grm2/3 double knockout (Grm2/3 -/-) mice. Although expression of the dimeric form of mGlu3 is reduced in the brains of schizophrenia patients [53], mGlu2 and mGlu3 are not completely absent, so the Grm2/3 -/mouse cannot be considered a 'disease model'. Instead, our primary aim was to obtain additional evidence for the involvement of group II metabotropic glutamate receptors in the regulation of sleep and circadian rhythms. Based on the evidence described above, we predicted that Grm2/3 -/mice would demonstrate a reduction in total sleep duration, increased sleep fragmentation, and heightened sensitivity to the circadian effects of light (e.g. lightinduced phase shifts of greater magnitude). Age-matched males were used in all experiments. Mice were at least 2 months old at the onset of behavioural screening and were no older than 9 months upon completion of testing. Only males were tested due to the potentially confounding influence of the oestrus cycle on wheel-running activity [59]. 2 transgenic cohorts were used; cohort 1 consisted of 11 Grm2/ 3 +/+ mice and 12 Grm2/3 -/mice, while cohort 2 was comprised of 12 mice of each genotype. For the experiment involving the administration of the drug RO4432717 (F.Hoffmann-LaRoche, Basel, Switzerland), a separate cohort of 22 male wildtype C57Bl/6J mice was used (Harlan, UK). Mice were singly-housed and had access to food and water ad libitum. All behavioural procedures were performed in accordance with the United Kingdom Animals (Scientific Procedures) Act of 1986 and the University of Oxford Policy on the Use of Animals in Scientific Research. All experiments were approved by the University of Oxford Animal Welfare and Ethical Review Board, and were conducted under the PPL 30/2812 by PILs 30/9339 and 30/8553. Order of tests Cohorts 1 and 2 completed an extensive wheel-running screen consisting of several standard protocols (full details below). Video-tracking, passive-infrared (PIR) motion detection and object interaction testing were only performed on cohort 2. The order of these tests was counterbalanced across genotype groups. Half of cohort 2 underwent video-tracking followed by PIR motion detection, then wheel-running, and finally object interaction testing. 
The other half of cohort 2 were subjected to PIR motion detection followed by video-tracking, then object interaction testing, and finally wheel-running. contained 6 cages, with 3 mice of each genotype housed in alternating positions. A standard 12:12 h light/dark (12:12 LD) cycle was employed and illuminance was 100 lux at the base of each cage. Three 24 h videos (i.e. 3 separate 12:12 LD cycles) were recorded at weekly intervals, using a near-infrared CCTV camera (Maplin Electronics, Rotherham, UK). All videos were recorded at 3 frames per second, in the AVI file format. An acrylic block was placed under the food hopper to keep the mouse in the recording field at all times. Video files were stored on a digital hard drive recorder (Samsung, Suwon, South Korea) prior to analysis with ANY-maze 4.5 (Stoelting, Wood Dale, Illinois). Multiple immobilitydetermined sleep parameters were extracted from the video footage: total sleep time, light and dark phase sleep time, light and dark phase sleep bouts, and light and dark phase sleep bout duration. Sleep was defined as a period of immobility of at least 40 s, a previously established proxy measure of sleep [60]. This measure has an extremely high concordance (>95%) with EEG-based sleep determination [60]. Immobility sensitivity was set at 95% to prevent the detection of movements caused by breathing during sleep. In addition to the sleep parameters outlined above, four activity parameters were extracted from the video footage using ANY-maze 4.5: total activity, light and dark phase activity (distance travelled in metres), and relative light phase activity (the percentage of total activity occurring during the light phase). Note that one Grm2/3 +/+ mouse was excluded from all dark phase sleep and activity analyses as it escaped from its cage during dark phase recording. 3-5 days after the third 24 h recording, mice were exposed to a 1 h 100 lux light pulse from zeitgeber time (ZT) 16 to 17. (ZT0 refers to the onset of the light phase, while ZT12 denotes the onset of the dark phase). Negative masking-percent activity suppression induced by the light pulse-was computed to provide an indication of photosensitivity [61]. This was calculated by comparing activity levels during the light pulse with average activity levels during the same time window (ZT16-17) across the previous 3 days. Passive-infrared (PIR) motion detection. Due to the prohibitive size of the video files, it was impractical to analyse more than 72 h of video data. PIR motion detection provides a less data-heavy alternative, enabling the estimation of sleep over a longer period; in this case, 14 consecutive days of 12:12 LD at 100 lux. It also enables the computation of specific parameters that cannot be extracted from isolated 24 h video recordings, such as period length (see below). Full details of this novel technique, developed in our laboratory, should be published in the near future [Brown et al., in review]. Housing conditions for the PIR protocol were identical to that of the video-tracking protocol outlined above. Again, an acrylic block was placed under the food hopper to keep the mouse in the motion detection field at all times. Raw data from the PIR motion sensors took the form of % time active per 10 s epoch, with sensors activated both by gross locomotion and small movements such as turning of the head. 
Three immobilitydetermined sleep parameters were extracted from the raw PIR data using Microsoft Excel 2013 (Microsoft, Redmond, USA): total sleep time, light phase sleep time and dark phase sleep time. Note that methods do not yet exist to automatically compute parameters pertaining to the frequency and duration of sleep bouts from PIR data. As with the previously described videobased method of sleep determination [60], sleep was defined as a period of immobility of at least 40 s. We have recently confirmed that this measure has an extremely high concordance (>95%) with both video-based sleep determination and EEG-based sleep determination [Brown et al., in review]. In addition to the sleep parameters outlined above, four activity parameters were extracted from the raw PIR data using the ClockLab toolbox (Actimetrics, Illinois, USA) for MatLab (MathWorks, Massachusetts, USA): total activity, light and dark phase activity (expressed in arbitrary units), and relative light phase activity (as above). Standard diurnal rest-activity rhythm parameters were also computed from the raw PIR data using ClockLab; these comprised two measures of diurnal rest-activity rhythm fragmentation (daily activity bouts and daily activity bout duration), two measures of diurnal rest-activity rhythm consistency (chi-square periodogram amplitude and onset tau error), and period length. Activity bouts were defined using pre-established criteria for ClockLab (maximum gap: 18 min; threshold: 5%). A sixth diurnal rest-activity rhythm parameter-interdaily stabilitywas computed using the software package Actiwatch Activity & Sleep Analysis 7 (Cambridge Neurotechnology, Cambridge, UK). Interdaily stability is a non-parametric measure that quantifies the similarity of activity patterns across multiple days [62]. Diurnal rest-activity rhythm and circadian screening Wheel-running. This screen was based on an established protocol [63][64][65]. Housing conditions were identical to that of the video-tracking and PIR protocols described above, except that cages were fitted with running wheels (18 cm diameter). As previously, illuminance was 100 lux at the base of each cage. Mice in both cohorts were exposed to 14 days of 12:12 LD and 11 days of constant dark (DD). In addition, cohort 1 was subjected to 13 days of constant light (LL), and a 6 h phase advance during 12:12 LD. During 12:12 LD, four activity parameters were extracted from the raw wheel-running data using ClockLab: total activity, light and dark phase activity (wheel rotations), and relative light phase activity (as above). The six diurnal rest-activity rhythm parameters computed from the raw PIR data (see above) were also computed from the raw wheel-running data, using Clock-Lab and Actiwatch Activity & Sleep Analysis 7. Again, activity bouts were defined using preestablished criteria (maximum gap: 18 min; threshold: 5 counts/min). Note that two Grm2/3 -/mice were omitted from these activity bout analyses as their activity levels were too low to derive bout data. ClockLab was also used to quantify period length under free-running conditions (i.e. DD and LL), while re-entrainment following the 6 h phase advance was scored manually from individual actograms as the number of days taken to re-entrain. Both cohorts were subjected to type I and type II phase-shifting light pulses [66] (using previously described protocols [64,67]) to induce a phase delay in their wheel-running rhythms, providing an indication of circadian photosensitivity. 
For the type I pulse, mice were exposed to 100 lux light for 15 minutes at circadian time (CT) 16 during DD. The magnitude of the phase delay was quantified by fitting one regression line through 10 consecutive activity onsets preceding the light pulse, and another through 6 consecutive activity onsets following the pulse. The first 2 activity onsets following the light pulse were disregarded because of possible transition effects [64]. For the type II pulse, mice were exposed to 100 lux light for one hour at ZT16 during a standard 12:12 LD cycle. Following the pulse, mice were released into DD. The magnitude of the phase delay was quantified by fitting one regression line through 6 consecutive activity onsets preceding the light pulse, and another through 6 consecutive activity onsets following the pulse. This time, only one activity onset following the light pulse was disregarded [64]. For both light pulses, the magnitude of the phase delay was calculated as the time difference between the two regression lines on the first day after the light pulse. Negative masking was also computed for the type II pulse, using the same method employed in the videotracking screen. Administration of an mGlu2/3 negative allosteric modulator to wildtype mice prior to a type I phase-shifting light pulse To further investigate the hypothesis that group II metabotropic glutamate receptors modulate photic entrainment, the mGlu2/3 negative allosteric modulator RO4432717 was administered to wildtype mice in a standard phase-shifting paradigm. The pharmacology of RO4432717 (previously known as compound 7i) is described elsewhere [68]. 22 male wildtype C57Bl/6J mice (aged 80 days) were individually housed in large cages (44 cm long × 26 cm wide × 12 cm high), fitted with running wheels (18 cm diameter), under a 12:12 LD cycle at 400 lux. All mice were kept in DD for 1 day prior to the administration of a 30 min type I phase-shifting light pulse (400 lux) at CT16. 3 mice underwent no pre-treatment, while 6 mice received an intraperitoneal injection of saline (0.3 ml) at CT12 on the day of the light pulse. The remaining 13 mice received an intraperitoneal injection of RO4432717 at CT12. Three dosages were used: 3 mg (4 mice), 10 mg (5 mice) and 30 mg (4 mice). Injection volumes were kept constant at 0.3 ml. The magnitude of the phase delay was calculated using the method described above. Home-cage object interaction test To evaluate whether the altered wheel-running behaviour of Grm2/3 -/mice was a consequence of increased object neophobia (toward their running wheels), a simple object interaction task was conducted within the home-cage. The results of this test are presented in S1 Fig. Statistical Analyses All statistical analyses were performed with SPSS 22.0 (IBM, Armonk, New York). Unless otherwise stated, all reported statistics are the result of analyses of variance, with genotype included as the principal independent variable. Differences were considered to be statistically significant at p-values < 0.05. Greenhouse-Geisser corrections were applied where appropriate, but uncorrected degrees of freedom are reported in order to preserve the transparency of the statistical design. For analyses involving both experimental cohorts, further analyses of variance were conducted with cohort included as an extra independent variable. However, no additional effects or interactions were observed. In all figures, Ã indicates a p-value 0.05, ÃÃ indicates a p-value 0.01, and ÃÃÃ indicates a p-value 0.001. 
Error bars depict the standard error of the mean. M = mean. Wheel-running and video-tracking analyses: Negative masking during nocturnal light pulses. Negative masking was unaffected by genotype, regardless of the method used to assay locomotor activity. Genotype had no effect on the degree of activity suppression induced by a type II light pulse in either the wheel-running screen (F 1,45 = 0.994, P = 0.324; Fig 2E) or the video-tracking screen (F 1,22 = 0.111, P = 0.743; Fig 2F). Wildtype mice are more sensitive to the circadian effects of light following the administration of an mGlu2/3 negative allosteric modulator To discount the possibility that the increased light sensitivity of Grm2/3 -/mice is a consequence of altered neurodevelopment, we evaluated the impact of the mGlu2/3 negative allosteric modulator RO4432717 on the phase-shifting responses of wildtype C57Bl/6J mice. There was no difference in the magnitude of phase delays (induced by a type I light pulse at CT16) between untreated wildtype mice and wildtype mice injected with saline at CT12 (F 1,7 = 0.096, P = 0.766; Fig 2G). Relative to saline-treated mice, phase delays were greater in mice injected with RO4432717 at CT12. This difference was statistically significant at (Fig 2C) and type II (Fig 2D) phase-shifting light pulses are larger in Grm2/3 -/than Grm2/3 +/+ mice. (E & F) Grm2/3 +/+ and Grm2/3 -/mice show similar levels of negative masking during a type II light pulse, as assayed by wheel-running ( Fig 2E) and video-tracking (Fig 2F). (G) In wildtype C57Bl/6J mice, phase delays induced by a type I light pulse are enhanced following the administration of the mGlu2/3 negative allosteric modulator (NAM) RO4432717. The diurnal rest-activity rhythms of Grm2/3 -/mice are dependent on the assay used to measure locomotor behaviour Wheel-running activity analyses. Over 14 days of continuous recording under 12:12 LD, total wheel-running activity was markedly reduced in Grm2/3 -/mice relative to Grm2/3 +/+ mice (F 1,45 = 8.165, P = 0.006; Fig 3A). This reflects the fact that Grm2/3 -/mice were significantly less active than Grm2/3 +/+ mice during the dark phase (F 1,45 = 8.503, P = 0.006; Fig 3C). By contrast, genotype had no effect on activity during the light phase (F 1,45 = 1.257, P = 0.268; Fig 3B); indeed, numerically speaking, Grm2/3 -/mice were more active than Grm2/3 +/+ mice during the light phase. Consequently, the percentage of total daily activity which occurred during the light phase was greater in Grm2/3 -/mice than Grm2/3 +/+ mice (F 1,45 = 7.585, P = 0.008). (Note that mean values for all wheel-running analyses are included in Table 1, and representative actograms of 12:12 LD are depicted in Fig 4A). Video-tracking activity analyses. When mice were housed in cages lacking running wheels, and locomotor activity was determined with near-infrared cameras, the rest-activity profiles of Grm2/3 -/mice were very different from those derived using running wheels (see Fig 3D). As with running wheels, light phase activity was greater in Grm2/3 -/mice than Grm2/ 3 +/+ mice (F 1,22 = 6.166, P = 0.021; Fig 3E), although the percentage of total daily activity which occurred during the light phase did not vary according to genotype (F 1,21 = 0.625, P = 0.438). Crucially, however, genotype had no effect on dark phase activity (F 1,21 = 0.217, P = 0.646; Fig 3F) or total activity levels (F 1,21 = 0.395, P = 0.536). 
There were no effects or interactions involving recording day for any of these analyses, reflecting the fact that the activity profiles of Grm2/3 +/+ and Grm2/3 -/mice were relatively consistent across the three recording sessions. Note that specific parameters such as period length cannot be extracted from isolated 24 h video recordings. PIR activity analyses. When mice were housed in cages lacking running wheels, and locomotor activity was determined with PIR motion detectors, the rest-activity profiles of Grm2/ Sleep & Circadian Rhythms in Grm2/3 Double Knockout Mice 3 -/mice were again very different from those derived using running wheels (see Fig 3G). As with running wheels, light phase activity was greater in Grm2/3 -/mice than Grm2/3 +/+ mice (F 1,22 = 2.551, P = 0.124; Fig 3H), and the percentage of total daily activity which occurred during the light phase was higher in Grm2/3 -/mice (F 1,22 = 3.991, P = 0.058), although neither of these effects reached statistical significance. More importantly, however, genotype had no effect on dark phase activity (F 1,22 = 0.606, P = 0.445; Fig 3I) or total activity levels (F 1,22 = 0.122, P = 0.730). (Note that mean values for all PIR analyses are included in Table 2, and representative actograms of 12:12 LD are depicted in Fig 4B). Discussion The present study investigated sleep and circadian rhythms in Grm2/3 double knockout (Grm2/3 -/-) mice and wildtype (Grm2/3 +/+ ) controls. We report three novel findings. Firstly, Grm2/3 -/mice displayed a decrease in immobility-determined sleep time and an increase in immobility-determined sleep fragmentation. Secondly, Grm2/3 -/mice demonstrated heightened sensitivity to the circadian effects of light, as determined using running wheels. More specifically, they showed increased period lengthening in LL, and greater phase delays in response to type I and type II phase-shifting light pulses. Thirdly, the wheel-running activity of Grm2/ 3 -/mice was significantly perturbed under a standard 12:12 LD cycle. Table 2. Descriptive statistics for selected rest-activity parameters derived from 14 consecutive days of passive-infrared (PIR) data. In which brain region(s) do group II metabotropic glutamate receptors exert their influence on REM sleep? The amygdala is a promising candidate, since it is known to influence sleep regulation [69][70][71][72][73][74], and Grm2 is highly expressed in this region [26,28,30]. Indeed, Dong and colleagues observed reduced REM sleep in wildtype rats following microinjection of the mGlu2/3 agonist LY379268 directly into the basal amygdala [46]. Another region of interest is the thalamic reticular nucleus, which forms part of the ascending arousal system; mGlu2/3 antagonists are known to modulate the inhibitory output of this structure [51,52]. In addition to reduced sleep time, Grm2/3 -/mice exhibited an increase in sleep fragmentation. Consistent with this observation, both the mGlu2/3 antagonist LY341495 and the mGlu2 negative allosteric modulator RO4491533 increase the number of transitions between sleep and wakefulness in wildtype rats [44]. Hence, group II metabotropic glutamate receptors may be involved in the putative 'flip-flop' circuit involving the ventrolateral preoptic nuclei that is hypothesised to control sleep onset and offset [75]. The inability of Grm2/3 -/mice to remain asleep for sustained periods might also explain the overall reduction in their sleep time. 
mGlu2 and/or mGlu3 modulate the sensitivity of the circadian system to light Period length in LL was significantly greater in Grm2/3 -/mice than Grm2/3 +/+ mice, despite the fact that period length was unaffected by genotype in DD. Moreover, following both type I and type II phase-shifting light pulses, phase delays were greater in Grm2/3 -/mice than Grm2/ 3 +/+ mice. Negative masking during the type II pulse was unaffected by genotype, although this may reflect a ceiling effect, since the suppression of wheel-running activity was very high in mice of both genotypes. Consistent with our observations in Grm2/3 -/mice, light-induced phase delays in wildtype C57Bl/6J mice were enhanced by the administration of the mGlu2/3 negative allosteric modulator RO4432717. While it has previously been reported that mGlu2/ 3-modulating drugs can influence the magnitude of light-induced phase delays in wildtype hamsters [41,42], our own data represent the first demonstration of such an effect in wildtype mice. It is also significant that we obtained analogous results in a transgenic model and an acute pharmacological model; this suggests that the heightened light sensitivity of the Grm2/ 3 -/mouse is not a consequence of altered neurodevelopment. Collectively, these results demonstrate that mGlu2 and/or mGlu3 can modulate the sensitivity of the circadian system to light, which raises the possibility that mGlu2/3-targeting compounds may be useful for the treatment of circadian rhythm disorders. The relevance of these findings to schizophrenia is unclear, however. In humans, the suppression of nocturnal melatonin secretion by light is the standard measure of circadian photosensitivity; there is some evidence that this response is upregulated in bipolar disorder and seasonal affective disorder [76], but such studies have not been undertaken in individuals with schizophrenia. It is possible that the enhanced light-induced phase delays shown by Grm2/3 -/mice reflect the absence of mGlu2 and/or mGlu3 from RHT-SCN synapses. Photic entrainment cues are transmitted from the eye to the SCN via the axons of photosensitive retinal ganglion cells, which together make up the RHT [77,78]. While ionotropic glutamate receptors mediate the transduction of light information from the retina to the SCN via the RHT [79][80][81][82][83][84], metabotropic glutamate receptors may play a modulatory role at RHT-SCN synapses [40][41][42]. Consistent with the fact that group II metabotropic glutamate receptors are presynaptic inhibitory autoreceptors, mGlu2/3 agonists serve to inhibit NMDA-evoked calcium influx in SCN neurons [40], presumably via a reduction in presynaptic glutamate release. By the same logic, the antagonism or deletion of group II metabotropic glutamate receptors might be expected to increase presynaptic glutamate release at RHT-SCN synapses, explaining the enhanced light-induced phase delays witnessed in Grm2/3 -/mice and in wildtype C57Bl/6J mice following the administration of RO4432717. It should also be noted that Grm2 is expressed in the retina, in cholinergic amacrine cells [85][86][87], although its role here is not well understood. Grm3 is not expressed in the mammalian retina [85][86][87]. Origins and implications of the perturbed wheel-running activity of Grm2/ 3 -/mice Under 12:12 LD, the diurnal wheel-running rhythms of Grm2/3 -/mice were characterised by increased activity fragmentation and reduced consistency, while total activity levels were substantially reduced. 
By contrast, general home-cage activity levels, assessed with both near-infrared cameras and PIR motion detectors, were unchanged in Grm2/3 -/- mice. The consistency of home-cage diurnal rest-activity rhythms, as measured with PIR motion detectors, was also unaffected by genotype. This implies that the perturbed diurnal wheel-running rhythms of Grm2/3 -/- mice are in some way related to the use of the wheels themselves.

There are several possible, and not mutually exclusive, explanations for the significantly reduced wheel-running activity of Grm2/3 -/- mice. It could reflect heightened object neophobia (a manifestation of increased anxiety), since all mice were housed in empty home-cages prior to the introduction of running wheels. This seems unlikely, however, given that Grm2/3 +/+ and Grm2/3 -/- mice showed equal exploration of another novel object (a transparent play-tube) immediately after its introduction to the home-cage (see S1 Fig). Moreover, a previous study found no evidence of altered anxiety in Grm2/3 -/- mice in the elevated plus maze, open field, black and white alley, or novelty-suppressed feeding tasks [54]. A more straightforward explanation is a deficit in motor function. In support of this hypothesis, Grm2/3 -/- mice demonstrate mild but significant impairments in standard tests of motor coordination; they are impaired at both the accelerating rotarod task and the multiple static rods task [54]. A final possibility is that Grm2/3 -/- mice are less motivated to engage in wheel-running; perhaps the physical act of wheel-running is inherently less rewarding for Grm2/3 -/- than for Grm2/3 +/+ mice. Long-term voluntary wheel-running is known to be rewarding for rodents, and produces plastic changes in mesolimbic reward neurocircuitry [88]. Significantly, striatal dopamine is reduced in Grm2/3 -/- mice, particularly in the nucleus accumbens [89], a brain region involved in reward and motivation [90,91].

It is somewhat more difficult to explain why the PIR-derived diurnal rest-activity rhythms of Grm2/3 -/- mice were less fragmented than those of Grm2/3 +/+ mice, when their diurnal wheel-running rhythms were more fragmented. By contrast, Grm2/3 -/- mice demonstrated greater light phase activity than Grm2/3 +/+ mice in all three assays (i.e. wheel-running, PIR and video-tracking). This is likely a direct consequence of their reduced light phase sleep time. Interestingly, our results deviate slightly from those of previous experiments that have studied diurnal rest-activity rhythms in Grm2/3 -/- mice. Lyon and colleagues observed that Grm2/3 -/- mice were less active than Grm2/3 +/+ mice in cages lacking running wheels, particularly in the dark phase [55]. These results should be treated with caution, however, as activity was assessed over a relatively short period (less than three light/dark cycles), and recording began immediately after transferring the mice from the familiar home-cage environment to a novel activity-monitoring cage [55]. Furthermore, mice in this experiment were group-housed in the home-cage prior to single-housing in the activity-monitoring cages [55]. By contrast, mice were singly housed throughout the present study.

Relative merits of wheel-running, video-tracking and PIR assays

The Grm2/3 -/- mouse clearly demonstrates that the rest-activity phenotype of a mouse can vary markedly depending on the method used to assay locomotor behaviour. This is perhaps unsurprising given that wheel-running is a measure of voluntary exercise rather than general home-cage activity.
Indeed, wheel-running is a complex behaviour that may be subject to multiple influences, including arousal, motivation, anxiety, motor coordination, and sensitivity to reward [92-94]. Hence, wheel-running might not be the most appropriate assay for disease-relevant mouse models that show alterations in one or more of these parameters.

The present study is the first to use PIR motion detectors to simultaneously assess sleep and diurnal rest-activity rhythms in a transgenic mouse. The use of PIR motion detectors for sleep estimation was recently validated against simultaneous EEG-based sleep determination in wildtype mice [Brown et al., in review]. The similarity of the PIR-derived and video-derived sleep data presented here serves as further validation of this novel technique. PIR assays offer two main advantages over video-tracking assays: firstly, they enable the estimation of sleep over a longer period of time, providing a more representative view of an animal's sleep profile; and secondly, they enable the computation of specific parameters that cannot be extracted from isolated 24 h video recordings, such as period length.
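As an illustration of the kind of parameter long PIR recordings make accessible, the sketch below estimates a free-running period from a multi-day activity trace with a Lomb-Scargle periodogram. The choice of periodogram and all numbers are illustrative assumptions, since the excerpt does not state which period-estimation method was used.

```python
import numpy as np
from scipy.signal import lombscargle

def estimate_period_hours(t_hours: np.ndarray, activity: np.ndarray,
                          p_min: float = 20.0, p_max: float = 28.0,
                          n_periods: int = 400) -> float:
    """Return the circadian period (h) maximizing a Lomb-Scargle
    periodogram over candidate periods between p_min and p_max."""
    periods = np.linspace(p_min, p_max, n_periods)
    ang_freqs = 2.0 * np.pi / periods              # rad per hour
    power = lombscargle(t_hours, activity - activity.mean(), ang_freqs)
    return float(periods[np.argmax(power)])

# Synthetic demo: 14 days of 1-min binned activity with a 23.7 h rhythm.
t = np.arange(0, 14 * 24, 1 / 60)                  # time in hours
rng = np.random.default_rng(0)
act = np.clip(np.sin(2 * np.pi * t / 23.7), 0, None) + 0.2 * rng.random(t.size)
print(f"estimated period: {estimate_period_hours(t, act):.2f} h")
```

With only a single 24 h recording, the candidate periods between 20 and 28 h are indistinguishable, which is why period length cannot be extracted from isolated 24 h video sessions.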
Sleep disruption in neuropsychiatric disorders

As outlined previously, up to 80% of schizophrenia patients are afflicted by SCRD [2]. The prevalence of SCRD in schizophrenia has been attributed to the use of antipsychotic medication and the absence of social routine [10,11], but, contrary to these explanations, SCRD is seen both in medication-naïve patients [11] and in patients that follow a fixed daily routine [7]. Instead, the co-morbidity of schizophrenia and SCRD may reflect dysfunction in common brain mechanisms, such as the glutamate system, which is clearly relevant to sleep [23-25], photic entrainment [14-16] and schizophrenia [12,13]. At the receptor level, multiple lines of evidence implicate group II metabotropic glutamate receptors in sleep regulation and photic entrainment [14,39-52], while there is also a robust association between the Grm3 locus and schizophrenia [32-36]. The observation of reduced and fragmented sleep in Grm2/3 -/- mice implies, but does not prove, that abnormal group II metabotropic glutamate receptor signalling may be relevant to sleep disruption in schizophrenia. Having said this, it is important to acknowledge that the Grm2/3 -/- mouse is not a 'disease model' in itself; although there is evidence for the altered dimerisation of mGlu3 in schizophrenia [53], neither mGlu2 nor mGlu3 is completely absent in the brains of schizophrenia patients.

Group II metabotropic glutamate receptors could be relevant to SCRD in neuropsychiatric disorders besides schizophrenia. Grm3 is also associated with bipolar disorder [95,96], and, in contrast to schizophrenia, reduced REM sleep is very common in this condition [97]. This is significant given that REM sleep is reduced in wildtype rats after the administration of various mGlu2/3-modulating drugs [43-50]. Moreover, there is evidence that circadian photosensitivity is elevated in bipolar disorder [76], just as it is in Grm2/3 -/- mice.

Conclusions

The results of the present study add to a growing body of evidence implicating group II metabotropic glutamate receptors in both photic entrainment and sleep regulation pathways. Consistent with the effects of a wide range of mGlu2/3-modulating drugs, we provide the first evidence that the genetic ablation of mGlu2 and mGlu3 increases sleep fragmentation and reduces sleep time in mice. Our findings are also the first to demonstrate that the genetic ablation of mGlu2 and mGlu3 heightens the sensitivity of the circadian system to light. These observations are compatible with our hypothesis that SCRD and specific neuropsychiatric disorders share common mechanistic origins [10,11], and tentatively suggest that altered group II metabotropic glutamate receptor signalling could be relevant to sleep disruption in schizophrenia, and to sleep and circadian disruption in bipolar disorder. In addition, we report that the rest-activity phenotype of Grm2/3 -/- mice is critically dependent on the method used to assay locomotor behaviour. We argue that wheel-running might not be the most appropriate assay for disease-relevant mouse models with altered arousal, motivation, anxiety, motor coordination and/or sensitivity to reward, since each of these parameters has the potential to influence wheel-running activity.

Supporting Information

S1 Fig. Home-cage object interaction is unaltered in Grm2/3 -/- mice. Genotype has no effect on interaction with a novel object during the first 10 min (A) or first hour (B) after its introduction to the home-cage (at ZT16), or during the entirety of the subsequent dark phase (C). The object was a transparent acrylic play-tube. Object interaction was defined as the amount of time spent in contact with the play-tube (including time spent within it). (TIF)

Author Contributions

Conceived and designed the experiments: DP AJ LAB SNP. Performed the experiments: DP AJ. Analyzed the data: DP AJ. Contributed reagents/materials/analysis tools: SG LAB. Wrote the paper: DP AJ LAB SKET SH SG PJH DMB RGF SNP.
Seminal vesicle inter- and intra-fraction motion during radiotherapy for prostate cancer: A review

One of the common treatment modalities for prostate cancer (PCa) is external-beam radiotherapy [1]. Considering the relatively low alpha/beta ratio for PCa [2,3], hypofractionation could yield higher tumour control rates with acceptable genitourinary and gastrointestinal toxicity rates [4]. Dose escalation has shown improved treatment outcomes [5], and the use of modern image-guidance techniques, like fiducial markers, has lowered the margin needed around the prostate and thereby lowered side effects [6,7]. Furthermore, multiple randomized trials on low and favourable intermediate risk PCa reported non-inferiority regarding tumour control and toxicity rates of moderate hypofractionation [8,9] and ultra-hypofractionation [10] compared to conventional fractionation schemes. Here, ultra-hypofractionation is defined as a dose per fraction of 5 Gray (Gy) or more. A next logical step would be the use of ultra-hypofractionation for high-risk patients, but this is challenging as the entire seminal vesicles (SV) are normally included in the target volume [11].

The SV belong to the male reproductive system and are about 3-5 cm long and 1 cm in diameter [12]; however, their exact shape and size can differ substantially. The SV are attached bilaterally to the prostate on the cranioposterior side, and they lie superior to the rectum, inferior to the fundus of the bladder and posterior to the prostate [12]. The motion of the SV, similarly to the prostate, is caused by changes in bladder and rectal filling status. The SV can show tumour involvement [11], the probability of which can be predicted with the use of nomograms [13,14]. Recently, the addition of MRI to these clinical prediction tools was shown to increase the robustness of these models [15-17].

Due to their inter- and intra-fraction motion, the SV require a relatively large planning target volume (PTV) margin [18-21], which in combination with a high fraction dose could result in unacceptable dose to the organs at risk and thereby higher toxicity rates. A number of papers have recently been published showing the feasibility of ultra-hypofractionation (5 fractions of 7 Gy or 7.25 Gy) in small groups of patients including the SV in the clinical target volume (CTV), using different treatment modalities [22-24]. To safely introduce ultra-hypofractionation for high-risk PCa patients, strategies to optimize PTV-margins around the SV are needed; understanding the treatment uncertainties that contribute to a PTV-margin is crucial in this process. The last review on this topic was published in 2001 [25]; since then, several articles have been published with methodologies that are more in line with the current technological advancements in PCa treatment. Therefore, this article critically reviews all relevant existing literature since 2001 on the inter- and intra-fraction motion of the SV during external-beam radiation of PCa, with the aim of improving the understanding of these treatment uncertainties, which is needed to design adaptive treatment strategies for PTV-margin reduction.

Search strategy

In collaboration with the Erasmus MC Medical Library, Embase, Medline, Web of Science Core Collection, the Cochrane CENTRAL register of trials and Google Scholar were searched for relevant publications. This search was first performed on the 7th of February 2020 and last updated on the 18th of January 2021.
There were no restrictions regarding date of publication or language in the initial search. See Appendix A for the detailed search queries.

In- and exclusion process

These searches yielded 646 unduplicated results. All articles before 2001 were excluded, as the last review on this subject dates from 2001 [25] and the image guidance for prostate treatments has changed significantly since then. Using EndNote (version X9, build 12062), these results were screened on title/abstract and subsequently on full text. This was done by VB, with MM as second reviewer. Publications that were not written in the English language, as well as publications without a specific record of SV motion, deformation, volume changes and/or PTV-margins, were excluded. Publications with abstracts referring to quantitative values for motion, deformation, volume changes and/or margins of the prostate and the SV were eligible for full-text screening. This yielded 170 publications. Translations, rotations, deformations, volume changes and/or margins of the entire SV had to be reported for inclusion in the final review. Studies in which the prostate and the SV are combined in a single CTV or PTV and analysed as such were excluded, as well as studies which only incorporated part of the SV. After screening by VB and MM, one article on volume changes was added outside of this search. In total, 22 publications were included in this review.

Data extraction

The general data extracted from the publications, if provided, were the author and year of publication, number of patients, number of scans (planning and repeat scans), average patient age, fractionation scheme and tumour stage. The extracted data regarding the SV motion were image modality, specific inter- or intra-fraction motion, reference point of motion, type of image registration used, rectal and bladder preparation, motion surrogate used (e.g. centre of mass (COM) of the SV), and finally the SV displacement in the form of the mean, the standard deviation (SD) and the systematic and random errors, in mm or degrees, along the three principal axes. If present, the anisotropic PTV-margins were also extracted; where they were not reported, we used the same margin formalism to calculate PTV-margins from the published error data. To compare publications reporting systematic errors to those reporting means of motion, for the latter the standard deviation of the group mean was used as the systematic error. A limitation of applying this margin recipe to the SV is that rotations and deformations are not included. Another limitation is that the recipe is only valid for conventional fractionation schemes; the margin has to be increased when hypofractionation is used. Therefore, the margins in this review only indicate a lower limit. Table 1 summarizes the error parameters used in this study.
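The margin formalism referenced here (and as "Eq. 1" below) is not reproduced in this excerpt. As a reference point, the following is a hedged LaTeX reconstruction of the standard definitions matching Table 1's variable list, together with the widely used population-based van Herk recipe [48]; the exact notation of the original is assumed.

```latex
% Hedged reconstruction of the error formalism (variables as in Table 1):
% x_i = individual measurements, n = measurements per patient,
% N = patients, m_p = per-patient mean, SD_p = per-patient SD.
\begin{align*}
  m_p &= \frac{1}{n}\sum_{i=1}^{n} x_i, &
  M &= \frac{1}{N}\sum_{p=1}^{N} m_p, \\
  \Sigma &= \sqrt{\frac{1}{N-1}\sum_{p=1}^{N}\bigl(m_p - M\bigr)^2}, &
  \sigma &= \sqrt{\frac{1}{N}\sum_{p=1}^{N}\mathrm{SD}_p^2}.
\end{align*}
% Population-based van Herk PTV-margin recipe (presumably Eq. 1), giving
% at least 95% of the prescribed dose to the CTV for 90% of patients:
\begin{equation}
  M_{\mathrm{PTV}} = 2.5\,\Sigma + 0.7\,\sigma \tag{1}
\end{equation}
```

Here Σ, the SD of the per-patient means, is the systematic error and σ, the root-mean-square of the per-patient SDs, is the random error; this is consistent with the review's practice of converting group-mean SDs into systematic errors where studies report only means.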
Results

The number of publications regarding inter-fraction motion, volume changes and/or margins exceeds those reporting intra-fraction motion, volume changes and/or margins by 19 to 4; one article describes both. The number of patients ranges between 9 and 90, and the number of scans used for data collection ranges between 21 and 771. Multiple image modalities (CBCT, CT or MRI) have been used, as well as multiple points of reference (bony anatomy, prostate or first image in series) to which the motion was measured. A range of protocols to control bladder and rectal filling, such as the use of laxatives [28,30,45,47] or instructions to drink a certain amount of water before treatment [28-30,32,34,38,47], have been employed in the studies. Table 2 summarizes the 22 articles included in this review.

Fig. 1a shows the mean values for inter-fraction translation, M_inter, of the SV. Seven out of nine articles used a prostate match (matched on fiducials or the COM of the prostate) to obtain these values. The means were derived from relative values, i.e. negative and positive directions of translation. As expected from unbiased studies, M_inter is below or around 1 mm, with the exception of two articles that reported values up to −3.3 mm [35,40]. Fig. 1b shows the mean intra-fraction translation of the SV, M_intra. The reported intra-fraction translation depends strongly on the reference point and shows values of −1.5 mm up to 7 mm. Only two publications [35,44] reported the intra-fraction translation relative to the prostate, with values ranging from −0.4 to 1.2 mm.

The systematic error for inter-fraction translation, Σ_inter, is shown in Fig. 1c. These systematic errors vary from 1 to 7 mm, with only two publications reporting values above 4 mm. Higher values for systematic errors were reported in the anteroposterior (AP) direction, 1.7-7.3 mm, and the craniocaudal (CC) direction, 1.3-4.5 mm, compared to the left-right (LR) direction, 1.0-2.0 mm with one outlier of 3.6 mm [27]. The systematic errors obtained from a match on bony anatomy [30,42] appear to be larger, all showing values >3 mm, than the systematic errors obtained from a prostate match, of which 4 out of 6 publications show values <3 mm. Fig. 1d shows the systematic errors of intra-fraction translation, Σ_intra. The range shown, 1.6-4.1 mm, is smaller than that of Σ_inter, 1.4-7.3 mm. However, these datasets use different reference points: Σ_inter is reported relative to the prostate as well as other reference points, whereas Σ_intra is only reported relative to reference points other than the prostate.

Besides translations, rotations also have an impact on treatment uncertainty. Two studies were identified in which SV rotations, relative to the prostate, were analysed. First, van der Burgt et al. [46] reported on inter-fraction rotations of the whole SV after a prostate match. Three groups of 30 patients, with 8 CBCTs each, were divided by level of SV invasion: none, minimal (<5 mm) and extensive (>5 mm). Means, systematic (Σ_rotation) and random (σ_rotation) rotations were given around the LR, CC and AP axes. The mean rotations around the LR-axis for the minimal and extensive groups, 2.0° and 2.3° respectively, and the CC rotation for the extensive group, 1.0°, were significantly different from 0. The systematic and random errors of the LR rotations were found to be higher, ranging from 5.0° to 6.7°, than the rotations around the CC and AP axes, ranging from 1.8° to 2.4° and 1.6° to 2.7° respectively. Two rotations were significantly lower in the extensive group compared to the no-invasion group: Σ_rotation around the AP-axis (1.6° vs 2.3° respectively) and σ_rotation around the LR-axis (5.2° vs 6.3°). Secondly, de Boer et al. [28] analysed inter-fraction rotations around the LR-axis for 20 patients with repeat CBCTs. They found a mean rotation around the LR-axis of −0.4°, a Σ_rotation of 7.2° and a σ_rotation of 6.4°. These rotations were significantly correlated (p < 0.001) with prostate translations in the CC and AP directions and with prostate rotations around the LR and AP axes.
Apart from translations and rotations, deformations are also considered a source of uncertainty in the treatment of the SV. Deformations of the SV were discussed in five publications, of which one reported intra-fraction deformation and one reported both intra- and inter-fraction deformation. The deformations were measured after a prostate match in all cases. Sheng et al. [44] described intra-fraction deformations of 15 patients with 5 pairs of CBCTs (before and after treatment). The mean edge-to-edge distances in millimetres (with 95% data range) for the left, right, cranial, caudal, anterior and posterior borders were all reported to be <1.1 mm. Li et al. [35] reported on both intra- and inter-fraction deformation. Similarly, all intra-fraction deformations were reported to be <1.1 mm. In contrast, the inter-fraction deformations showed values up to 2.8 mm (caudal border) and −2.9 mm (posterior border). Inter-fraction deformations were studied by Hollander et al. [33] in 10 patients with weekly verification scans (66 scans in total). They found mean edge-to-edge displacements <0.6 mm for all borders, except for the anterior border with a deformation of 2.4 mm (−3.9-8.8 mm). Van der Wielen et al. [47] reported inter-fraction deformations for 21 patients with 3 repeat CT scans. Standard deviations along local surface normals for the lateral SV, SV tip, anterior SV and posterior SV were 1.7 mm, 2.3 mm, 2.4 mm and 2.6 mm respectively. Lastly, Mayyas et al. [39] studied 10 patients with 20 CBCTs and looked at the percentage of CBCTs in which deformation vector fields exceeded 3, 5 or 10 mm; for both the 3 and 5 mm thresholds, the posterior […]

[Table 1. Definitions of the error parameters used: x = individual measurements; n = number of measurements per patient; N = number of patients in the study; m_p = mean per patient; SD_p = standard deviation per patient.]

The anisotropic PTV-margins, including both the margins reported in the publications and the margins calculated by us using the van Herk formula, are shown in Fig. 1f. PTV-margins based on systematic and random errors are reported to be around 8 mm, a value widely used in clinical practice for the SV [19,20]. Larger values (>9 mm) were found in 2 out of 8 articles, with reported PTV-margins up to 10.5 mm [38] and 14.9 mm [42]. When the systematic and random errors are measured relative to the bony anatomy, the PTV-margins are larger than those based on a prostate match. Due to the absence of published random errors, no margins correcting purely for intra-fraction motion can be reported. Fig. 2 shows the different proposed PTV-margins, maximum stated values applied isotropically, in a typical prostate patient case.

To visualize the effect of preparation protocols on the motion of the SV, the inter-fraction mean, systematic error, random error and PTV-margin were plotted per rectal and bladder preparation protocol (Figs. C.1 and C.2, Appendix C). From these figures, no apparent trend between rectal and bladder preparation and the magnitude or direction of motion was observed. Similarly, the effect of rectal and bladder preparation on intra-fraction motion is inconclusive.

Discussion

This review focuses on understanding the inter- and intra-fraction motion of the SV during external-beam radiation therapy of PCa and the PTV-margins needed to correct for this motion. This is required to devise safe PTV volume reduction strategies to enable the ultra-hypofractionated treatment of high-risk PCa.
The literature reported in this review shows an extensive variety in the methods used for obtaining and reporting motion, making a secondary analysis or the generation of average values impossible.

Inter- vs intra-fraction translations

The mean inter-fraction translation, M_inter, of 7 out of 9 publications is below 1.5 mm, suggesting a limited group mean error. Two publications report means above 2 mm, both of which report on a small set of 10 patients [35,40]. For the intra-fraction translation, 2 of the 3 available publications report means, M_intra, up to 1.5 mm. Regarding the study [32] reporting means of up to 7 mm, it remains unclear whether relative or only absolute values of translation were reported, as well as which reference point was used. Overall, the values of M_inter and M_intra are comparable and in the order of 1 mm, which would be expected from unbiased data. However, especially on intra-fraction translation, the number of publications is still limited, with only four studies.

The systematic inter-fraction error, Σ_inter, shows values between 1 and 3.5 mm. Note that the Σ_inter values reported by Chin et al. [27] and Frank et al. [30] were derived from the SD of the M_inter that was given. There are only two reports that discuss the systematic errors of intra-fraction translation, Σ_intra [29,32]. Both articles report motion of 1.5-4 mm. These values for Σ_intra were derived by us using the SD of the group mean. Comparing reference points, anisotropic PTV-margins of 2-5 mm and 4-10 mm have been reported after a prostate vs a bone match, respectively. Similarly, Meijer et al. [18] reported an isotropic margin of 13 mm when matched on bony anatomy compared to 8 mm when matched on the prostate. This indicates that there is at least some level of correlation between the inter-fraction motion of the prostate and that of the SV.

Similar to the prostate, the motion of the SV is caused by changes in rectal and bladder distention. However, the reported levels of correlation between prostate and SV motion vary. All publications note that the SV can move semi-independently from the prostate and that the amplitude of motion is larger. Smitsmans et al. [45] reported that as much as 42% of the AP SV inter-fraction translation was correlated to the LR prostate gland rotation. Similarly, Liang et al. [36] showed that the inter-fraction translation of the prostate and the SV in the AP direction was correlated (R² of 0.7), both driven by rectum and bladder changes. No correlations were found for the other directions. De Boer et al. [28] show an inverse correlation between the LR rotations of the SV and the prostate LR rotation. A large inter-patient variety in the correlation of intra-fraction SV and prostate translation was shown by Gill et al. [32]. The reported Pearson correlation coefficients, R, ranged from −0.23 to 0.82, with 7 out of 10 patients showing no linear correlation trend. Consequently, image-guided strategies that only focus on the prostatic gland will not fully compensate for SV motion.

Gill et al. [32] showed that there appears to be a plateau in SV displacement that was reached 10 min after starting radiation delivery. No significant increase in displacement was seen after this time. It is unclear how this corresponds exactly to on-table time. De Muinck Keizer et al. [49] report that the extent of intra-fraction motion of the prostate is reached after 30 min of on-table time.
Rotations, deformations and volume changes

Means of inter-fraction rotations were discussed by two publications [28,46], most of which were <1°, as expected for unbiased data, with only three rotations in one publication [46] significantly different from 0. Σ_rotation and σ_rotation around the LR-axis were also reported in both articles and range between 5.0° and 7.2°. Van der Burgt et al. [46] reported that these rotations around the LR-axis were larger than the rotations around the CC and AP axes, with maximum rotations of 2.4° and 2.7° respectively. This is in line with Hoogeman et al. [50], who described that prostate+SV rotations were largest around the LR-axis, with a Σ_rotation and σ_rotation of 3.6° and 5.1° respectively. These rotations were significantly correlated to differences in rectal volume (p < 0.0001) [50].

Mean deformations after a prostate match were mentioned by five publications [33,35,39,44,47]. All five reported the highest deformations in the anterior, caudal and posterior borders. This is somewhat in line with the largest translations being along the AP and CC axes and the largest rotations being around the LR-axis, and can likewise be explained by rectal and bladder volume changes [47]. Mean intra-fraction deformations, <1.1 mm [44], appeared to be smaller than the mean inter-fraction deformations, <3 mm [33,47]. This was also described by Li et al., who reported both [35]. Mayyas et al. appear to report higher deformations than the previously mentioned articles (1% >10 mm) [39]. However, this can be explained by the fact that only Mayyas et al. [39] did not use means to report their deformations. Important to note here is that all deformations mentioned were measured after prostate matching. No residual deformations after correction for SV translation were described.

The publications reporting on the magnitude of volume changes show different results, which can be partially explained by the different experimental methods. Where Miralbell et al. [41] used the planning scan as reference, Bairstow et al. [26] used the mean SV volume as a reference. The latter also only reported on two extreme cases from their population, whereas Miralbell et al. [41] reported on all 9 patients. Liu et al. [37] reported a study in which 28 patients with at least 15 follow-up CT scans were analysed. The volume, compared to the planning scan, decreased significantly in 3 cases and increased significantly in one case. In contrast, Frank et al. [31] found no significant volume changes in 15 patients with repeated CT-on-rails images. As these varying results suggest, no consensus regarding the extent of these volume changes has been reached, and further research is needed to clarify the geometrical and clinical effect of these volume changes [26].

PTV-margins to account for SV motion

The three studies reporting PTV-margins [34,40,42] all used the van Herk margin recipe (Eq. 1) [48], except for one article which used an alternate version for a 2D dose distribution: 2.15 Σ + 0.7 σ [45]. This review compares these reported PTV-margins with margins we calculated from the reported systematic (Σ) and random (σ) errors using the same van Herk margin recipe [48]. One publication reports both systematic and random errors and a PTV-margin; the margins recalculated by us are very similar to those reported [42] (i.e. 4.4 vs 4.4 mm in LR, 10.0 vs 9.9 mm in CC and 7.4 vs 7.5 mm in AP respectively). Most publications included in this review report anisotropic PTV-margins for the SV of approximately 8 mm (see Fig. 2).
This value is also used in multiple studies looking at the effect of margins on target coverage. Meijer et al. [18] showed that an isotropic PTV-margin of 3 mm for the prostate and 8 mm for the SV ensures 95% CTV coverage for 90% of the patients using a prostate fiducial match. Mutanga et al. [19] reported that an isotropic 8 mm expansion for the SV resulted in clinically acceptable coverage. Thörnqvist et al. [21] found that an isotropic PTV-margin of 7 mm resulted in 95% coverage of the target volume for 18/19 patients. Stenmark et al. [20] looked at the coverage of the proximal 1 cm as well as of the entire SV: for 95% geometrical coverage of the CTV in 90% of the patients, 5 mm and 8 mm isotropic margins were required when treating the partial SV and the full SV respectively. Two publications reported SV margins >9 mm, one of which used a bony anatomy match to register the SV motion [42]. Using a prostate match, Mak et al. [38] reported larger margins, i.e. 10 mm in the CC direction, possibly limited by the 5 mm CT slice thickness in this direction. Sheng et al. [44] reported a 5 mm isotropic margin around the SV to ensure 95% coverage in 90% of the fractions. However, this margin assumes intra-fraction motion tracking of the prostate. The literature on intra-fraction motion of the SV is still too limited to extract a PTV-margin based on intra-fraction motion alone.

Translation consistently appears to be smallest in the LR direction (Fig. 1a-f). This offers opportunities for anisotropic PTV-margins. Smitsmans et al. [45] reported margins for the SV of 4.6 mm and 7.6 mm in the LR and AP directions respectively, not taking into account deformation and rotation. In addition, the rotations of the SV are largest around the LR-axis, which will mostly contribute to motion in the CC and AP directions. Most dosimetric studies report isotropic PTV-margins in the order of 8 mm [18-21]. Margin reduction in the LR direction might have a limited clinical impact, considering that most toxicity comes from the bladder and rectum, which lie superior and inferior to the SV respectively.

Influencing factors

Bladder and especially rectal volume changes are known to play a significant role in prostate inter- and intra-fraction motion [25]. For the SV, similar patterns of correlation between rectal and bladder filling and SV motion have been observed [27,29,30,38,51]. However, Fig. C.1 in Appendix C shows that different efforts to control the rectal filling status do not have a clear effect on the amplitude of inter- and intra-fraction motion. Similarly, no apparent trend is visible in the amplitude of SV motion with respect to bladder preparation (Fig. C.2, Appendix C). The absence of a correlation between rectal and bladder filling protocols and the amplitude of systematic and random errors in our study can be, at least partially, explained by their mixed success rate in effectively controlling the filling status, as shown by a review on this topic [52].

Only one study reported on the effect of tumour invasion on SV motion. Van der Burgt et al. [53] compared the differences in inter-fraction motion between patient groups with different levels of tumour invasion of the SV. The random displacements in the group with extensive invasion were statistically significantly lower than those of the minimal and the no-invasion groups. However, this reduction was small and the SV motion remained considerable.
Limitations

There are several limitations to the van Herk margin recipe (Eq. 1) that are relevant when applying the margin formula to the SV. First of all, only translations are taken into account; rotations, deformations and volume changes, which all contribute to errors in the treatment of the SV, are ignored. Studies that do include rotations show that rotational errors can cause a loss in tumour control probability [54], especially for non-spherical targets [28]. An example can be found in de Boer et al. [28], who state a margin of 11.6 mm including rotational errors of the SV and 8.2 mm when correcting for them. Including rotations will lead to anisotropic and location-specific margins, as the margin will depend on the distance to the rotation axes, generally assumed to lie near the apex of the prostate [55,56]. As the correlation between the prostate and SV rotations is limited, there is a residual deformation of the SV in the order of 2-3 mm SD that needs to be taken into account [30]. Hence, deformations, rotations and volume changes that are not fully corrected for before the start of treatment lead to an increased PTV-margin to ensure CTV coverage, and the van Herk recipe will only give a lower limit of the margin required. Another limitation is that the van Herk margin is valid for conventional fractionation. To translate the results from the referenced publications to a hypofractionated treatment scheme, the margin will have to be increased: in a treatment consisting of only a few fractions, the average random error might deviate from zero, resulting in an additional systematic error [55,57,58]. As an indication, the PTV-margin will have to be increased from 8 mm to 8.5 and 9.2 mm for a 5- and a 2-fraction treatment respectively, based on a calculation using equal systematic and random errors.
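To make the few-fraction indication concrete, the sketch below implements one simple way such numbers can be obtained: a share σ²/N of the random variance is promoted to systematic variance when only N fractions are delivered. The 8 mm baseline with Σ = σ fixes Σ = σ = 2.5 mm; the conversion rule is an assumption for illustration, and the fact that it yields roughly 8.6 and 9.4 mm rather than the quoted 8.5 and 9.2 mm suggests the original calculation used a more detailed model [55,57,58].

```python
import math

def van_herk_margin(sys_err_mm: float, rand_err_mm: float) -> float:
    """Population-based van Herk PTV-margin recipe (Eq. 1)."""
    return 2.5 * sys_err_mm + 0.7 * rand_err_mm

def few_fraction_margin(sigma_sys: float, sigma_rand: float, n_fx: int) -> float:
    """Hypothetical few-fraction correction: with n_fx fractions the mean
    of the random error no longer averages to zero, so sigma_rand**2 / n_fx
    is treated as extra systematic variance (an assumed, simplified rule)."""
    eff_sys = math.sqrt(sigma_sys**2 + sigma_rand**2 / n_fx)
    return van_herk_margin(eff_sys, sigma_rand)

# Baseline: equal errors giving the widely quoted 8 mm SV margin.
sigma = 8.0 / (2.5 + 0.7)          # = 2.5 mm, since margin = 3.2 * sigma
print(f"conventional: {van_herk_margin(sigma, sigma):.1f} mm")
for n in (5, 2):
    print(f"{n} fractions: {few_fraction_margin(sigma, sigma, n):.1f} mm")
```

Whatever the exact rule, the direction of the effect is the same: the fewer the fractions, the less the random error averages out, and the larger the margin required.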
Possibilities for margin reduction

With conventional image-guided radiotherapy (IGRT) [59], PTV-margin reduction for the SV has been achieved, but the margin remains substantial at 8 mm [18-20]. Further margin reduction with IGRT might be difficult to achieve, and therefore ultra-hypofractionation for patients with a target volume including the SV remains challenging.

Correcting for inter-fraction motion

Inter-fraction motion can be corrected offline and online by adaptive radiation therapy (ART). ART for the prostate has been extensively studied and reported [60-66]. However, only a limited number of publications on ART for the SV exist. Xia et al. reported on a library-of-plans approach [67], whereas de Boer et al. [28] used a hybrid registration technique, matching on prostate markers followed by a soft-tissue registration of the SV. Both showed promise for possible margin reduction around the SV. However, most recent research regarding prostate ART and margin reduction still focuses on the prostate only and is fuelled by the development of MR-guided radiation treatment systems [68,69].

Correcting for intra-fraction motion

In contrast to inter-fraction motion, intra-fraction motion is more complex to take into account. A straightforward solution is to minimize the fraction duration, as the displacement increases with time [29,32,35]. Intra-fraction motion correction of the prostate has been demonstrated using Calypso 4D tracking [70], real-time tracking using the CyberKnife [71], a library of plans [72], and soft-tissue gating using the MRIdian [69]. However, the challenge remains how to apply intra-fraction motion management for adjacent targets, in this case the prostate and the SV, that move semi-correlated. Beam-per-beam online replanning, with all its challenges, could pose a solution [68,73].

Conclusion

This extensive literature review shows that the inter- and intra-fraction motion of the SV is substantial and largely uncorrelated with prostate motion. The main factors influencing prostate and SV motion are differences in rectal and bladder filling. Strategies to control rectum and bladder filling status, and thereby reduce treatment uncertainties, appear to lack effectiveness. When calculating PTV-margins for the SV, translations, rotations and deformations need to be taken into account, as they can be substantial even after an initial match on the prostate. To reduce PTV-margins around the SV, their inter- and intra-fraction motion needs to be adequately accounted for. Further research is required to quantify the safety and feasibility of PTV-margin reduction for the SV, in particular in the context of ultra-hypofractionation for high-risk prostate cancers, which will be the subject of further studies in our institute.

Support

This work was in part funded by a research grant from Accuray Inc., Sunnyvale, USA.

Conflicts of interest

This work was in part funded by a research grant from Accuray Inc., Sunnyvale, USA. Erasmus MC Cancer Institute also has a research collaboration with Elekta AB, Stockholm, Sweden and Varian Medical Systems, Inc., Palo Alto, USA.
Trifunctional Epoxy Resin Composites Modified by Soluble Electrospun Veils: Effect on the Viscoelastic and Morphological Properties

Electrospun veils from copolyethersulfones (coPES) were prepared as soluble interlaminar veils for carbon fiber/epoxy composites. Neat resin samples were prepared by impregnating coPES veils with the unmodified resin, while dry carbon fabrics were covered with electrospun veils and then infused with the unmodified epoxy resin to prepare reinforced laminates. The thermoplastic content varied from 10 wt% to 20 wt%. The TGAP epoxy monomer showed improved and fast dissolution at all the temperatures tested. The unreinforced samples were cured first at 180 °C for 2 h and then post-cured at 220 °C for 3 h; these samples showed a high dependence on the curing cycle. Carbon-reinforced samples showed significant differences compared to the neat resin samples in terms of both viscoelastic and morphological properties.

Introduction

Epoxy resins are widely used as highly crosslinked materials in different fields ranging from civil to automotive and aerospace. Epoxy resins are the preferred choice in the aerospace field because of their high glass transition temperatures combined with their high stiffness and their solvent resistance [1-5]. However, the use of epoxy composites for primary aircraft structures was initially limited by the inherent brittleness of the unmodified epoxy resins. These limitations were overcome by the development of several toughening strategies. Among the different strategies identified, the use of engineering thermoplastics as toughening agents was the most successful. Cycom™ 977-2 and Hexcel 8552 are two examples of structural toughened prepregs based on the addition of engineering thermoplastics. Both systems are qualified for use in primary aircraft structures. The two systems, after curing, present glass transition temperatures of 212 °C and 200 °C, respectively. However, most existing toughening strategies have been designed for prepreg processing only. Recently, the development of infusion-based techniques stressed the need for an alternative toughening strategy that could avoid the increase in resin viscosity found with thermoplastic blending. Cytec Engineered Materials addressed this need by developing the Priform™ technology, which is based on the use of soluble melt-extruded fibers made from their proprietary thermoplastic toughening agents [6]. This approach led to the development of materials manufactured by infusion but with the same properties as Cycom™ 977-2. The use of soluble filaments dispersed in epoxy resins was proposed by using electrospun thermoplastic fibers as an alternative to melted filaments or films. Several studies have been reported on the use of soluble electrospun fibers [7-10], but the effects of epoxy blend composition (i.e., epoxy monomer type and hardener type) on the fiber dissolution have not been analyzed thoroughly. Since different epoxy monomers and several types of hardeners are available, it is possible to produce epoxy formulations with different properties [11]. Therefore, the effect of formulation components, including multifunctional epoxy monomers, is of high practical interest. Multifunctional epoxies have higher crosslink densities than difunctional monomers [12,13], resulting in higher strengths, stiffness, and glass transition temperatures (Tg).
T300/914, for example, is a commercial prepreg system consisting of carbon fibers embedded within a blend of tetraglycidyl 4,4′-diaminodiphenylmethane (TGDDM) and triglycidyl-p-aminophenol (TGAP) hardened with dicyandiamide (DICY) and/or diaminodiphenylsulfone (DDS), to which a small percentage of polyethersulfone (PES) is added [14]. Hourston et al. [15] reported the optimization of toughness for epoxy matrixes by mixing di- and tri-functional epoxide monomers. The effect of the trifunctional epoxy prepolymer TGAP on the cure kinetics of a thermoplastic-modified epoxy blend was reported by Bonnaud et al. [16]: the TGAP monomer increased the cure rate of the difunctional epoxy-based formulation. In a recent paper, we showed the effect of blending different ratios of TGAP and DGEBA on the properties of epoxy blends modified with copolyethersulfones [11]. In a previous paper, the use of copolyethersulfone electrospun veils with different molar masses to modify epoxy was analyzed [17]; however, that study was limited to the use of DGEBA as the epoxy monomer. In the present paper, we extend the study by using the trifunctional epoxy monomer triglycidyl-p-aminophenol (TGAP) in substitution of the difunctional diglycidyl ether of bisphenol A (DGEBA).

Materials

The epoxy resin used was triglycidyl-p-aminophenol (TGAP) (Huntsman, Basel, Switzerland) with an equivalent weight of 101. The curing agent was 4,4′-methylene bis(2,6-diethylaniline) (MDEA) supplied by Lonza, Basel, Switzerland. The thermoplastic polymer was a copolyethersulfone, synthesized by the authors, with a number-average molar mass of 9000 g/mol (coPES 9k) and bearing amino-phenol end groups. The details regarding the synthesis of coPES are reported elsewhere [18]. Plain carbon fabrics (C-200T from Prochima, Milan, Italy) with an areal weight of 200 gsm (grams per square meter) were used for the preparation of the reinforced samples.

Electrospun Veil Preparation

In the first step of coPES 9k veil production, 5.00 g of the polymer were dissolved in a solvent mixture (5.00 mL of N,N-dimethylformamide (DMF) and 5.00 mL of toluene) and stirred for 2 h at 40 °C. After complete polymer dissolution, the solution was placed in a 3-mL medical syringe and electrospun at a flow rate of 60 µL/min, an applied voltage of 21 kV and a 10-cm needle-collector gap onto a rotating drum (200 rpm). The polymer fibers were spun directly onto the carbon fabric, which was stuck on the rotating collector using conductive carbon tape. To obtain the desired wt% ratios of toughener in the composite, a selected gsm amount of coPES 9k veil was placed in the interlaminar regions according to the areal density of the carbon fabric, as illustrated by the calculation below.
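As a rough illustration of this lay-up arithmetic (not taken from the paper), the sketch below computes the veil areal weight needed per interlayer for a target thermoplastic fraction of the matrix. The assumed resin mass fraction of the laminate (40 wt%, a typical infusion value) and the definition of the wt% target (coPES relative to the total matrix, i.e. resin plus coPES) are both assumptions for illustration.

```python
# Hypothetical sketch of the veil areal-weight calculation; the 40 wt%
# resin fraction and the wt%-of-matrix definition are assumptions, not
# values stated in the paper.
def veil_gsm_per_interlayer(fabric_gsm: float, n_plies: int,
                            resin_mass_frac: float, copes_wt_frac: float) -> float:
    fibre_gsm = fabric_gsm * n_plies                      # total carbon, g/m^2
    laminate_gsm = fibre_gsm / (1.0 - resin_mass_frac)    # carbon + matrix
    matrix_gsm = laminate_gsm - fibre_gsm                 # resin + coPES
    copes_gsm = matrix_gsm * copes_wt_frac                # total veil mass
    return copes_gsm / (n_plies - 1)                      # per interlaminar region

# Six 200-gsm plies, 40 wt% matrix, 10 wt% coPES in the matrix:
print(f"{veil_gsm_per_interlayer(200, 6, 0.40, 0.10):.1f} gsm per interlayer")
```

Under these assumptions the 10 wt% target corresponds to a veil of about 16 gsm per interlayer, which is in the usual range for electrospun interleaves.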
Neat Resin Preparation

Cured neat resin samples were prepared to study the effect of the neat matrix without carbon fiber reinforcement. For this purpose, a selected amount of electrospun veil was laid up in an aluminum dish. In a second step, the unmodified epoxy resin (preheated at 130 °C for 5 min) was poured onto the veils to impregnate them. The dish was transferred to an oven set at 130 °C and kept at this temperature for 30 min. The oven temperature was then increased at 2 °C/min up to 180 °C and held at that value for 3 h. This cure cycle was selected as it is commonly used for aerospace-qualified prepreg systems. After this curing cycle, the samples were also post-cured at 220 °C for 3 h to fully develop their network. At the end of the curing cycle, the samples were left to cool down slowly to room temperature.

Composite Laminate Production

Six layers of dry carbon fabric with electrospun veils were stacked on a steel plate. Around the perimeter of the layered stack, an adhesive silicone tape was placed to provide a suitable seal, and a flexible vacuum bag was placed on top. An inlet tube and an outlet tube were put inside the vacuum bag. The inlet tube was connected by a valve to a pot filled with unmodified epoxy resin, while the outlet tube was connected to a vacuum pump. The vacuum was applied while the inlet valve was closed to compact the layers and to remove excess air. The steel plate with all the stacked layers was placed in an oven preheated to 130 °C. The epoxy resin was vacuum infused into the stacked layers, which were maintained at 130 °C under a constant vacuum (75 cmHg). The temperature was kept at 130 °C for 30 min and then increased at 2 °C/min up to 180 °C and held at that value for 3 h. A similar cure cycle was used for the prepreg composites. However, in this case, the epoxy/coPES blends were used to impregnate the carbon fabrics, which were then laid on the steel plate. The obtained stack was compacted using a vacuum bag at room temperature for 15 min. The stack was placed in an oven preheated to 130 °C for 30 min. The temperature was then increased to 180 °C and held at that value for 3 h. After this curing cycle, the laminates were also post-cured at 220 °C for 3 h. At the end of the curing cycle, the samples were left to cool down slowly to room temperature.

Hot-Stage Microscopy

Hot-stage microscopy was used to observe the dissolution of the coPES fibers in the epoxy resin at different temperatures. A Linkam THMS 600 hot stage (The McCrone Group, Westmont, IN, USA) with a TP-90 controller was fitted to an Olympus BX60 optical microscope (Milan, Italy). The unmodified epoxy resin was first preheated to the testing temperature. Next, a drop of resin was placed on a thin glass microscope slide onto which coPES fibers had previously been spun; another thin glass slide was then placed on top of the veil. The prepared sample was subsequently held in the hot stage at the testing temperature and observed under the optical microscope, with the time needed to dissolve the veil being noted. This procedure was repeated for testing temperatures of 130, 120, 110, 100, 90, 80, 70 and 60 °C.

Dynamic Mechanical Analysis (DMA)

DMA of the cured samples was carried out in single-cantilever bending mode using a dynamic mechanical thermal analyzer (TRITEC by Triton Technology, Mansfield, MA, USA). All specimens were vacuum dried at 40 °C overnight before testing. The tests were performed in accordance with ASTM E1640, using samples of size 30 mm × 10 mm × 5 mm. All cured samples were analyzed at a frequency of 1 Hz with a 2 °C/min heating rate. The maxima of the tanδ versus temperature curves were determined to identify the α-relaxations associated with the glass transitions, as sketched below.
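A minimal sketch of how a Tg is read off such a scan: generate a synthetic tanδ trace with a main relaxation and a small shoulder (the 206 °C and 166 °C positions mimic the peaks reported in the Results; the Gaussian peak shapes are purely illustrative) and take the temperature of the maximum.

```python
import numpy as np

# Synthetic tan(delta) trace: main alpha relaxation plus a small shoulder.
# Peak positions mimic values reported in the text; shapes are invented.
temps = np.linspace(100, 260, 801)                          # deg C
tan_delta = (0.60 * np.exp(-((temps - 206) / 12.0) ** 2)
             + 0.08 * np.exp(-((temps - 166) / 10.0) ** 2))
t_alpha = temps[np.argmax(tan_delta)]
print(f"alpha relaxation (tan delta maximum) at {t_alpha:.0f} deg C")
```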
Scanning Electron Microscopy (SEM)

SEM micrographs were obtained with a SEM EVO-MA15 by Zeiss, Cambridge, UK. The electrospun veils, the cured matrixes and the cured laminates were analyzed. The cured neat matrixes were fractured in liquid nitrogen and then etched, using a mixture of sulfuric acid/distilled water (3:2), before sputter-coating. For the etching treatment, the samples were immersed in the acid mixture and stirred for 20 min. The acid mixture was used to etch the epoxy phase, increasing the contrast between the thermoplastic and epoxy phases. The cured laminates were polished before etching. All etched samples were gold sputtered before the analysis, without any other treatment.

Results and Discussion

The results on the neat resin samples are presented first, while the data for the composite laminates reinforced with carbon fibers are discussed afterwards.

Neat Resin Samples

To evaluate the dissolution rate of the electrospun fibers at different temperatures, hot-stage microscopy was used. Figure 1 shows three representative screenshots of the experiment performed at 100 °C. Figure 1a shows the veil immersed in the epoxy resin at the start of the test. After a few seconds, the electrospun fibers were barely visible (Figure 1b), and, after 1 min, all fibers had disappeared because the veil was fully dissolved in the resin. The time at which no fibers were visible was recorded as the dissolution time.

All the dissolution times recorded at the different temperatures are plotted in Figure 2. The graph compares the dissolution times at different temperatures for the coPES 9k veils in two different epoxy resins cured by MDEA: TGAP and DGEBA. The data for fiber dissolution in the DGEBA/MDEA system were obtained previously and are reported here for comparison purposes [17]. The coPES veil showed similar dissolution times in the two resins between 100 °C and 130 °C. However, for temperatures lower than 100 °C, the veil showed faster dissolution in TGAP/MDEA than in DGEBA/MDEA. At 90 °C, the veil dissolved in 7 min in DGEBA/MDEA, while less than 1 min was needed to observe full dissolution in TGAP/MDEA. Furthermore, at temperatures lower than 80 °C, complete dissolution of the veils was not observed in DGEBA/MDEA, while in TGAP/MDEA the veils still dissolved; full dissolution in TGAP/MDEA was observed for temperatures down to 60 °C. These results confirmed that TGAP is a better solvent for coPES, as already demonstrated in previous studies [11]. Other authors proved that TGAP increases the miscibility in epoxy/PES blends due to its higher compatibility with PES [16].

To prepare the neat resin samples, the unmodified epoxy resin was poured into the aluminum dish where the electrospun veils had previously been laid up (Figure 3).
The impregnated veils were cured in an oven to study the veil dissolution. The coPES veil content varied from 10 wt% to 20 wt% in these samples. These polymer percentages were selected since they are the ones usually reported in toughening studies of PES/epoxy blends [6]. After 10 min at 130 °C, the samples became transparent, indicating full dissolution of the veils. At the end of curing, the samples became opaque. The samples obtained were cut for DMA and SEM investigation.

The unmodified TGAP/MDEA system showed one main tanδ peak centered at 206 °C and a small tanδ peak at 166 °C before post-curing (Figure 4a). A two-step decrease in the storage modulus (E′) was observed for the system cured at 180 °C (Figure 4b). For this system, the storage modulus started to increase at about 215 °C during the DMA. Similar behavior was observed by Kim and Inoue for TGAP cured by 4,4′-diaminodiphenylsulfone (DDS) at 180 °C for different curing times [19]. In their systems, Kim and Inoue observed the disappearance of the low-temperature peak upon increasing the curing time. The low-temperature peak was correlated to an epoxy oligomer phase, which was converted to a crosslinked phase, leading to a more homogeneous epoxy network at longer curing times. Similarly, in our samples, the lower-temperature peak disappeared upon post-curing at 220 °C, leading to the presence of one single wide peak centered at 214 °C (Figure 4a) and, for the storage modulus, only a one-step decrease after post-curing (Figure 4b).
However, it must be noted that, in contrast to what was reported by Kim and Inoue, in our samples we found an increase of the high-temperature peak instead of a decrease upon post-curing. This difference can be interpreted as an improved conversion of the unreacted epoxy groups, leading to a higher Tg, because post-curing was carried out at a higher temperature (i.e., 220 °C) than the first curing step (i.e., 180 °C).
The tanδ versus temperature behavior for the blends obtained by the impregnation of the coPES electrospun veil is reported in Figure 5. The modified blends after the first curing step showed a main relaxation peak and a shoulder at lower temperatures (Figure 5a). The peaks for each coPES composition appeared broader than for the unmodified resins. When increasing the coPES content, the peaks shifted to lower temperatures compared to the unmodified TGAP/MDEA. The tanδ behavior changed after post-curing at 220 °C (Figure 5b), showing a main peak centered between 205 °C and 212 °C and a smaller peak centered between 158 °C and 161 °C. Similar results were confirmed by analyzing the storage modulus curves (see Supplementary Materials, Figure S1a,b). The storage modulus curves showed a higher modulus in the glassy region for the modified systems compared to the unmodified resins. This can be explained, for the modified blends, by the reinforcing effect of the stiff thermoplastic domains in these systems. For temperatures above 205 °C, which corresponds to the Tg of pure coPES [17], the behavior was different, with the modified blends displaying a lower modulus than the unmodified blends. Varley et al. [20] reported three tanδ peaks after curing at 150 °C for 16 h for blends based on TGAP/DDS modified with polysulfone (PSF). The tanδ versus temperature turned to a two-peak behavior after post-curing at 205 °C for 2 h. The change upon post-curing was explained as the result of the conversion of the unreacted epoxy species, which was favored for systems displaying particulate morphologies. In addition, post-curing, by imparting greater mobility to all the species, allowed further phase separation to occur, resulting in the shift of the tanδ peak.
The systems studied in the present paper presented relevant differences compared to the blends analyzed by Varley et al. [20]. First, the curing agent used was MDEA in place of DDS. MDEA is known to decrease the miscibility window for epoxy/PES blends [16]. Second, the amino-phenol end groups of the coPES used here allowed the co-reaction and an improved compatibility of coPES with the epoxy monomers, as proved by several authors [21,22]. The use of MDEA led to phase separation in the early stage of curing, which, due to the increased compatibility of our coPES with the epoxy monomers, left a high amount of epoxy-curing agent species dispersed in the thermoplastic-rich domains. Van Overbeke et al. [21] predicted, for similar blends cured at 160 °C, a content of 50% of epoxy-curing agent species remaining trapped in the thermoplastic-rich phase. In addition, the lack of affinity between MDEA and the coPES could further decrease the MDEA content in the coPES-rich phase, resulting in a stoichiometric imbalance with a detrimental effect on epoxide conversion [23]. The presence of a high amount of not fully cured oligomeric epoxy-curing agent species can lead to a decrease of the glass transition temperature of the thermoplastic-rich phase. This decrease, occurring after the first cure step, was demonstrated by the low temperature peak occurring at temperatures (122-150 °C) lower than the Tg (205 °C, [17]) of the pure coPES. Upon post-curing, the increased mobility of the coPES-rich phase allowed epoxy-curing agent species to demix from the thermoplastic phase, purifying it and completing their conversion, resulting in the formation of a more highly crosslinked epoxy network with a dispersed thermoplastic-rich phase.
The tanδ curve versus temperature reported here showed distinct behavior compared to the blends reported previously [11]. However, it must be noted that the samples prepared in this paper underwent a more complex cure profile, needed to fully dissolve the electrospun veils, whereas in the previous paper the coPES powder was pre-dissolved in the epoxy monomer before adding MDEA to the blend.

Morphological analysis was carried out on samples both before and after post-curing. The results for post-cured samples are reported in Figure 6. The analysis revealed a predominant particulate morphology for all the samples. The sample with 10 wt% showed particles with diameters ranging from 0.5 to 1 µm, while for the sample at 15 wt% the particle diameters varied within 1-2 µm. The sample with 20 wt% of coPES showed bigger particles (1.2-3 µm) and some zones where phase inversion occurred (Figure 7). SEM analysis confirmed the full dissolution of the electrospun veils. The morphological analysis for the samples at 10 wt% and 15 wt% of coPES confirmed the DMA trend showing two tanδ peaks, due to the coPES-rich (i.e., the particles) and epoxy-rich phases. The morphological analysis of the sample with 20 wt%, with the appearance of phase-inverted regions, also confirmed the DMA behavior of this sample, which showed a wider tanδ peak due to the superposition of the two phases.

Composite Laminate Samples

Carbon fiber reinforced laminates were produced by infusion to validate the veil dissolution in composite processing. For this purpose, hybrid carbon fabrics, with electrospun fibers deposited onto their surfaces, were infused with the unmodified TGAP/MDEA resin. The amount of veil deposited was balanced against the areal weight of the carbon fabric to obtain the desired amount of thermoplastic in the interlaminar region of the composite; a worked example of this balance is sketched below. The coPES content varied, as for the neat resin samples, between 10 wt% and 20 wt%.
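To make this veil/fabric balance concrete, the following minimal sketch estimates the veil areal weight needed per ply. It is only an illustration with hypothetical numbers: the fabric areal weight, the resin mass fraction, and the assumption that the quoted wt% refers to the coPES fraction of the matrix (resin + coPES) are ours, not values reported in this work.

```python
def veil_areal_weight(fabric_aw, resin_mass_frac, copes_frac):
    """Veil areal weight [g/m^2] per ply for a target coPES fraction of the matrix.

    fabric_aw       -- carbon fabric areal weight [g/m^2] (hypothetical value below)
    resin_mass_frac -- resin mass fraction of the unmodified laminate, e.g. 0.35
    copes_frac      -- target coPES fraction of the matrix (resin + coPES), e.g. 0.15
    """
    # resin mass per unit area implied by the fabric areal weight
    resin_aw = fabric_aw * resin_mass_frac / (1.0 - resin_mass_frac)
    # solve w_v / (w_v + resin_aw) = copes_frac for the veil areal weight w_v
    return copes_frac * resin_aw / (1.0 - copes_frac)

# hypothetical example: 200 g/m^2 fabric, 35 wt% resin, 15 wt% coPES target
print(round(veil_areal_weight(200.0, 0.35, 0.15), 1))  # -> 19.0 g/m^2 per ply
```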
The post-cured samples were analyzed by DMA (Figure 8). The laminates containing 15 wt% and 20 wt% of coPES veils showed a sharp tanδ peak centered at 206 °C and 204 °C, respectively. The laminate modified by 10 wt% displayed a wide tanδ peak centered at 191 °C. This peak was much wider than the peaks observed for the other two modified systems and its height was lower. The results of the DMA analysis (Figure 8) thus showed a different trend compared to the neat samples containing the same veil percentages (Figure 5b). The morphological analysis of the cryo-fractured laminates confirmed the findings of the DMA analysis. Interlaminar morphologies markedly different from those of the neat resins were observed (Figure 9).
For the laminates containing 10 wt% of coPES, thermoplastic-rich particles (1.3-2 µm) denser and bigger than those observed for the corresponding neat resin sample (0.5-1 µm, Figure 6a) were found. The morphology of the laminates with 15 wt% and 20 wt% of coPES was co-continuous, whereas the analogous neat resin samples displayed particulate morphologies, with some areas of phase inversion for the 20 wt% neat resin sample.

The presence of the reinforcing fibers had a pronounced effect on the final properties of the composites. Several causes can explain this result: the carbon fibers in the laminate acted as nucleating agents and hindered thermoplastic diffusion across the sample [24], and the carbon fiber surface promoted stoichiometric gradients affecting the phase separation behavior [25]. Varley and Hodgkin [24] studied and explained the different morphological behaviors in terms of the improved nucleation and agglomeration experienced at the fiber surfaces. Similar effects seem to have occurred in the specimens analyzed here.
Conclusions

Viscoelastic and morphological properties of neat resins and fiber reinforced laminates modified with soluble coPES veils have been analyzed. The effects of different coPES veil contents and of different curing cycles were studied. The full dissolution of the coPES veil occurred after a few minutes at the processing temperatures of interest because of the high solubility of coPES in TGAP. This result is interesting for further developments of the soluble veil approach, because TGAP can be envisaged as a means to tailor the properties of complex epoxy blends when fiber dissolution is a requirement. TGAP is an interesting epoxy monomer in this regard, as it allows higher Tg than difunctional epoxy monomers [11]. The use of the multifunctional epoxy monomer TGAP resulted in a complex viscoelastic behavior after the first curing step at 180 °C for the unmodified blend, which showed two tanδ peaks in the DMA trace. The additional post-curing at 220 °C led to a single tanδ peak, which can be ascribed to the formation of a more homogeneous cured network. The blends modified with different coPES veil contents showed an even more complex behavior, because the coPES veil, after its dissolution, underwent phase separation with the formation of different morphologies. Neat resin samples showed multiple relaxations after the first curing step that shifted after post-curing because of demixing of epoxy-curing agent species from the thermoplastic-rich phase. This finding suggests that the final properties of TGAP-based systems, beyond the advantages of this monomer, are strongly dependent upon curing conditions. Similarly, when the resin was infused into carbon fabric hybridized with coPES veils, the final properties of the system changed compared to the neat resins. These results confirm TGAP as a monomer with potential benefits for improving the final properties of soluble-veil-modified composites but, at the same time, as a monomer that should be employed carefully in formulation design.
Atmospheric Tides in the Middle and Upper Atmosphere of Mars at Northern High Latitudes: A Comparison of MAVEN‐EUVM and MRO‐MCS Observations With Model Results

Much of the variability in the Martian thermosphere can be attributed to vertically propagating atmospheric tides that are known to achieve significant amplitudes in this region. Concurrent observations from different altitudes have been used previously to discern the vertical propagation characteristics of tides but have primarily focused on low latitudes. The spectrum of tides and their vertical evolution are thereby less constrained at high latitudes. The few studies that have focused on high latitudes identified wavenumber-3 structures, which were interpreted to originate mainly from the non-migrating tides SE1 and DE2. This paper presents the first analysis of MAVEN-EUVM solar occultation observations to deduce atmospheric tides in the Martian thermosphere. These are compared to tides observed by MRO-MCS in the middle atmosphere for six cases at high northern latitudes. To identify vertical propagation, wave signatures in the middle and upper atmosphere are compared and are found to be dominated by a mix of zonal wavenumbers 2 and 3 in fixed local time. MCS observations show that eastward propagating tides dominate, specifically highlighting SE1 near 76 km. Additionally, these observations indicate the presence of stationary planetary waves and terdiurnal tides. The Mars Climate Database also indicates the presence of SE1, DE2, DE1, S0, TW1, and T0 tides. A change in the dominant wavenumber component with local time is seen, which is attributed to the presence of all three diurnal, semidiurnal and terdiurnal components at these latitudes. The significant decrease in the diurnal tide amplitude indicates the effect of zonal mean wind on vertical propagation.

Introduction

Atmospheric tides are now a well-known, ubiquitous dynamic feature in observations of the Martian atmosphere. Tides appear as global-scale oscillations in atmospheric variables such as density, pressure, temperature and winds, with periods that are harmonics of the Martian solar day. These oscillations are mainly excited by the cyclic absorption of solar radiation by a rotating planet and its atmosphere (Forbes et al., 2002; Zurek, 1976). Certain tides generated in the lower atmosphere have the characteristics of vertically propagating waves and are capable of carrying energy and momentum from their source region to the upper atmosphere (Conrath, 1976; Wilson, 2002; Zurek, 1976). The energy carried by them is deposited at higher altitudes via dissipation, contributing to the acceleration of the background atmosphere (Forbes et al., 2002; Jones et al., 2019).

While tides arise mainly as a direct response to solar forcing, they are strongly influenced by the interaction of solar forcing with zonal inhomogeneities in surface topography and thermal properties such as surface albedo, dust loading, and water-ice clouds (Kleinböhl et al., 2013; Withers et al., 2003; Zurek, 1976). Following the mathematical representation and naming convention from Forbes et al.
(2002), the atmospheric response to such forcing can be represented by

$$\Delta T(z,\theta,\lambda,t) = \sum_{n,s} A_{n,s}(z,\theta)\,\cos\big(n\Omega t + s\lambda - \phi_{n,s}(z,\theta)\big), \qquad (1)$$

where A_{n,s} and ϕ_{n,s} are the amplitude and phase of the wave, z and θ are the altitude and latitude, t is universal time (UT), t_LT is local time (LT), Ω = 2π/sol is the planetary rotation rate, λ is the longitude, n indexes the period, where n = 1, 2, 3 refers to diurnal, semidiurnal and terdiurnal, or periods of 1 sol, 1/2 sol and 1/3 sol, respectively; and s represents the zonal wavenumber, or the number of cycles per 360° of longitude. Substituting t = t_LT − λ/Ω describes the same oscillation in terms of local time, which more conveniently matches the longitude-time dependencies in typical spacecraft observations:

$$\Delta T(z,\theta,\lambda,t_{LT}) = \sum_{n,s} A_{n,s}(z,\theta)\,\cos\big(n\Omega t_{LT} + (s-n)\lambda - \phi_{n,s}(z,\theta)\big). \qquad (2)$$

It is clear from Equation 2 that the spectrum of tides can be classified broadly into two categories, viz., the s = n components, called migrating tides, which appear as longitude-independent features in a fixed local time reference frame, and the s ≠ n components, called non-migrating tides, which are longitude-dependent and can migrate eastward or westward with phase speeds greater than Ω = 2π/sol. An eastward (E) propagating diurnal (D) tide is referred to as DEs, where "s" stands for the zonal wavenumber. If the tide propagates westward (W), it is referred to as DWs, whereas for the eastward or westward propagating semidiurnal (S) tides, "S" replaces the "D" and they are called SEs or SWs, respectively.

While the sun-synchronous migrating tides can cause large perturbations at thermospheric heights (Forbes et al., 2023), they do not produce variation with longitude in a fixed local time reference frame. Non-migrating tides, the subject of the present paper, can cause large amplitude variations with longitude at thermospheric heights (e.g., Keating et al., 1998; Withers et al., 2003). These tides are generated in the lower atmosphere and propagate into the upper atmosphere, gaining amplitude as density decreases with height to maintain constant energy density.

It can be seen from Equation 2 that |s − n| is the wavenumber associated with longitudinal variation in a fixed local time reference frame. A few previous studies have successfully delineated values of s and n; however, many of these were focused on identifying migrating tides in the lower atmosphere (Fan, Forget, et al., 2022; Fan, Guerlet, et al., 2022; Guerlet et al., 2023; Kleinböhl et al., 2013). Wu et al. (2015) successfully separated the values of s and n to identify both migrating and nonmigrating tides in the middle atmosphere using MCS observations. While these studies have largely focused on the middle and lower atmospheres, distinguishing between the different tides and minimizing aliasing has been particularly challenging in the upper atmosphere owing to the limited spatio-temporal satellite coverage.

Observations that have repeatedly sampled regions closer to the equator at low and mid-latitudes have shown that the wavenumber-2 and -3 components dominate the variability in this region. These signatures have frequently been attributed to the DE1 and DE2 tides, confirming some previous modeling results.
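As a quick illustration of Equations 1 and 2, the minimal Python sketch below (our own, not part of the original analysis) evaluates a single tidal mode in a fixed local time frame and shows that a mode with harmonic n and zonal wavenumber s produces |s − n| cycles in longitude. Here DE2 is written as (n = 1, s = −2), since eastward propagating modes carry negative s in this sign convention.

```python
import numpy as np

OMEGA = 2.0 * np.pi / 24.0  # rotation rate in radians per Mars hour (24 Mars hours = 1 sol)

def tide(lon_deg, t_lt, n, s, amp=1.0, phase=0.0):
    """Equation 2: amp * cos(n*OMEGA*t_lt + (s - n)*lon - phase), lon in degrees east."""
    lon = np.radians(lon_deg)
    return amp * np.cos(n * OMEGA * t_lt + (s - n) * lon - phase)

lons = np.linspace(0.0, 360.0, 361)
de2 = tide(lons, t_lt=6.0, n=1, s=-2)  # DE2: |s - n| = 3 cycles around the planet
print(np.round(de2[::60], 2))          # samples every 60 deg trace three full waves
```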
Lo et al. (2015) showed the presence of a strong wavenumber-2 signature, which was attributed to the DE1 tide, in the MAVEN-IUVS (Mars Atmosphere and Volatile Evolution mission's Imaging Ultraviolet Spectrograph) densities derived from the CO2+ UVD emissions at latitudes between 0 and 30° N. Concurrent observations from MAVEN-IUVS and MAVEN-NGIMS (Neutral Gas and Ion Mass Spectrometer) revealed strong wavenumber-2 signatures at low northern latitudes whose amplitude varied by 50% between 5° and 25° N, which is consistent with the behavior of the DE1 tide (England et al., 2016). Schneider et al. (2020) showed the presence of a wavenumber-3 structure near the equator in the NO nightglow brightness, which is interpreted as the DE2 tide by measuring phase change with local time.

At higher latitudes, observations have indicated that the SE1 and S0 tides are the primary contributors to the wave structure. Wavenumber-3 features were identified by Bougher et al. (2001) in the Mars Global Surveyor (MGS) accelerometer data at high northern latitudes between 60 and 65° N, which is attributed to the eastward propagating semidiurnal tide (SE1). Cahoy et al. (2007) also reported the contribution of the SE1 tide to the zonal wavenumber-3 structure observed in the MGS electron densities between 60° and 85° N. The altitude-latitude structure of modeled relative density amplitudes sampled at locations of MGS aerobraking Phases I and II showed that the SE1 amplitudes maximized at high northern latitudes (Angelats i Coll, 2004; Wilson, 2002). The presence of wave-like oscillations with three peaks was identified near ∼68° N in temperature profiles of the Phoenix Mars probe and observations from the Mars Climate Sounder (MCS) instrument aboard the Mars Reconnaissance Orbiter (MRO) spacecraft (Withers & Catling, 2010). The authors suggested that the DE2 and SE1 tides are likely responsible for this variability. Recently, England et al. (2019) compared high latitude upper atmospheric observations from MAVEN-IUVS and MAVEN-NGIMS to MRO-MCS observations of the middle atmosphere and identified the presence of a strong wavenumber-3 signature at high southern latitudes, which has been attributed to the SE1 tide. While this study provided much needed insight into tides reaching the upper atmosphere at high latitudes, only two cases at high southern latitudes were analyzed, and no model comparisons were made over a large vertical range.
The aforementioned studies make it evident that the prevailing nonmigrating tides influencing the longitudinal variability in the upper atmosphere vary from the equator toward the poles. Theoretical predictions suggest that the DE1 tide can propagate vertically only at latitudes close to the equator and is confined within ±30°, whereas the semidiurnal components may propagate vertically at higher latitudes (Forbes & Hagan, 2000). Tides that propagate vertically are a well-known mechanism for transferring energy and momentum by interacting with each other as well as with the background circulation. General circulation model simulations have shown that upward propagating tides play a crucial contributing role in the winter polar warming features observed in the lower atmosphere and the thermosphere (Bell et al., 2007; Wilson, 1997). Moudden and Forbes (2008) investigated the effects of vertically propagating thermal tides on the zonal mean density, temperature and wind structure in the 90-160 km region using model simulations. By isolating the effects of the non-migrating tides from the migrating tides, the authors show that the changes in all zonal mean fields are dominated by the non-migrating tides, owing to their longer vertical wavelengths that allow them to reach higher altitudes before dissipating. Forbes et al. (2020) performed a comprehensive climatological analysis of tides in the middle atmosphere using MCS temperature measurements and compared it with the Mars Climate Database (MCD v5.3). The findings of this study report good agreement in the amplitudes and structure at 76 km between model tides and MCS climatology. It also demonstrated that MCD captures the thermospheric wavenumber-2 structure exhibited in the MGS density data with comparable amplitudes and phases. While good agreement with MCD is seen near 105-115 km when compared to MGS data, the authors note significant knowledge gaps and differences between model and observations. Examples of these are the poorly constrained distribution of wind in the middle and upper atmosphere and the model's tendency to underestimate the vertical extension of dust. Therefore, a thorough comparison with the model and validation against observed tides at higher altitudes is required to better predict thermospheric variability due to atmospheric tides. Kumar et al. (2022) inferred vertical propagation of tides at low latitudes by combining observations over a large range of altitudes (from 60 to 165 km) from MAVEN-IUVS, MAVEN-NGIMS and MRO-MCS. Results from this study showed that the observations were dominated by wavenumber-2 and -3 signatures. Further, comparing with the model, the authors found that the likely underlying tides were the diurnal components DE1 and DE2. It was also noted that the model performed much better at lower altitudes than at higher ones.

Here, we focus on characterizing tides and vertical propagation by combining observations at different altitudes and comparing them with model results, similar to Kumar et al. (2022), but with a focus on the high northern latitude region. Data from MAVEN's Extreme Ultraviolet Monitor (EUVM) will provide the observations of the upper atmosphere, which will be compared with MRO-MCS observations of the middle atmosphere. As stated previously, Forbes et al. (2020) demonstrated that the model compares well with MCS observations in the middle atmosphere and in some cases in the upper atmosphere. Kumar et al.
(2022) focused on low latitudes and also showed that the model compared well with MCS observations in the middle atmosphere. However, while the model captured the dominant wavenumber in the upper atmosphere, the amplitudes and phases did not agree with the observations. In this study, observations from high northern latitudes are compared with the Mars Climate Database, highlighting any discrepancies between observations and the model. Such a comparison becomes important to assess the model's ability to bridge the data gap in the region where observations are lacking, especially when inferring the propagation of tides from the middle to the upper atmosphere. Furthermore, this comparison aims to help understand how the model's performance at high latitudes compares with that at low latitudes. The following sections are organized as follows: the data utilized in this study and the data selection are described in Section 2, the methodology employed to analyze tides and the corresponding results for a single case are described in Section 3, followed by discussion and conclusions in Sections 4 and 5, respectively.

Data Description and Selection

The data used in this study consist of temperatures derived from the solar occultation measurements made by the EUVM instrument aboard MAVEN and remote sensing measurements made with the MCS instrument on MRO. MCD consists of meteorological fields based on the Mars Planetary Climate Model (Mars PCM) and is used in conjunction with the observational data here, allowing for a comparative analysis between the results produced by the model and the observations. This will be done both by sampling MCD at the locations of the observations and through a spatial-temporal decomposition of the MCD output. With a focus on high latitudes, the upper atmospheric observations are obtained from EUVM temperature profiles that span an altitude range of ∼120-200 km, whereas MCS provides observations of the middle atmosphere from 30 to 80 km. MCD covers this entire range of altitudes and the subsequent results will be referenced against both the middle and upper atmospheric observations.

To combine and compare the different datasets, it is essential to have overlap in the date ranges and latitudes, which determines the ability to estimate wavenumbers, tidal amplitudes and phases, and vertical propagation characteristics. Since this study is focused on understanding the vertical propagation of tides at high latitudes, the data selection (i.e., cases) is driven by EUVM. The following subsections discuss data from each instrument, beginning with EUVM, followed by MCS as well as the model, along with the caveats associated with each dataset in detail.
EUVM

The Extreme Ultraviolet Monitor instrument consists of filter radiometers that can record solar EUV irradiance in three broad wavelength bands whose ranges span 0.1-7 nm, 17-22 nm, and 117-125 nm, respectively (Eparvier et al., 2015). The EUVM solar occultation measurements have been used to derive thermospheric density measurements using the extinction ratio (ER), which is the ratio of the observed solar intensity at a tangent height to that at the top of the atmosphere. The primary challenge in using solar occultation measurements to retrieve thermospheric densities is accounting for the spatial extent of the solar disk, which is larger than the thermospheric scale heights on Mars. This issue is mitigated by incorporating solar images and reference atmospheres in a forward model of the extinction ratio, which is used to determine the vertical profiles of the column density. The corresponding temperature profiles are then inferred from the derived densities by computing the pressure at each altitude, followed by applying the ideal gas law under the hydrostatic equilibrium assumption. Each retrieved temperature profile has a vertical sampling of less than ∼1 km and the optical depth approaches unity near 130 km. The description above is a very brief summary of the method used to obtain the vertical temperature profiles; further details can be found in Thiemann et al. (2018).

The data distribution of all available EUVM observations for the years 2015, 2017, and 2018 can be seen in Figure 1a. Due to the nature of solar occultation measurements, the local time is fixed at the terminator near ∼6 a.m. and 6 p.m. for observations close to the equator. Generally, observations that are sampled ∼12 hr apart in local time give an excellent opportunity to implement methods that separate semidiurnal tides from diurnal tides (England et al., 2019; Guzewich et al., 2012; Lee et al., 2009; Steele et al., 2021). However, observations from EUVM at these fixed local times were taken over different months (see Figure 1a); therefore, analysis by implementing such a method is not possible, since mixing observations from different months could lead to a bias in the results, as tides are known to evolve with season (L_s) (Forbes et al., 2020). Therefore, this analysis focuses on fixed high latitudes, where the number of data points is maximized across a small range of local times. To ensure complete longitudinal coverage, observations are limited to latitudes north of 50°. This results in the selection of six cases, as detailed in Table 1, whose latitude-local time coverage is shown in Figures 1b and 1c. Note that for a particular latitude range, each case is divided into two segments such that the observations from these two segments are shifted by 4 hr in local time and less than 5° in L_s. Analyses of these cases are performed separately, and results are compared. These results are expected to be broadly similar given the small separation in L_s between any two segments. Cases are labeled in the order of increasing latitude; for example, the lowest latitudes sampled between 55-65° N provide two cases separated by L_s and local time within this latitude range, and are labeled as Case 1A and 1B, respectively.
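The last step of the retrieval chain summarized at the start of this subsection (pressure from hydrostatic balance, then temperature from the ideal gas law) can be sketched as below. This is only a schematic of that final step under simplifying assumptions (a pure CO2 atmosphere, constant gravity, and an assumed isothermal top-boundary temperature), not the full forward-model retrieval of Thiemann et al. (2018).

```python
import numpy as np

K_B = 1.380649e-23            # Boltzmann constant [J/K]
M_CO2 = 44.01 * 1.66054e-27   # CO2 molecular mass [kg]
G_MARS = 3.71                 # gravity [m/s^2]; altitude dependence ignored here

def temperature_from_density(z, n, t_top=180.0):
    """Infer T(z) from a CO2 number density profile via hydrostatic balance.

    z     -- altitudes [m], ascending;  n -- number densities [m^-3] at z
    t_top -- assumed temperature at the top boundary [K] (hypothetical value)
    """
    p = np.empty_like(n)
    p[-1] = n[-1] * K_B * t_top                    # ideal-gas boundary condition
    for i in range(len(z) - 2, -1, -1):            # integrate dp/dz = -n m g downward
        dz = z[i + 1] - z[i]
        p[i] = p[i + 1] + 0.5 * (n[i] + n[i + 1]) * M_CO2 * G_MARS * dz
    return p / (n * K_B)                           # ideal gas law: T = p / (n k_B)
```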
Densities and temperatures at thermospheric altitudes respond strongly to changes in solar EUV flux (Thiemann et al., 2018). Prior to performing tidal analysis, it is essential to remove the effects of changes in solar EUV from the EUVM data. In order to do this, the irradiance measured by EUVM in the 17-22 nm wavelength band is examined. Irradiance is also measured in the 0.1-7 nm wavelength band, but no irradiances are available for the year 2018 since this channel stopped measuring due to a fault in the instrument. Following England et al. (2022), data from days with high solar EUV are removed from the analysis by computing the residuals between the daily maximum solar irradiance and its 27-day rolling mean, as shown in Figure 2a. This effectively removes the periodicities associated with the 27-day solar rotation period. Any day that falls outside the 2σ envelope around these residuals is excluded from the analysis. Solar EUV was found to be nearly constant during all time periods considered in this study except October 8 and 10 (in the year 2015), when high solar flux events were recorded; therefore, data from these days are excluded from the analysis.

MCS

The Mars Climate Sounder instrument is an infrared radiometer onboard the MRO spacecraft, which is in a polar sun-synchronous orbit (McCleese et al., 2007). MCS measures the limb of the atmosphere from the surface to ∼80 km with a vertical resolution of 5 km. Vertical profiles of atmospheric temperature used here are obtained from the three channels that measure the 15 μm absorption band of CO2 (Kleinböhl et al., 2009). The observations taken by MCS are fixed near 3 a.m. and 3 p.m. local time, providing global coverage as the planet rotates under the spacecraft. This study uses MCS data selected for the same time periods (cases) identified by the EUVM dataset, at the same latitudes but at different local times. The difference in local times sampled by the two instruments will likely show up as a difference in the phase of the wavenumber signature observed. If tidal phases (ϕ_n,s) are assumed to remain constant with t, then it can be seen from Equation 2 that the phase of the tidal signature ϕ (defined as the longitude of the first peak, which is determined by the terms nΩt_LT and ϕ_n,s) changes when sampled at different local times (t_LT). The MCS observations are selected to reflect the same date ranges spanned by the EUVM cases, but, to ensure sufficient longitudinal coverage, a period of ∼one month (∼30 sols) of data centered on the EUVM date range is selected. Since tidal signatures in the Martian atmosphere evolve on timescales that span several weeks to months (Mazarico et al., 2008; Withers et al., 2003), it is reasonable to assume that the identified tidal signatures remain constant over the month of MCS data used in this analysis.
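A minimal sketch of the solar activity screening applied to the EUVM record above is given below, assuming the daily maxima are held in a pandas Series; the variable names are ours, and the thresholding simply excludes days outside the 2σ envelope of the residuals about the 27-day rolling mean.

```python
import pandas as pd

def flag_high_euv_days(daily_max, window=27, n_sigma=2.0):
    """Return a boolean Series marking days to exclude from the tidal analysis.

    daily_max -- pd.Series of daily maximum 17-22 nm irradiance, indexed by date
    """
    rolling_mean = daily_max.rolling(window, center=True, min_periods=1).mean()
    residuals = daily_max - rolling_mean      # removes the 27-day rotation signal
    return residuals.abs() > n_sigma * residuals.std()
```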
Mars Climate Database (MCD)

The MCD is a database that provides mean values and statistics of the main meteorological variables derived from the Mars Planetary Climate Model (Mars PCM), formerly known as the LMD Mars Global Climate Model (LMD Mars GCM) developed by the Laboratoire de Météorologie Dynamique. MCD Version 6.1 (Millour et al., 2017) consists of five broad datasets called "scenarios" that are generated by accounting for the variation in dust and solar EUV conditions in various combinations. In this study, we use the "Mars Years" scenario, which pertains to individual Mars Years (MY 24 to MY 34) and best represents the day-to-day EUV forcing conditions and dust loading for a specific Mars year. The MCD outputs are stored on a 5.625° by 3.75° longitude-latitude grid and extend from the surface to ∼300 km. All variables in MCD are stored in 12 Martian months, each spanning 30° of solar longitude (L_s). As noted in Section 1, Forbes et al. (2020) used MCD to characterize tides, and comparison with MCS observations at 76 km showed good agreement at this altitude. Given this agreement between the model and observations in the middle atmosphere, here MCD outputs are compared with observations of tides in the middle and upper atmospheres from MCS and EUVM, respectively.

Analysis and Results

Having selected a subset of the data from any of the aforementioned datasets, the variation in temperature with longitude is quantified using a mathematical model in the form of a Fourier series. Given a set of observations, the variation of temperature with longitude in a fixed local time reference frame is modeled by expressing Equation 2 as a sum of sinusoids:

$$T(\lambda) = a_0 + \sum_{n=1}^{4} A_n \cos\big(n\lambda - \phi_n\big), \qquad (3)$$

where T is the temperature (the dependent variable), λ is the longitude (the independent variable), a_0 is the mean temperature, A_n is the amplitude of the nth harmonic and ϕ_n is the phase of the nth harmonic.

To identify zonal wavenumbers, a least squares estimation method is applied to fit the four-component harmonic model described above to the derived temperature data. This procedure is hereafter referred to as the wavenumber 1-4 fit. Uncertainty in the wavenumber 1-4 fit is quantified by considering the spread in the data, essentially estimated by computing the mean squared error between the measured and fitted values. We apply this method to the datasets selected from EUVM and MCS, as well as to the MCD output. While there is no way of separating the individual contributions of the different tides present by utilizing the EUVM dataset alone, it is possible to infer this information by combining analysis results from the MCS observations and the MCD output. The remainder of this section discusses the analysis and results of Case 1A as an example; the results from all other cases are provided in the Supporting Information S1.
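A minimal sketch of the wavenumber 1-4 fit is given below (our own illustration): the harmonic model of Equation 3 is written as a linear design matrix in cos(nλ) and sin(nλ) and solved by least squares, after which amplitudes and phases follow from the paired coefficients.

```python
import numpy as np

def wavenumber_fit(lon_deg, temp, max_n=4):
    """Least-squares fit of T(lon) = a0 + sum_n A_n cos(n*lon - phi_n), n = 1..max_n."""
    lon = np.radians(np.asarray(lon_deg))
    cols = [np.ones_like(lon)]
    for n in range(1, max_n + 1):
        cols += [np.cos(n * lon), np.sin(n * lon)]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, np.asarray(temp), rcond=None)
    a0 = beta[0]
    c, s = beta[1::2], beta[2::2]          # cosine and sine coefficients per n
    amps = np.hypot(c, s)                  # A_n
    phases = np.degrees(np.arctan2(s, c))  # phi_n; first peak east of 0 deg is phi_n / n
    return a0, amps, phases
```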
EUVM Results

As mentioned in Section 2.1, complete longitudinal coverage is required to characterize tides. The location of the EUVM data as a function of longitude and latitude for Case 1A is shown in Figure 2b and illustrates that this criterion is satisfied. Data selected for Case 1A span latitudes between 55 and 65° N and a local time range of ∼2.8 hr (0.5-3.3 hr LT), which is much smaller than the period of the semidiurnal (12 hr) and diurnal (24 hr) tides. Therefore, it is assumed that all the data come from a single local time, which will not affect the ability to identify wave signatures, although averaging across different phases of the wave will underestimate the wave amplitudes. At higher altitudes, errors are introduced in the temperature retrievals due to the assumption of a pure CO2 atmosphere, since the contribution of O cannot be neglected at altitudes over 200 km (Thiemann et al., 2018). Furthermore, above 190 km, observations are missing from multiple longitudes; therefore, the altitude range is limited to 130-190 km, and these data are binned into 5 km altitude bins and 30° longitude bins. Figure 2c shows the binned temperatures, which are then averaged to reveal a clear wave structure identifiable at multiple altitudes.

To characterize the wave structure, the wavenumber 1-4 fit is applied to the EUVM-derived temperatures in each 5 km bin between 130 and 190 km. Figure 3 shows four sampled altitude bins that capture a clear wave structure with two peaks. The amplitude of the wave appears to change coherently between the altitude bins shown in Figure 3. The derived amplitudes show that the wavenumber-2 component is the strongest and that its amplitude increases with altitude until ∼155 km. While the wavenumber-2 signature dominates this spectrum, other waves are also present, notably the wavenumber-1 and -3 components, which also display strong amplitudes.

MCS Results

MCS temperature profiles are obtained roughly 12 hr apart in local time and are fixed near 3 a.m. and 3 p.m. LT, except at high latitudes, owing to the MRO orbit. This kind of observation allows for the separation of the diurnal from the semidiurnal tides by examining the summed and differenced temperatures from the two local times (England et al., 2019; Guzewich et al., 2012; Lee et al., 2009). Following the description in Guzewich et al. (2012), a tidal perturbation in the temperatures measured at longitude λ, local solar time t_LT, altitude z and latitude θ can be described using Equation 2 as

$$T'_{n,s}(\lambda, t_{LT}) = A_{n,s}\cos\big(n\Omega t_{LT} + (s-n)\lambda - \phi_{n,s}\big), \qquad (4)$$

where A_{n,s} and ϕ_{n,s} are the amplitude and phase of the oscillation and are functions of z and θ. For the case of sun-synchronous observations from MCS, the local solar times of the ascending and descending portions of the orbit are fixed near 3 a.m. and 3 p.m.
local time, and can therefore be written as t_asc − t_des = 12 hr, which relates a tidal perturbation in the temperatures at t_asc, given by T′_n,s(λ, t_asc), to that at t_des, given by T′_n,s(λ, t_des), as follows:

$$T'_{n,s}(\lambda, t_{asc}) = (-1)^n\, T'_{n,s}(\lambda, t_{des}). \qquad (5)$$

Therefore, at a given longitude, the magnitude of the diurnal tide is given by taking half of the difference between two observations taken at two local times 12 hr apart:

$$T'_{diff}(\lambda) = \tfrac{1}{2}\,\big[T'(\lambda, t_{asc}) - T'(\lambda, t_{des})\big]. \qquad (6)$$

Summing over all values that can be taken by n and s, and substituting Equation 5 in Equation 6, we get

$$T'_{diff}(\lambda) = \tfrac{1}{2}\sum_{n,s}\big[(-1)^n - 1\big]\, T'_{n,s}(\lambda, t_{des}). \qquad (7)$$

It is clear from Equation 7 that when n is even, or when n = 0 (which represents stationary planetary waves), the terms on the right-hand side of Equation 7 cancel out and become zero, therefore leaving only oscillations with odd values of n present in T′_diff. Similarly, the magnitude of the semidiurnal oscillations can be obtained by taking half of the sum of two observations taken at two local times 12 hr apart; only oscillations with even values of n will be present in T′_sum, which is given by

$$T'_{sum}(\lambda) = \tfrac{1}{2}\sum_{n,s}\big[(-1)^n + 1\big]\, T'_{n,s}(\lambda, t_{des}). \qquad (8)$$

While this method is powerful in separating the diurnal and semidiurnal components, a few caveats apply: it is not possible to distinguish higher order harmonics like the terdiurnal tides from the diurnal tides in the T′_diff field, and it is likewise not possible to distinguish between the stationary planetary waves and the semidiurnal components in the T′_sum field. Furthermore, observations are ∼12 hr apart in LT only at latitudes near the equator, and this difference in LT reduces as the spacecraft moves to higher latitudes and passes over the poles. Therefore, for each case analyzed here, the difference in local time between the ascending and descending portions of the orbit varies with the latitude being sampled. For some of the cases, measurements are obtained 11 hr apart, whereas for others they are 10 hr apart. The analysis therefore needs to be interpreted with some caution for the cases that are ∼10 hr apart in LT, since the differenced temperature field may include the terdiurnal tide, which has a period of 8 hr of LT. The presence of the terdiurnal tide will be taken into account using a new technique outlined later in this section, along with a spectral decomposition of the full tidal spectrum from the model, which will be discussed in Sections 3.3 and 4, respectively.

MCS data are available for the same latitude range as the EUVM data, from 55 to 65° N, for Case 1A. The temperature profiles are selected from two local time ranges which span ∼1 hr each and are centered at 3.2 and 14.3 hr LT. These data are then binned onto a 4 km by 30° altitude-longitude grid and averaged. The diurnal components are identified by examining the difference between the ascending and descending portions of the orbit, whereas the semidiurnal components are identified by examining the sum of these data. A wavenumber 1-4 fit is performed at each 4 km altitude step on the summed and differenced temperature fields. The phase of each individual wavenumber component is defined as the longitude of the first peak, or maximum, east of 0°.
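The parity argument in Equations 5-8 can be checked with a small synthetic experiment (our own illustration, not part of the original analysis): a diurnal and a semidiurnal mode are sampled 12 hr apart in local time, and the half-difference retains only the odd-n mode while the half-sum retains only the even-n mode.

```python
import numpy as np

OMEGA = 2.0 * np.pi / 24.0
lons = np.radians(np.linspace(0.0, 360.0, 721))

def t_prime(t_lt, n, s, amp):
    """Equation 4 evaluated on the longitude grid."""
    return amp * np.cos(n * OMEGA * t_lt + (s - n) * lons)

# synthetic field: DE2 (n=1, s=-2, amp 5 K) plus SE1 (n=2, s=-1, amp 4 K)
asc = t_prime(3.0, 1, -2, 5.0) + t_prime(3.0, 2, -1, 4.0)
des = t_prime(15.0, 1, -2, 5.0) + t_prime(15.0, 2, -1, 4.0)

t_diff = 0.5 * (asc - des)  # odd n survive  -> pure DE2 remains
t_sum = 0.5 * (asc + des)   # even n survive -> pure SE1 remains
print(np.abs(t_diff).max(), np.abs(t_sum).max())  # ~5.0 and ~4.0
```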
As shown in Figure 4, the reconstructions of the wavenumber-2 and -3 components have the highest amplitudes in both the LT summed and differenced temperature fields above 60 km. On comparing the wave amplitudes at 76 km shown in Table 2, we see that the wavenumber-3 component in the LT sum has the biggest amplitude, followed closely by the wavenumber-2. From Figure 4, we also see that the observed changes in both amplitude and phase are coherent from one altitude bin to the next, especially at altitudes upward of 60 km. Figure 4b shows that the phase (longitude of maxima) has an eastward tilt with height between 70 and 90 km in the reconstructed local time summed temperatures. The eastward progression with height alone is not sufficient to delineate the possible tides responsible for the wavenumber-3 structure in fixed LT, which could be the SE1 or SW5 tide or the stationary planetary wave SPW3 (s = 3, n = 0). Moreover, since the local times of the ascending and descending nodes are less than ∼12 hr apart, a contribution from the T0 tide may also be present.

While there are clear advantages to the LT sum and difference method described above, there are a few caveats associated with it, the most important of which is the aliasing of the observations related to sampling from a sun-synchronous orbit with t_LT = constant. For example, observations limited to just two local times cause stationary planetary waves to be aliased with the semidiurnal tides, and it is therefore not possible to distinguish between them. In addition, as seen from Equation 2, it is not possible to distinguish between wave components that share the same value of |s − n| using data at only one altitude. While MCS observations are typically measured "in-track", fixed near 3 a.m. and 3 p.m. LT, additional local times have also been measured in modified observation campaigns, giving "cross-track" and "off-track" measurements (Kleinböhl et al., 2013). While this strategy provides a total of six local time bins near the equator to about 60° N and S, the local-time sampling is still sparse, since the observed local times are clustered near the "in-track" observations. Kleinböhl et al. (2013) used MCS observations at these additional local times to study the migrating semidiurnal tide. The authors showed that the semidiurnal tide had a dominant response throughout the Martian year but did not investigate the non-migrating tides. Wu et al. (2015) investigated both migrating and nonmigrating tides using MCS observations at additional local times and reported the presence of DW3, DW2 and SW1 tides for the first time in the middle atmosphere. However, the authors did not investigate the presence of stationary planetary waves or terdiurnal tides. Here, similar to Wu et al.
(2015), MCS data at additional local times are used to derive the amplitudes of the nonmigrating tides, and these are compared with the results obtained from the LT summed and differenced temperature fields. This is done by fitting the tides inferred from the LT summed and differenced temperatures to all available data in longitude-local-time space. The amplitudes obtained for the tides are then compared to the results obtained from the LT sum-difference method. A controlled experiment in Appendix A shows the application of this method to idealized tides sampled at MCS local times, which yields good results. Additionally, tidal amplitudes obtained from a control case that samples MCD at the locations of all available MCS observations (rather than just two local times) are also compared with those obtained from an FFT of the full diurnal cycle from MCD for Case 1A, to establish a degree of uncertainty on the resultant fit. It is important to note that this method requires adequate local time sampling to separate the tides of interest. In Cases 3A and 3B, however, the local time samples are too sparse to effectively distinguish between these tides, rendering the method unsuitable for those cases.

The method described above is applied to the MCS data by fitting tides inferred from the local time sum and difference temperature fields. For Case 1A, these tides are the eastward propagating SE1, DE1, and DE2, the zonally symmetric tides S0 and T0, along with the westward propagating tides that give rise to the same |s − n| zonal wavenumber, which are SW4, SW5, DW3, DW4, and TW1. The stationary planetary wavenumbers 2 and 3 (SPW2 (s = 2, n = 0) and SPW3 (s = 3, n = 0)) are also fitted to these data, since they can be aliased into the LT summed temperature field and cannot be separated out. The MCS temperature profiles at the available local times are binned and averaged in 0.5 hr LT bins and 30° longitude bins. All the selected tides are fitted together to the data between 30-80 km in 4 km altitude steps. The amplitudes of all the fitted eastward and westward propagating tides as a function of altitude are shown separately in Figures 5a and 5b, respectively. The wave amplitudes of the eastward propagating components are clearly larger than those of their westward counterparts, which is in line with expectations and confirms results from previous studies (Forbes et al., 2002; Wilson, 2000). SE1 dominates this tidal spectrum above 72 km, followed closely by DE2, whose amplitude is bigger than the wavenumber-3 amplitude in the LT differenced fields, as shown in Table 2. Similarly, S0, which contributes to the wavenumber-2 structure, is found to have a lower amplitude than indicated in the LT summed fields. Both SPW2 and S0 give rise to the same |s − n| = 2 zonal wavenumber structure in a fixed local time reference frame. Figure 5a shows that SPW2 could be present in the tidal field with non-negligible amplitude, becoming comparable to S0 near 76 km. The terdiurnal tide T0 is shown to be negligible, and the amplitude of the TW1 tide is very small up to 72 km. However, between 76 and 90 km, the amplitude of TW1 becomes comparable to S0 but remains smaller than the dominant tides. It is important to note that MCS observations are less reliable above 76 km, as demonstrated in Figure A3, where the uncertainties on the wave amplitudes increase above this altitude.
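A schematic version of this simultaneous fit is sketched below (our own illustration): each candidate (n, s) mode contributes a cosine/sine column pair evaluated at the binned longitude-local-time samples, and all modes are solved together by least squares. Eastward propagating modes carry negative s in the sign convention of Equation 2, and, as noted above, adequate local time coverage is required for the modes to be separable.

```python
import numpy as np

OMEGA = 2.0 * np.pi / 24.0

def fit_tides(lon_deg, t_lt, temp, modes):
    """Fit several (n, s) tidal modes simultaneously to binned T(lon, t_lt) samples.

    modes -- list of (n, s) pairs, e.g. SE1=(2,-1), DE2=(1,-2), S0=(2,0), SPW2=(0,2)
    Returns the mean temperature and a dict of fitted amplitudes per mode.
    """
    lon = np.radians(np.asarray(lon_deg))
    t = np.asarray(t_lt)
    cols = [np.ones_like(lon)]
    for n, s in modes:
        arg = n * OMEGA * t + (s - n) * lon   # phase argument of Equation 2
        cols += [np.cos(arg), np.sin(arg)]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, np.asarray(temp), rcond=None)
    amps = np.hypot(beta[1::2], beta[2::2])   # amplitude from each cos/sin pair
    return beta[0], dict(zip(modes, amps))
```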
In general, the method seems to confirm the dominance of the SE1 tide near 76 km, but the incomplete coverage in local time could be underestimating its amplitude while also leading to an overestimation of SPW2, which can be seen in the MCD-based analysis in Figure A3. The presence of DE2 with significant amplitude in Figure 5a, but not in the LT differenced fields, is likely due to the presence of terdiurnal tides, which cannot be separated from the diurnal components. Given the available local times, the tidal amplitudes derived from this method are most reliable for the four or five biggest tides present. In order to fully distinguish between the minor components, additional local time coverage will be required.

MCD Results

The MCD output is obtained at the L_s corresponding to the midpoint of the date range of each case. To emulate the observations from Case 1A, the model is sampled at three latitudes: 55, 60 and 65° N. The longitudinal resolution of the samples is fixed to the same grid size as that of the model itself, which is 5.625°, whereas the local times are sampled at 0.5 hr steps.

To capture the EUVM observations, the MCD output was further constrained by sampling local times between 0.5-3.3 hr to match the observations. The temperatures obtained from MCD are then analyzed using the same techniques discussed in Section 3.1. MCD is sampled at the same four altitudes as the EUVM observations, as shown in Figure 6e, where a clear wave signature with two peaks can be seen at all altitudes. The altitude of 155 km is chosen as the reference for comparison across all cases due to the absence of EUVM temperature data below 145 km for Cases 3A and 3B. From the wave amplitudes in Table 2, at 155 km we see that the wavenumber-2 component is dominant, which is in good agreement with the observations. It is important to note that, although there is agreement in the dominant wavenumber, the MCD-derived wave amplitudes are significantly underestimated and the mean temperatures do not agree very well.

In addition to comparing MCD sampled at the exact locations of the observations, a spectral decomposition is also performed using the full MCD longitude-local time sampling at the same latitudes and altitudes as EUVM. The Fourier decomposition of the MCD-derived temperature field for Case 1A at 145 and 155 km is shown in Figures 6c and 6d. It is evident that the semidiurnal component SE1 is dominant at both altitudes. At 145 km, the S0 tide is observed as the second strongest. The same components are also present at 155 km, with SE1 still being dominant.
To best replicate the conditions of the MCS observations, in addition to sampling MCD at the same latitudes as Case 1A, the same altitude of 76 km is chosen, following which the MCD-derived temperatures are averaged and differenced using ∼1 hr LT samples that are separated in local time by ∼11 hr, as described in Section 3.2. The MCD-derived wave amplitudes in Table 2 show that the wavenumber-3 component is strongest, agreeing well with the observations. However, the contribution from the diurnal component is slightly greater than that from the semidiurnal component, which is in contrast with the observations, which show a clear dominance of the semidiurnal component. The mean temperatures from the MCD seem to be in good agreement with the observations at 76 km.

Changes in amplitude and phase with altitude are shown in Figure 7 for (a) the wavenumber-2 and -3 LT summed and differenced MCS temperatures, (b) the MCD-derived wavenumber-2 and -3 LT summed and differenced temperatures, and (c) the individual tides S0, SE1, DE1, and DE2 obtained from fitting to all available LTs as described in Section 3.2. Figures 7a-7d show the phase as a function of altitude; an eastward tilt is seen in the LT summed wavenumber-2 and -3 temperature fields, which is less evident in the LT differenced fields, especially at the higher altitudes. While the model phases also show this eastward tilt in both the LT summed and differenced fields, they seem to be offset from the observations. Figures 7e-7h show the amplitude as a function of altitude, where the derived wave amplitude increases with altitude above 60 km in the LT sum field and peaks near 76 km. Figure 5a indicates that the eastward propagating SE1 and the zonally symmetric S0 are the primary contributors to the wavenumber-2 and -3 structures in the LT summed fields, whereas DE1 and DE2 are the main contributors to these wave signatures in the LT differenced fields. In addition to the most likely tides listed above, MCD amplitudes of the wavenumber-2 and -3 components reconstructed from the model-derived LT sum and difference temperatures are computed. First, we compare the MCS LT summed and differenced amplitudes to the MCD amplitudes and see that the best agreement is for the LT summed wavenumber-3 component, which is also the dominant component. The amplitudes of the LT summed wavenumber-2 component are underestimated by the model. Model-derived amplitudes for the LT differenced wavenumber-2 component agree within the fitted uncertainties at most altitudes, whereas the amplitudes of the LT differenced wavenumber-3 component are overestimated. Second, we compare the amplitudes of each individual tide obtained by fitting to all available MCS data, which are assumed to be the underlying contributors to the observed longitudinal wave structure. The amplitude of the tide most likely to contribute to a particular wavenumber component is compared to the summed and differenced fields, as seen in Figures 7e-7h. The amplitude of the SE1 tide is remarkably similar to that of the wavenumber-3 component (in the LT sum). This coherence of the amplitudes with altitude from both methods provides additional confidence in the presence of the SE1 tide.

Discussion

This section begins with a detailed discussion of the results from Case 1A, followed by a summary of the results for the remaining cases and a comparison to observations and model results at low latitudes.
Observations and Model Results for Case 1A

From Table 2, it is seen that the wavenumber-2 component is strongest in observations of the upper atmosphere. Assuming that this wavenumber-2 structure arises from vertically propagating tides, the wave amplitude seems to increase with altitude by a factor of 5 or 6 between 76 and 155 km, which is consistent with vertical propagation, given that density decreases with altitude. While the dominant wavenumber seems to change from the wavenumber-3 to the wavenumber-2 component between 76 and 155 km, at 76 km both components are present with comparable amplitudes. The inference of vertical propagation must be made with caution, since there are no observations between 90 and 130 km. Furthermore, it is important to note that while the change in dominant wavenumber from the middle to the upper atmosphere could be the result of differences in the upward propagation of one tide versus another, another viable mechanism is in situ wave generation in the upper atmosphere (Forbes et al., 2020).

MCD sampled at the same locations (latitude, local time, L_s) as both the MCS and EUVM observations agrees with them on the dominant wavenumber component but not on the derived wave amplitudes. While the wave amplitudes seem to grow with altitude in both the observations and the MCD (see Table 2 and Figure 6), the growth in wave amplitude is underestimated in the MCD output. Note that the EUVM observations have larger uncertainties at the lower end of the altitude range due to the use of a single scale-height reference atmosphere in the retrieval, which may inflate the uncertainties in the computed wave amplitudes (Thiemann et al., 2018).

Dominant Wavenumber Varying With Local Time

When comparing Cases 1A and 1B, the dominant wavenumber is expected to remain consistent. This is due to the mere 4° L_s difference between the two cases and the fact that a local time shift would result in a phase change, not a change in dominant wavenumber (England et al., 2016; Withers et al., 2011). Observations from the upper atmosphere indicate that the wavenumber-2 component is strongest in both these cases, in line with this expectation. While MCD results also identify wavenumber-2 as strongest in both cases, an equally strong wavenumber-3 is seen in Case 1B (Table S1 in Supporting Information S1). Furthermore, observations from Cases 2A and 2B reveal a change in the dominant wavenumber with local time from the wavenumber-3 to the wavenumber-1 component, as seen in Tables S2 and S3. Similarly, in Cases 3A and 3B, Tables S4 and S5 show that the dominant wavenumber changes from the wavenumber-3 to the wavenumber-2 component. To understand this change in dominant wavenumber with local time, the full diurnal cycle of MCD (see Figure 8) is examined alongside the Fourier decomposition shown in Figure 6. From Figures 8c and 8d, a change in wave structure with local time is seen between 2 a.m. and 2 p.m.
LT, where the model temperatures show a wave signature with two peaks at the former local time and three peaks at the latter. The Fourier decomposition shown in Figures 6c and 6d reveals the presence of diurnal, semidiurnal, and terdiurnal nonmigrating tides in the upper atmosphere that all produce wavenumber-2 structures, in addition to the dominant SE1 tide, which gives rise to the wavenumber-3 structure. While the amplitudes of the wavenumber-2-producing tides may individually be smaller than that of the SE1 tide, these tides may interfere at certain local times, giving rise to a dominant wavenumber-2 component rather than the wavenumber-3 component produced by the SE1 tide, which dominates at other local times. This implies a potentially important limitation of using data from a single local time to infer the dominant tides, as has been commonly done in previous studies.

Diurnal and Semidiurnal Tides at High Latitudes

In the middle atmosphere, analysis of MCS observations suggests that the dominant wavenumber arises in the LT sum, which is indicative of a contribution from semidiurnal tides, although stationary planetary waves may also be present (see Figure 4b and Table 2). Conversely, analysis of MCD suggests that the diurnal component is dominant, although the semidiurnal component is nearly as strong (see Table 2). Analysis of MCS data from all local times, shown in Figure 5, suggests the presence of both semidiurnal and diurnal tides, with the eastward propagating semidiurnal tide SE1 dominant at 76 km. The MCD FFT at this altitude shows that both the SE1 and DE2 tides are dominant and have comparable amplitudes, followed closely by DE1 (Figures 6a and 6b). This is consistent with the results obtained by fitting to MCS observations from all available local times, but at 72 km and above SE1 begins to dominate. Focusing on the upper atmosphere, the MCD FFT suggests that the semidiurnal tide SE1 is consistently the strongest, but diurnal and terdiurnal components are also present with relatively smaller amplitudes.

Stationary Planetary Waves

Figure 5 also indicates the presence of both SPW2 and SPW3, although with relatively small amplitudes, assuming the fit accurately captures the underlying variability. In contrast with these observations, the MCD FFT shows that both SPW2 and SPW3, while present, have negligible amplitudes for Case 1A, as can be seen in Figure A3.

Dissipation of the Diurnal Tide

The MCD FFT shows that while the eastward propagating semidiurnal tide SE1 grows in amplitude with increasing altitude, the diurnal tides DE1 and DE2 diminish significantly (see Figure 6). The zonal mean zonal wind field is an important parameter in determining the vertical propagation characteristics of tides. If favorable background wind conditions exist, the eastward propagating diurnal tide may propagate to the upper atmosphere at high latitudes (Ekanayake et al., 1997; Forbes et al., 2001). Particularly for the eastward propagating tides, propagation is favored in the westward wind regime, where Doppler shifting to higher frequencies leads to an increase in vertical wavelength and therefore less susceptibility to dissipation (Forbes et al., 2020). The decrease in the amplitude of the diurnal tides noted here suggests that these tides do not propagate to the upper atmosphere. Further, examining Figure 7d from Forbes et al.
(2020), which shows the zonal mean zonal wind in MCD for L_s 60-90°, it is seen that at 60°N and between 80 and 120 km the MCD wind strength is not sufficient to favor vertical propagation of the diurnal tide, even though the winds are westward.

Terdiurnal Tides at High Latitudes

Middle atmospheric observations and MCD results show that TW1 could be present with very small amplitude and that it grows with altitude above 76 km. The MCD FFT in Figure 6 not only reveals that the TW1 tide gains amplitude with altitude above 76 km but also highlights its presence in the upper atmosphere, as seen in panels (c) and (d). While upward propagation is one mechanism for TW1 to reach thermospheric altitudes, it may also be generated via nonlinear interactions between two primary waves. Such an interaction generates two secondary waves, one corresponding to the sum of the frequencies and wavenumbers of the interacting primary waves and the other corresponding to the difference between them (Teitelbaum & Vial, 1991). TW1 could arise from the nonlinear interaction between DW1 and S0 or between TW3 and SPW2 (Forbes et al., 2021). Forbes et al. (2021) analyzed MCD and identified TW1 as a contributor to the wave-2 structure observed in TGO densities during the polar phase of the orbit. The authors also identified the presence of DW1 and S0 in MCD, indicating that TW1 was generated via the interaction between these two tides. While DW1 is not identified in this analysis, the MCD FFT in the upper atmosphere, as shown in Figures 6c and 6d, indicates the presence of both TW1 and S0, which is consistent with the interpretation in Forbes et al. (2021). On the other hand, the fit to MCS observations at all available local times in Figure 5 suggests that both S0 and SPW2 are present with comparable amplitudes. This is perhaps in contrast with MCD, indicating that the interaction between SPW2 and TW3 could be the origin of TW1. Recently, Forbes et al. (2023) analyzed NGIMS data and revealed that TW3 is present with amplitudes of ≲10% near 180 km.

Summary of Results From All Cases

In the upper atmosphere, near 60°N, observations from Cases 1A and 1B indicate a dominant wavenumber-2 component, although other waves may be present. Near 66°N, Case 2A shows a strong wavenumber-3 component, whereas Case 2B indicates a strong wavenumber-1 component, with wavenumber-3 the second strongest. Lastly, near 77°N, Case 3A shows a strong wavenumber-3 component, whereas Case 3B shows a strong wavenumber-2 component. While the dominant wavenumber seems to change with latitude based on the cases analyzed here, it is hard to tie a specific latitude to a specific wavenumber, since the strongest wave seems to vary with local time in the upper atmosphere. Moreover, since each of the case pairs is sampled in a different season, it is hard to separate seasonal from latitudinal variation.

Unlike in the upper atmosphere, a change in local time does not seem to affect the middle atmosphere, where observations indicate that the wavenumber-3 component is the strongest for Cases 1A, 1B, 3A, and 3B and the wavenumber-2 component is strongest in Cases 2A and 2B.
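The frequency-wavenumber bookkeeping for these candidate parent pairs is easy to verify with the (n, s) labels used here; a nonlinear interaction produces secondary waves at the sums and differences of the primary frequencies and wavenumbers:

(n_1, s_1) \times (n_2, s_2) \;\to\; (n_1 \pm n_2,\; s_1 \pm s_2),

\text{DW1}\,(1,1) + \text{S0}\,(2,0) \to (3,1) = \text{TW1},
\qquad
\text{TW3}\,(3,3) - \text{SPW2}\,(0,2) \to (3,1) = \text{TW1},

\text{D0}\,(1,0) + \text{S0}\,(2,0) \to (3,0) = \text{T0},
\qquad
\text{DE2}\,(1,-2) + \text{SW2}\,(2,2) \to (3,0) = \text{T0}.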
MCS observations from all available LTs for Case 1B (Figure S6 in Supporting Information S1), while largely similar to Case 1A, indicate a relatively stronger presence of SPW2 above 60 km and a weaker presence of the DE1 tide. For Cases 2A and 2B (Figures S7 and S8 in Supporting Information S1), the SE1 and S0 tides dominate above 60 km, but SPW2 is significant, particularly at altitudes below 60 km. This is consistent with previous results from Guzewich et al. (2012), which found that SPW2 dominates at all latitudes and altitudes in the northern hemisphere based on MCS observations. The MCD FFT for these cases also reveals the presence of both SPW1 (n = 0, k = 1) and SPW2 (n = 0, k = 2) in the middle atmosphere, but with relatively smaller amplitudes (see Figures S2 and S3 in Supporting Information S1). Forbes et al. (2020) showed that SPWs are highly variable with L_s, which is broadly consistent with our results at high northern latitudes from observations at a fixed location.

In the upper atmosphere, for Cases 2A and 2B, the MCD FFT shows that SE1 is strongest, but there is a significant presence of T0 as well. The results for Cases 2A and 2B need to be interpreted with caution due to the smaller number of data points resulting from the exclusion of days with high solar EUV. For Cases 3A and 3B, the MCD FFT shows that the S0 tide, which gives rise to a wavenumber-2 structure, is the largest, followed by T0, D0, and SE1 (see Figures S4 and S5 in Supporting Information S1). Since observations of Case 3A suggest that wavenumber-3 is dominant, the disagreement with MCD is likely due to the presence of the SE1 and T0 tides, which give rise to a wavenumber-3 structure when sampled at local times at which they are in phase.

The relative significance of the terdiurnal tides varies among the different cases considered. The T0 tide is prominent in Cases 3A and 3B at all altitudes, especially in the upper atmosphere. T0 is also prominent at upper atmospheric altitudes in Cases 2A and 2B. Forbes et al. (2021) noted the presence of the zonally symmetric T0 tide in the MCD in the high-latitude summer hemisphere, contributing to the wavenumber-3 signature observed in the TGO aerobraking density. The T0 tide could be produced through the nonlinear interaction between the DE2 and SW2 or the D0 and S0 tides (Forbes et al., 2021). The MCD FFT for Cases 3A and 3B indicates the presence of S0 and D0, suggesting that the generation of T0 from these tides is likely in these cases. For Cases 2A and 2B, while S0 is clearly present, D0 and DE2 are also present, although the latter has a very small amplitude. While either interaction is possible, it seems more likely that S0 and D0 lead to the generation of T0 in these cases as well.

Comparison to Low Latitude

The tidal perturbations in the upper atmosphere at high northern latitudes seem to be dominated by a mix of wavenumber-2 and -3 components, although wavenumber-1 and -4 components may be present in some cases. These wavenumbers are also present in the upper atmospheric observations at low latitudes; however, the tides responsible for producing these longitudinal variations differ from those at low latitudes. It is clear from the observations of the middle atmosphere that the semidiurnal tide is strongest, and this is in contrast with low latitudes, where the diurnal tide is consistently strongest, as noted by Kumar et al.
(2022). MCS observations suggest that the major contributors are likely the SE1 and DE2 tides, although some DE1, S0, SPW2, and TW1 may be present. This is in contrast with low latitudes, where the DE1 tide is identified as a major contributor north of the equator and the DE2 tide south of the equator (Kumar et al., 2022). Furthermore, middle atmospheric observations at high latitudes suggest the presence of terdiurnal tides, which were not identified at low latitudes.

In terms of capturing the dominant wavenumber, the model exhibits poorer agreement with the observations at high latitudes than reported previously at lower latitudes (Kumar et al., 2022). This disparity can be attributed to the comparable influence of diurnal, semidiurnal, and terdiurnal tides at high latitudes, in contrast to the lower latitudes, where the diurnal tides play a more dominant role. Furthermore, identifying the dominant wavenumber itself is complicated when diurnal, semidiurnal, and terdiurnal tides are all present.

Based on the comparison with observations from the middle atmosphere, the model appears to capture most of the underlying tides correctly; however, additional observations are required to more precisely constrain their amplitudes, particularly in the upper atmosphere. The amplitudes of the tides in the model often compare favorably to those observed in the middle atmosphere but not in the upper atmosphere. Furthermore, the phases predicted by the model often do not align with the observations. The prediction of phases presents a challenge, especially when compared with observations at a single local time.

Conclusions

This study utilizes concurrent observations from the EUVM and MCS instruments on board the MAVEN and MRO spacecraft, together with model output from MCD, to investigate the tides that reach the high-latitude upper atmosphere. Focusing on six cases at high northern latitudes, the dominant signatures identified are the wavenumber-2 and -3 components. Although the dominant wavenumbers are similar to those observed at low latitudes, the underlying tides contributing to the wave structure are different. While there is a clear dominance of the diurnal tides at low latitudes, the high northern latitudes see comparable importance among the diurnal, semidiurnal, and terdiurnal tides.

An important and notable result from this study is that the dominant wavenumber can be misleading when only a single local time is considered. Furthermore, even with two sampled local times, it remains challenging to accurately identify terdiurnal components, which could be important at the higher latitudes considered in this study. Therefore, obtaining a wider range of observations with improved local time sampling of the Martian atmosphere is imperative, particularly in the high-latitude regions.
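The single-local-time caveat above is easy to reproduce in a toy calculation: one wave-3 tide (SE1-like) plus two weaker wave-2 tides (S0- and DE1-like) whose relative phase drifts with local time. This is a minimal sketch; all amplitudes below are invented for illustration.

import numpy as np

W = 2 * np.pi / 24.0
lon = np.radians(np.arange(0.0, 360.0, 5.625))

def field(lt):
    # fixed-LT form of a tide (n, s): cos(n*W*lt + (s - n)*lon)
    se1 = 4.0 * np.cos(2 * W * lt - 3 * lon)  # (n=2, s=-1) -> wave 3
    s0 = 2.5 * np.cos(2 * W * lt - 2 * lon)   # (n=2, s=0)  -> wave 2
    de1 = 2.5 * np.cos(1 * W * lt - 2 * lon)  # (n=1, s=-1) -> wave 2
    return se1 + s0 + de1

def wave_amps(T, kmax=4):
    c = np.fft.fft(T) / T.size
    return {k: 2.0 * abs(c[k]) for k in range(1, kmax + 1)}

for lt in (0.0, 12.0):
    amps = wave_amps(field(lt))
    dom = max(amps, key=amps.get)
    print(f"LT = {lt:4.1f} hr: dominant wave-{dom}, "
          + ", ".join(f"s={k}: {a:.2f}" for k, a in amps.items()))

At LT = 0 the two wave-2 tides interfere constructively (combined amplitude 5.0 > 4.0) and wave-2 dominates; at LT = 12 they cancel exactly and wave-3 dominates, even though no individual tide changed amplitude.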
Nonmigrating tides can account for much of the longitudinal variability at the high northern latitudes, although stationary planetary waves may also be present. From the observations alone, the middle atmosphere seems to be dominated by the wavenumber-3 component in four of the six cases. In all these cases, the semidiurnal component is stronger than the diurnal component, indicating that the SE1 tide is the origin of the wavenumber-3 signature. While the model also sees SE1 as the strongest contributor, the diurnal tides DE1 and DE2 and the terdiurnal tides T0 and TW1 are also present. In the upper atmosphere, observations indicate a strong wavenumber-2 component in half of the cases, with two of the remaining cases dominated by the wavenumber-3 component and one case by the wavenumber-1 component. Model results in the upper atmosphere suggest that the contributions from the diurnal components are very small and that the semidiurnal and terdiurnal components take prominence.

The model suggests that the diurnal tides, which have a strong presence in the middle atmosphere, diminish significantly as they ascend into the upper atmosphere. This indicates that the vertical propagation of the diurnal tides may be inhibited at high northern latitudes, which is likely related to the zonal mean winds in this region. However, there are no wind measurements in this region to corroborate the model's predictions. In general, the high northern latitudes seem more variable, and more observations are required to determine and distinguish the characteristics of the tides, particularly the terdiurnal tides and stationary planetary waves.

Appendix A: Deducing Tidal Amplitudes and Phases by Fitting to MCS Data at All Available Locations

MCS observations are typically measured "in-track", where the forward limb is viewed in the direction of the spacecraft's motion. Since the MRO spacecraft is in a sun-synchronous orbit, this means that the observations are locked near two local times that are roughly ∼12 hr apart at low and mid-latitudes. The method outlined in Section 3.2 uses this difference in local time to good advantage, allowing for the separation of the diurnal tides from the semidiurnal tides. One major caveat of this method is that stationary planetary waves may be aliased with the semidiurnal tide and other higher-order tides. Kleinböhl et al. (2013) utilized measurements obtained from observation campaigns that covered additional local times to analyze semidiurnal tides. These observations were made by viewing the limb 90° to the left and right of the spacecraft's motion, called "cross-track" measurements, or by viewing the limb anywhere between 0 and 90°, referred to as "off-track" measurements. The resulting observations provide additional local time coverage, ranging from six local times at low to mid-latitudes to seven or eight local times at high latitudes.
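The bin-and-fit machinery used throughout this appendix can be sketched compactly. A minimal Python version follows (not the authors' code: the bin sizes and the (n, s) labels follow the text, while the function names, data arrays, and least-squares details are illustrative assumptions):

import numpy as np

OMEGA = 2 * np.pi / 24.0  # rad per hour of local time (one sol = 24 hr)

def bin_mean(lon_deg, lt_hr, temp, dlon=30.0, dlt=0.5):
    """Average temperatures in a dlon x dlt longitude-local time grid."""
    lon_edges = np.arange(0.0, 360.0 + dlon, dlon)
    lt_edges = np.arange(0.0, 24.0 + dlt, dlt)
    sums, _, _ = np.histogram2d(lon_deg, lt_hr,
                                bins=[lon_edges, lt_edges], weights=temp)
    counts, _, _ = np.histogram2d(lon_deg, lt_hr, bins=[lon_edges, lt_edges])
    with np.errstate(invalid="ignore", divide="ignore"):
        return sums / counts  # NaN where a bin is empty

def fit_tides(lon_deg, lt_hr, temp, tides):
    """Simultaneous least-squares fit of tides (list of (n, s), s > 0 westward).
    The model is T = T0 + sum_k A_k cos(n_k*OMEGA*lt + s_k*lon - phi_k).
    Returns {(n, s): (amplitude, phase)}."""
    lon = np.radians(lon_deg)
    cols = [np.ones_like(lon)]
    for n, s in tides:
        theta = n * OMEGA * lt_hr + s * lon
        cols += [np.cos(theta), np.sin(theta)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), temp, rcond=None)
    return {(n, s): (np.hypot(coef[1 + 2*k], coef[2 + 2*k]),
                     np.arctan2(coef[2 + 2*k], coef[1 + 2*k]))
            for k, (n, s) in enumerate(tides)}

A call such as fit_tides(lon, lt, T, [(2, 0), (2, -1), (1, -1)]) corresponds to a fit assuming S0, SE1, and DE1, as in the control experiments described next.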
For Case 1A analyzed here, the additional MCS local times sampled, shown in Figure A1, indicate that there are ∼8 local times between 55 and 65°N. The data selected from these local times are binned in a longitude-local time grid of 30° by 0.5 hr. These data are then averaged, and the corresponding temperature field is shown in Figure A2a. The first control case is an idealized sum of the S0 and SE1 tides generated at the same locations as the MCS sampling and is shown in Figure A2b. The selection of tides used to generate the simulated data is informed by the results of the analysis of the MCS local time sum and difference temperatures described in Section 3.2. A model that describes the sum of the S0 and SE1 tides is fitted to the simulated data. The fit parameters (i.e., the amplitudes and phases of each tide) are used to reconstruct the data at all longitudes and local times, as shown in Figure A2c. The derived amplitudes and phases for the S0 and SE1 tides are within 5%-10% of the simulated amplitudes and phases, as shown in Table A1.

Another fit is performed on the same simulated data but with a fit model that includes the DE1 tide in addition to the S0 and SE1 tides. Since the simulated data do not contain the DE1 tide, the amplitude and phase associated with it are expected to be zero. As seen in Tables A1 and A2, the DE1 amplitude is nearly zero, though a phase is still associated with it; since the amplitude is essentially zero, this phase can be neglected. To test whether this method is able to distinguish between eastward and westward propagating tides that give rise to the same zonal variation, we apply the same steps outlined above to a second test case consisting of a simulated field that includes the S0, SE1, and SW4 tides. Both the eastward propagating SE1 and westward propagating SW4 tides can contribute to the wavenumber-2 structure in a fixed local time reference frame.

The results of the fits to the simulated data are shown in Tables A3 and A4. Both the amplitudes and the phases are within 5%-10% of the expected values. As in the first test case, when fitting with the DE1 tide in addition to the other tides, its amplitude is nearly zero, and the phase associated with it can be neglected.

A third test case with simulated data, which includes the terdiurnal tide TW1 along with the S0, SE1, and SW5 tides, is analyzed to evaluate the method's ability to capture the terdiurnal tide. The amplitude and phase of TW1 are recovered to within 6% of the expected values, as seen in Tables A5 and A6. While the method performs reasonably well on the simulated data in the three test cases, a comparison is also made between the amplitudes derived from the MCD temperatures sampled at all available MCS locations and those obtained from an FFT performed on the entire diurnal cycle of MCD for Case 1A, to establish a degree of uncertainty in the fitted amplitudes, as shown in Figure A3.

The eastward and westward propagating tides DE1, DE2, SE1, DW4, DW5, SW4, SW5, and TW1, the zonally symmetric tides S0 and T0, and the stationary planetary waves SPW2 and SPW3 are fit to the MCD data. The amplitudes derived from the fit for each tide are shown as a function of altitude in Figures A3a and A3b. These amplitudes are compared to those obtained from the FFT, shown in Figures A3c and A3d; at 76 km, the amplitude of SPW2 is overestimated by ∼1 K and that of SE1 underestimated by ∼0.5 K (absolute errors). The absolute errors on the amplitudes of all other tides range between 0.08 and 0.4 K.
Therefore, at 76 km in the MCD output, the tides with the most significant amplitudes are DE2, SE1, DE1, and S0, in descending order of amplitude. Some TW1 and SPW2 may also be present, but these must be interpreted with caution since, in the model, they have very small amplitudes at this altitude. A detailed discussion of the model-observation comparison can be found in Section 3.2.

• This study presents the first analysis of tides from solar occultation observations at Mars
• The dominant wavenumber can change with local time due to the presence of tides with multiple periods at high latitudes
• Diurnal tides in the model diminish with altitude, suggesting the influence of mean winds on vertical propagation

Figure 1. Data from MAVEN-EUVM as functions of local time and latitude. (a) All available data from the years 2015, 2017, and 2018; the colors represent data from different years. (b, c) The selected data for the six cases, labeled 1A-1B, 2A-2B, and 3A-3B. (d) The colors distinguish different months of the Earth year; the cases selected for analysis and their L_s are marked.

Figure 2. (a) Residuals computed by subtracting the 27-day rolling mean from the daily maximum irradiance obtained from MAVEN EUVM for the year 2017. The dash-dot lines represent the 2σ standard deviation, and the vertical black dashed lines mark the region where Cases 1A and 1B lie within this time period. (b) Distribution of EUVM data for Case 1A as a function of longitude and latitude. (c) Temperature derived from EUVM data for Case 1A as a function of altitude and longitude, averaged over 55-65° latitude and binned in 5 km altitude and 30° longitude steps. The white region centered at 285° longitude corresponds to a gap due to insufficient data in the two highest altitude bins.

Figure 3. MAVEN-EUVM temperatures as a function of longitude sampled at four altitudes from 135 to 165 km. The blue dots represent the derived temperatures, the black dashed line represents the wavenumber 1-4 fit to the data, and the blue shaded region is the ±1σ uncertainty in the fit.

Figure 4. Reconstruction of the sum and difference of the wavenumber-2 and -3 components of the fits to the MRO-MCS temperature data for Case 1A. Separate wavenumber 1-4 fits are performed on the temperature data at each 4 km altitude step. Panels (a) and (b) show the temperature "sum" of the ascending and descending nodes, whereas panels (c, d) show the "difference" between them. Panel (e) shows the reconstruction using all fitted parameters for the LT summed temperatures at 72, 76, and 80 km. Panel (f) is the same as panel (e) but for the LT differenced temperatures.

Figure 5. The amplitudes of the eastward and westward propagating components and the stationary planetary waves as a function of altitude, derived from wave fits to the MCS data from all available local times for Case 1A. The fits are performed separately at each altitude bin; the dotted lines indicate the terdiurnal components, the dashed lines the semidiurnal components, the solid lines the diurnal components, and the dash-dot lines the stationary planetary waves. (a) The amplitudes of the eastward propagating semidiurnal tide SE1, the diurnal tides DE1 & DE2, the zonally symmetric tides S0 and T0, and the stationary planetary waves SPW2 & SPW3. (b) Same as panel (a) but for SW4, SW5, DW3, DW4, and TW1.
Figure 6. Panels (a-d) show the spectral decomposition of the full MCD temperature distribution for Case 1A. Panels (a, b) show Fourier components at the MCS altitudes of 76 and 80 km, whereas panels (c, d) show Fourier components at the EUVM altitudes of 145 and 155 km. The most prominent tides in these panels are SE1 (n = 2, k = 3), DE2 (n = 1, k = 3), DE1 (n = 1, k = 2), S0 (n = 2, k = 2), and TW1 (n = 3, k = 2). Panel (e) shows the MCD output sampled at the EUVM locations as a function of longitude at four distinct altitudes from 135 to 165 km (similar to the fits to EUVM observations in Figure 3). The blue dots represent the model-derived temperatures and the black dashed line represents the wavenumber 1-4 fit to the data.

Figure 7. Comparison of the wave phases (top four panels) and amplitudes (bottom four panels) derived from the MCS LT summed and differenced temperatures, MCD, and the fit to MCS temperatures from all available local times for Case 1A. Panels (a, b) show the phase as a function of altitude derived for the wave-2 and -3 LT summed temperatures, compared with MCD. Panels (c, d) are the same as panels (a, b) but for the LT differenced temperatures, compared with MCD. Panels (e, f) compare the derived amplitudes as a function of altitude of the wave-2 and -3 LT summed temperatures with both MCD and the S0 and SE1 amplitudes derived from fitting to all available LTs. Panels (g, h) are the same as panels (e, f) but for the LT differenced temperatures, compared with both MCD and the DE1 and DE2 amplitudes (derived from fitting to all available LTs). The blue shaded region is the ±1σ uncertainty in the fit to the MCS data.

Figure 8. Longitude-local time structures of MCD temperatures sampled at the same latitudes and L_s as Case 1A (55-65°N and L_s 82°). The temperatures are binned and averaged in a longitude-local-time grid with a resolution of 30° by 0.5 hr. (a) The full diurnal cycle at the altitude of the MCS observations, 76 km. (b) Same as panel (a) but at the altitude of the EUVM observations, 155 km. (c) Temperature as a function of longitude sampled at 2 and 14 hr LT from panel (a) at 76 km. (d) Same as panel (c) but for panel (b) at 155 km.

Figure A1. The MCS data distribution for Case 1A as a function of local time and longitude. The colors indicate the density of points at a particular location. The black box shows the data selected between 55 and 65°N and at all available local times.

Figure A2. (a) All available MCS-derived temperatures, binned and averaged, shown as a function of longitude and local time for Case 1A at 76 km. The data are first binned in a longitude-local time grid of 30° × 0.5 hr and then averaged. (b) Simulated tidal field consisting of the S0 and SE1 tides, generated at the same locations as the MCS data. (c) Reconstruction of the tidal field obtained by fitting the simulated field in (b) with a model assuming S0 and SE1. (d) Same as panel (c) but with the model assuming S0, SE1, and DE1. (e) A slice of the reconstruction shown in (c) at 3 hr LT. (f) Same as panel (e) but for (d).
Figure A3. Compares the fit to the model sampled at the locations of the MCS observations with the Fourier decomposition using the full diurnal cycle of the model. Panels (a, b) show the amplitude as a function of altitude from fitting MCD sampled at the MCS observations at all available local times between 55 and 65°N (Case 1A) for the eastward propagating DE1, DE2, and SE1 tides, the zonally symmetric tides S0 and T0, and the westward propagating DW3, DW4, SW4, SW5, and TW1 tides. Panels (c, d) show the amplitudes of the tides obtained from the Fourier decomposition using the entire diurnal cycle, representing the temperatures at all longitudes and local times for Case 1A.

Table 2. Amplitudes Derived From the Wavenumber 1-4 Fit to the EUVM and MCS Data for Case 1A Compared With MCD Amplitudes for the Same Sampling as the Observations. Note. Amplitudes are computed from the average over 155-165 km altitude for EUVM and are given as absolute temperature amplitudes. For MCS, the amplitudes from the fitted local time sum and difference are reported at 76 km. Uncertainties are reported only for the observations. The bold values indicate the dominant wavenumber for each observation and model result.

Table A1. Amplitudes of the Simulated Data Compared With a Fit Using a Model Assuming S0 and SE1

Table A2. Phases of the Simulated Data Compared With a Fit Using a Model Assuming S0 and SE1 and a Model Assuming S0, SE1, and DE1

Table A3. Amplitudes of the Simulated Data Compared With a Fit Using a Model Assuming S0 and SE1, a Model Assuming S0, SE1, and SW4, and, Lastly, a Model Assuming S0, SE1, SW4, and DE1

Table A5. Amplitudes of the Simulated Data Compared With a Fit Using a Model Assuming S0 and SE1, a Model Assuming S0, SE1, and SW5, and, Lastly, a Model Assuming S0, SE1, SW5, TW1, and DE1

Table A6. Phases of the Simulated Data Compared With a Fit Using a Model Assuming S0 and SE1, a Model Assuming S0, SE1, and SW5, and, Lastly, a Model Assuming S0, SE1, SW5, TW1, and DE1
16,392.6
2024-04-01T00:00:00.000
[ "Environmental Science", "Physics" ]
Effective field theories

A pedagogical introduction to low-energy effective field theories. In some of them, heavy particles are "integrated out" (a typical example is the Heisenberg-Euler EFT); in others, heavy particles remain but some of their degrees of freedom are "integrated out" (the Bloch-Nordsieck EFT). A large part of these lectures is, technically, in the framework of QED. QCD examples, namely decoupling of heavy flavors and HQET, are discussed only briefly. However, effective field theories of QCD are very similar to the QED case; there are just some small technical complications: more diagrams, color factors, etc. The method of regions provides an alternative view of low-energy effective theories; it is also briefly introduced.

Introduction

We don't know all physics up to infinitely high energies (or down to infinitely small distances). Therefore, all our theories are effective low-energy (or large-distance) theories (except The Theory of Everything, if such a thing exists).

There is a high energy scale M (and a short distance scale 1/M) where an effective theory breaks down. We want to describe light particles (with masses m_i ≪ M) and their interactions at low energies, i.e., with characteristic momenta p_i ≪ M (or, equivalently, at large distances ≫ 1/M). To this end, we construct an effective Lagrangian containing the light fields. Physics at small distances ∼ 1/M produces local interactions of these fields. The Lagrangian contains all possible operators (allowed by the symmetries of our theory). Coefficients of operators of dimension n + 4 are proportional to 1/M^n. If M is much larger than the energies we are interested in, we can retain only renormalizable terms (dimension ≤ 4), and, perhaps, a power correction or two. For more information about effective field theories see the textbook [1].

Heisenberg-Euler EFT, heavy flavors in QCD

A low-energy effective Lagrangian was first constructed [2] to describe photon-photon scattering at ω ≪ M. It contained dimension-8 operators made of four F_{μν}. Later, an effective Lagrangian for an arbitrarily strong homogeneous electromagnetic field was constructed [3]. It contained all powers of F_{μν} but no terms with derivatives. It is not quite an EFT in the modern sense: for each dimensionality, not all possible operators are included. In QCD with a heavy flavor of mass M, processes with characteristic energies ≪ M can be described by an effective Lagrangian without this flavor.

Photonia

Let's imagine a country, Photonia, in which physicists have high-intensity sources and excellent detectors of low-energy photons, but they don't have electrons and don't know that such a particle exists.¹ At first their experiments (Fig. 1a) show that photons do not interact with each other. They construct a theory, Quantum PhotoDynamics (QPD), with the Lagrangian

L = -(1/4) F_{μν} F^{μν} .    (2.1)

But later, after they have increased the luminosity (and energy) of their "photon colliders" and the sensitivity of their detectors, they discover that photons do scatter, though with a very small cross-section (Fig. 1b). They need to add some interaction terms to this Lagrangian.

Figure 1. Scattering of low-energy photons

There are no dimension-6 gauge-invariant operators, because F_μ^ν F_ν^λ F_λ^μ = 0 (this algebraic fact reflects C-parity conservation). All dimension-6 operators with derivatives reduce to (∂_μ F^{μν}) (∂^λ F_{λν}) plus full derivatives; this operator vanishes due to the equations of motion ∂_μ F^{μν} = j^ν = 0. On-shell matrix elements of such operators vanish; therefore, we may omit them from the Lagrangian without affecting the S-matrix.

¹ We indignantly refuse to discuss the question "What are the experimentalists and their apparata made of?" as irrelevant.
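Anticipating the dimension-8 operators introduced next, dimensional analysis already fixes the famous low-energy behavior of photon-photon scattering; a one-line estimate (a standard result, stated here for orientation, not spelled out in the text):

\mathcal{M}(\gamma\gamma \to \gamma\gamma) \sim \frac{\alpha^2 \omega^4}{M^4}
\quad\Longrightarrow\quad
\sigma \sim \frac{1}{\omega^2}\,|\mathcal{M}|^2 \sim \frac{\alpha^4 \omega^6}{M^8},

since each dimension-8 operator carries a coefficient ∼ α²/M⁴ and supplies four powers of the external photon energy ω. This ω⁶ rise of light-by-light scattering at low energies is a classic consequence of the effective Lagrangian.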
Interaction operators first appear at dimension 8:

O_1 = (F_{μν} F^{μν})^2 ,   O_2 = F_{μν} F^{νλ} F_{λρ} F^{ρμ} .    (2.4)

Hence the QPD Lagrangian which incorporates the photon-photon interaction is

L = -(1/4) F_{μν} F^{μν} + c_1 O_1 + c_2 O_2 ,    (2.5)

where the coefficients c_{1,2} ∼ 1/M^4, and M is some large mass (the scale of new physics). Of course, operators of dimensions > 8 can also be included, multiplied by higher powers of 1/M, but their effect at low energies is much smaller. Physicists from Photonia can extract the two parameters c_{1,2} from two experimental results, and predict the results of infinitely many measurements.

We are working at the order 1/M^4; therefore, the photon-photon interaction vertex can appear in any diagram at most once. The only photon self-energy diagram vanishes: (2.6). Therefore, the full photon propagator is equal to the free one. There are no loop corrections to the 4-photon vertex, and hence the interaction operators (2.4) don't renormalize.

Qedland

In the neighboring country Qedland physicists are more advanced. In addition to photons, they know electrons and positrons, and investigate their interactions at energies E ∼ M (M is the electron mass). They have constructed a nice theory, QED, which describes their experimental results.²

Physicists from Qedland understand that the QPD constructed in Photonia is just a low-energy approximation to QED. The coefficients c_{1,2} can be calculated by matching. We calculate the amplitude of photon-photon scattering at low energies in the full theory (QED), expanded in the external momenta up to the 4th order, and equate it to the same amplitude in the effective theory (QPD) (2.7). After expanding the QED diagrams in the external momenta, they reduce to the vacuum integrals (2.8). This QED scattering amplitude is finite at ε → 0. To reproduce it, the interaction term in the QPD Lagrangian (2.5) must be [2]

L_int = (α^2 / (180 M^4)) [ -5 (F_{μν} F^{μν})^2 + 14 F_{μν} F^{νλ} F_{λρ} F^{ρμ} ] .    (2.9)

² They don't know muons, but this is another story.

It is not difficult to calculate the two-loop correction to the Lagrangian (2.9). The two-loop scattering amplitude in QED reduces to the two-loop vacuum integrals (2.10) (−i0 is assumed in all denominators).

Coulomb potential

Let's suppose that physicists in Photonia have some classical (infinitely heavy) charged particles, and can manipulate them at their will. If a particle with charge −e moves along a world line l, the action contains the interaction term

S_int = e ∫_l dx^μ A_μ(x)

in addition to the photon field action. The integrand exp(iS) in the Feynman path integral thus contains a phase factor

W = exp( ie ∫_l dx^μ A_μ(x) )

called the Wilson line. The vacuum-to-vacuum transition amplitude in the presence of classical charges is thus the vacuum average of the corresponding Wilson lines.³ Suppose two charges e and −e stay at rest at some distance r during some (large) time T. The energy of this system is U(r⃗), the interaction potential of the charges. The vacuum transition amplitude is

e^{−iU(r⃗) T}    (2.13)

(we don't care what happens near the lower and upper ends of the Wilson rectangle because T ≫ r).

³ In Sect. 3.2 we'll see that the propagator of a heavy charged particle in the effective theory which describes its interaction with soft photons is the straight Wilson line.
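As a small cross-check of the potential computed in the next paragraphs, the momentum-space result U(q⃗) = −e²/q⃗² (2.18) Fourier-transforms back to the attractive Coulomb potential. A short sympy verification (my own, not from the lectures), using the standard Dirichlet integral:

import sympy as sp

x, e, r = sp.symbols('x e r', positive=True)

# Angular integration of exp(i q.r)/q^2 over d^3q/(2 pi)^3 reduces U(r) to
# U(r) = -(e^2/(2 pi^2 r)) * Integral(sin(x)/x, (x, 0, oo)), with x = q*r.
radial = sp.integrate(sp.sin(x) / x, (x, 0, sp.oo))
assert radial == sp.pi / 2  # the Dirichlet integral

U = -(e**2 / (2 * sp.pi**2 * r)) * radial
print(sp.simplify(U))  # -e**2/(4*pi*r): the Coulomb attraction of e and -e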
The zeroth-order term in the vacuum average of any Wilson loop is 1. It is convenient (though not necessary) to use the Coulomb gauge to calculate the first correction. In this gauge, there is the Coulomb photon with the propagator

D_{00}(q) = -1/q⃗^2    (2.14)

(it propagates instantaneously) and the transverse photon with the propagator (2.15). Wilson lines along the 0 direction interact only with Coulomb photons. The self-energy of the classical particle vanishes (2.16), because the particle propagates along time, and the Coulomb photon along space. Therefore, there is just one contribution at the order e^2 (2.17) (integration in τ gives T). Comparing it with 1 − iU(r⃗)T (2.13), we obtain the Fourier transform of the potential:

U(q⃗) = e^2 D_{00}(0, q⃗) = -e^2/q⃗^2 .    (2.18)

A dimension-6 operator with derivatives can appear in the QPD Lagrangian with a coefficient c ∼ 1/M^2 by dimensionality (2.22). This term produces a contribution to the photon self-energy (2.23). The aim of an effective theory is to reproduce the S-matrix, so we may neglect operators vanishing due to the equations of motion (in particular, the self-energy (2.23) vanishes at the photon mass shell q^2 = 0, but is needed in a virtual photon line exchanged between classical sources). Therefore, the term (2.22) reduces to c j^μ j_μ, where j^μ is the external (classical) current. For the classical charges e and −e, it leads to the contact interaction⁴

U_c(r⃗) = 2c δ(r⃗) .    (2.24)

What can physicists in Qedland say about the interaction potential between classical charged particles? In QED the photon self-energy is gauge invariant, because there are no off-shell charged external lines. We can use the covariant-gauge self-energy Π(q^2) in the Coulomb-gauge propagator (2.25):

U(q⃗) = e_0^2 D_{00}(0, q⃗) .

In macroscopic measurements the potential at q → 0 is obtained (2.26); here e_os is the charge in the on-shell renormalization scheme. The O(q^2) term in Π(q^2) determines the contact interaction constant c (2.24).

⁴ It is difficult to observe a δ-function potential in the interaction of classical charged particles. But this interaction is essential if the particles are quantum-mechanical: it shifts the energies of S-wave states.

Charge decoupling

Now we shall discuss the relation between the full theory (QED) and the low-energy effective theory (QPD) more systematically. All QPD quantities will be denoted by primes. In this theory the charge is not renormalized:

e'_0 = e'_os = e'(µ) .    (2.38)

The macroscopically measured charge is the same in QED and QPD:

e'_os = e_os .    (2.39)

The bare charges in the two theories are related by the bare decoupling coefficient (2.40), and the MS charges are related by the renormalized decoupling coefficient:

α'(µ) = ζ_α(µ) α(µ) .    (2.41)

The photon self-energy Π(q^2) in the 1-loop approximation is given by the electron loop. Expanding in q and using (2.8), we obtain Π(0), where M_0 is the bare electron mass. The O(q^2) correction can be used to obtain the contact interaction c (2.24); from now on we shall neglect it. The on-shell charge renormalization constant (2.29) follows. The renormalized decoupling coefficient ζ_α(µ) (2.41) must be finite, and hence the MS charge renormalization constant is determined. As a free bonus, we have obtained the QED β function (2.35): the negative sign corresponds to screening.⁵ The renormalized charge decoupling coefficient expressed via the MS renormalized α(µ) and the electron mass M(µ) is (2.48). The photon self-energy Π(0) at 2 loops (Fig. 2)
can be calculated using (2.10). In the 1-loop term we need the 1-loop Z_α (2.45) and the 1-loop MS mass renormalization constant Z_m, defined by M_0 = Z_m(µ) M(µ) (2.50). The 1-loop on-shell renormalization constant Z_m^os can be easily calculated using the on-shell integrals.

The photon self-energy Π(q^2) can be written as a dispersion integral with a positive spectral density. Therefore, the potential (2.25) up to 1 loop is a superposition of the Coulomb potential and Yukawa ones of various radii, with positive weights. The farther we are from the source, the more the Yukawa potentials die out, and the weaker is the interaction.

Both M_os and M(µ) are finite at ε → 0, and hence so is their ratio. The renormalized decoupling coefficient (2.41) must be finite, and hence we obtain the 2-loop MS charge renormalization constant (by the way, it gives β_1 = −4). The renormalized charge decoupling coefficient is (2.57). If we define M̄ as the root of the equation M(M̄) = M̄, then L = 0 at µ = M̄, and we obtain the decoupling coefficient at this scale; for other choices of µ ∼ M there will be a finite correction of order α. More details about decoupling can be found, e.g., in the review [4].

Qedland again

Physicists in Qedland suspect that QED is also only a low-energy effective theory. We know that they are right: muons exist.⁶ There are two ways in which they can search for new physics:

• by increasing the energy of their e+e− colliders in the hope of producing pairs of new particles;
• by performing high-precision experiments at low energies (e.g., by measuring the electron magnetic moment).

New physics can produce new local interactions of photons, electrons, and positrons at low energies, which should be included in the effective QED Lagrangian. We were lucky that the scale of new physics in QED, the muon mass M, is far away from the electron mass: M ≫ m_e. Contributions of muon loops to low-energy processes are also strongly suppressed by powers of α. Therefore, the prediction for the electron magnetic moment from the pure QED Lagrangian (without nonrenormalizable corrections) is in good agreement with experiment.

After this spectacular success of the simplest Dirac equation (without the Pauli term) for the electron, physicists expected that the same would hold for the proton, and that its magnetic moment would be e/(2M_p). No luck here. This shows that the picture of the proton as a point-like structureless particle is a poor approximation already at the energy scale M_p.

Heavy flavors in QCD

In QED, effects of decoupling of muon loops are tiny. Also, pion pairs become important at about the same energies as muon pairs, so that QED with electrons and muons is a model with a narrow region of applicability. In QCD, decoupling of heavy flavors is fundamental and omnipresent. It would be a huge mistake to use the full 6-flavor QCD at characteristic energies of order GeV: the running of α_s(µ) and other quantities would be grossly inadequate, and the convergence of perturbative series would be awful because of large logarithms. In most cases, everybody working in QCD uses an effective low-energy theory in which a few of the heaviest flavors have been eliminated (even if they do not know that they speak prose).
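Anticipating Fig. 3 in the next subsection, the matching of α_s across the b-quark threshold can be sketched numerically at 1 loop, with leading-order matching α_s^(4)(M_b) = α_s^(5)(M_b); this is my own illustrative sketch, and the input numbers are rough values:

import numpy as np

def beta0(nf):
    return 11.0 - 2.0 * nf / 3.0

def run(alpha0, mu0, mu, nf):
    """1-loop running: 1/alpha(mu) = 1/alpha(mu0) + (beta0/(2 pi)) ln(mu/mu0)."""
    return 1.0 / (1.0 / alpha0 + beta0(nf) / (2.0 * np.pi) * np.log(mu / mu0))

MB, MZ, A_MZ = 4.7, 91.19, 0.118  # GeV, GeV, alpha_s(MZ); rough values
a_mb = run(A_MZ, MZ, MB, nf=5)    # run down to the threshold with 5 flavors

for mu in (20.0, 10.0, MB, 3.0, 1.5):
    if mu >= MB:
        a = run(A_MZ, MZ, mu, nf=5)
    else:
        a = run(a_mb, MB, mu, nf=4)  # continue below threshold with 4 flavors
    print(f"mu = {mu:5.2f} GeV: alpha_s = {a:.4f}")

# At this order the coupling is continuous at mu = M_b and only its slope
# changes; the small jump discussed in the text appears once the
# higher-order decoupling coefficient zeta_alpha is included.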
Let's consider QCD with a single heavy flavor of mass M; for simplicity, all other flavors are supposed to be massless. Then the behavior of light quarks and gluons at low momenta p_i ≪ M is described by the low-energy effective theory. Its Lagrangian is the usual QCD Lagrangian (of course, without the heavy-quark field) plus higher-dimensional terms (whose coefficients are suppressed by powers of 1/M). Power corrections to the Lagrangian first appear at dimension 6. The full-QCD coupling α_s^{(n_l+1)}(µ) is related to the effective-theory coupling α_s^{(n_l)}(µ) by the decoupling relation (2.60). Here the C_F term can be obtained from the QED result (2.57) by inserting the obvious color factors; the C_A term needs a new calculation. The decoupling coefficient ζ_α(µ) for any µ can be found by solving the renormalization group equation with the initial condition (2.60).

The QCD running coupling α_s(µ) not only runs when µ varies; it also jumps when crossing heavy-flavor thresholds. The behavior of α_s(µ) near M_b is shown in Fig. 3. At µ > M_b, the correct theory is the full 5-flavor QCD (α_s^{(5)}(µ), the solid line); at µ < M_b, the correct theory is the effective low-energy 4-flavor QCD (α_s^{(4)}(µ), the solid line). Each curve can be continued across M_b (dashed lines), and it is inessential at which particular µ ∼ M_b we switch from one theory to the other. However, the on-shell mass M_b^os (or any other mass which differs from it by O(α_s), such as, e.g., M̄_b) is most convenient, because then the jump is small, O(α_s^3). For, say, µ = 2M_b or µ = M_b/2, it would be O(α_s^2).

Method of regions

This method (see the textbook [8]) provides an alternative insight into effective field theories. It allows one to calculate diagrams expanded in small ratios of scales directly. It is convenient when you need to calculate a small number of diagrams expanded up to a high order in a small parameter, because effective Lagrangians quickly become very long at such high orders. On the other hand, effective Lagrangians are applicable to all processes, and are useful for investigating general properties (symmetries, factorization).

Let's consider the vacuum integral with two masses M and m at M ≫ m and d = 2. It contains neither ultraviolet (UV) nor infrared (IR) divergences. After Wick rotation to Euclidean momentum space it becomes

I = ∫ d^d k_E / [ (k_E^2 + m^2) (k_E^2 + M^2) ] .    (2.62)

Of course, in this simple example it is easy to obtain the exact solution. We use the partial fraction decomposition

1 / [ (k_E^2 + m^2) (k_E^2 + M^2) ] = [ 1/(k_E^2 + m^2) − 1/(k_E^2 + M^2) ] / (M^2 − m^2) .

These two integrals, taken separately, diverge; therefore, we use dimensional regularization (d = 2 − 2ε). The result is finite at ε → 0:

I = π ln(M^2/m^2) / (M^2 − m^2)   (at d = 2) .    (2.63)

It is easier to obtain this result using the prescription known as the method of regions: the integral (2.62) is written as the sum of contributions of two regions, the hard one and the soft one. In the hard region k_E ∼ M; in the soft one k_E ∼ m. The integrand is expanded in a Taylor series in accordance with these power counting rules in each region. After that, the integral is taken over the full (d-dimensional) space.
In the hard region (k_E ∼ M) we apply the operator T_h, which expands the integrand in the small parameter(s), counting k_E as a quantity of order M:

T_h 1/[(k_E^2+m^2)(k_E^2+M^2)] = 1/(k_E^2+M^2) Σ_{n≥0} (−m^2)^n / (k_E^2)^{n+1} .

Calculating the loop integrals, we arrive at (2.64). The result is a Taylor series in m. Each loop integral is IR divergent; it contains a single scale M, and hence, by dimensional counting, is proportional to M^{−2ε}. In the soft region (k_E ∼ m) we apply T_s, which counts k_E as a quantity of order m:

T_s 1/[(k_E^2+m^2)(k_E^2+M^2)] = 1/(k_E^2+m^2) Σ_{n≥0} (−k_E^2)^n / (M^2)^{n+1} ,

and we obtain (2.65). The result is a Taylor series in 1/M. Each loop integral is UV divergent; it contains a single scale m, and hence, by dimensional counting, is proportional to m^{−2ε}. The sum of the hard contribution (2.64) and the soft one (2.65) produces the complete result (2.63). IR divergences in the hard region cancel UV divergences in the soft one.

In this simple case it is easy to prove this prescription [9]. Let's introduce some boundary Λ such that m ≪ Λ ≪ M, and write I as the sum of the integrals over k_E > Λ and over k_E < Λ. We may apply T_h to the first integrand and T_s to the second one. For each of these two integrals, we add and subtract the integral of the same expansion over the remaining part of the k_E space; the defects ΔI_h and ΔI_s are the integrals of the two expansions over the "wrong" parts of the k_E space. In these wrong regions, we may apply the other expansion in addition to the existing one. But then we get the integral of the doubly-expanded integrand over the full k_E space, because the Taylor expansion operators commute. And hence ΔI = 0, because each such integral has no scale. Let's stress that the method of regions works thanks to dimensional regularization. Taylor expansions in each region (i.e., for each set of power counting rules) must be performed completely, up to the end; otherwise, integrals in the defect ΔI can contain some scale(s), and hence not vanish.

Let's consider QCD with n_l light flavors and a heavy flavor of mass M, and calculate some scattering amplitude of light quarks and gluons with low characteristic energies ∼ E ≪ M. We can use the method of regions. For each diagram, some subdiagrams will be hard (characteristic momenta ∼ M; in particular, all heavy-quark loops are inside such subdiagrams); light-quark and gluon lines connecting these hard subdiagrams will be soft (characteristic momenta ∼ E). In the EFT language, the hard subdiagrams are local interactions present in the effective Lagrangian; the overall soft diagram is calculated according to the Feynman rules of the low-energy theory.

Bloch-Nordsieck EFT, HQET

In the effective field theories we have considered previously, heavy particles are completely absent from the effective Lagrangians. There is another kind of effective theory in which heavy particles are retained, but move non-relativistically in some reference frame and can be described in a simplified way. Such an effective theory was first constructed by Bloch and Nordsieck [10] to describe the interaction of an electron with soft photons. Its non-abelian version is called heavy quark effective theory (HQET), see, e.g., [11-13]. It is used to describe properties of hadrons with a single heavy quark in QCD.

Heavy electron effective theory

Photonia has imported a single electron from Qedland, and physicists are studying its interaction with soft photons (both real and virtual), which they can produce and detect so well. The aim is to construct a theory describing states with a single electron plus soft photon fields.

The ground state ("vacuum") of the theory is the electron at rest (and no photons). It is natural to define its energy to be 0.
When the electron has momentum p⃗, its energy is E = sqrt(M^2 + p⃗^2), where M is the electron mass (in the on-shell renormalization scheme), our large mass scale. The electron velocity is v⃗ = p⃗/E. At the leading (0th) order in 1/M, the mass shell of the free electron is ε ≡ E − M = 0 (3.3). At this order the electron velocity vanishes: the electron does not move; it always stays at the point where it has been put initially. The Lagrangian

L = h† i∂_0 h ,    (3.5)

where h is the 2-component spinor electron field, leads to the equation of motion i∂_0 h = 0 (3.6). This means that the energy of an on-shell electron is ε = 0. Thus the Lagrangian (3.5) reproduces the mass shell (3.3), and can be used to describe the free electron at the leading order in 1/M.

The electron has charge −e. Therefore, when placed in an external electromagnetic field, it has the energy ε = −e A_0(x⃗) instead of (3.3). Therefore, the equation of motion is iD_0 h = 0 instead of (3.6), where D_0 is the covariant derivative appropriate for charge −e. It can be obtained from the HEET Lagrangian [14]

L = h† iD_0 h .    (3.10)

This Lagrangian is not Lorentz-invariant. It is invariant with respect to the gauge transformation (3.11). Of course, the full Lagrangian is the sum of (3.10) and the Lagrangian of the photon field. This gives the equation of motion for the electromagnetic field ∂_μ F^{μν} = j^ν, where the current j^μ has only the 0th component non-vanishing (the interaction term in the Lagrangian (3.10) is −j^μ A_μ). The electron produces the Coulomb field.

At the leading order in 1/M, the electron spin does not interact with the electromagnetic field. We can rotate it without affecting physics. Speaking more formally, the Lagrangian (3.10) has, in addition to the U(1) symmetry h → e^{iα} h, also the SU(2) spin symmetry [15]: it is invariant with respect to the transformations h → U h (3.14), where U is an SU(2) matrix (U†U = 1).

In fact, the electron has a magnetic moment µ⃗ = µ σ⃗ proportional to its spin s⃗ = σ⃗/2, and this magnetic moment interacts with the magnetic field: the interaction Hamiltonian is −µ⃗ · B⃗. But by dimensionality the magnetic moment is µ ∼ e/M, and this interaction only appears at the level of 1/M corrections. Namely, µ = −µ_B (up to small radiative corrections), where µ_B = e/(2M) is the Bohr magneton. The Lagrangian thus has an additional term describing this magnetic interaction. This term violates the SU(2) spin symmetry at the 1/M level. If we assume that there are n_f flavors of heavy fermions, then the Lagrangian has U(1) × SU(2 n_f) symmetry (even when the masses M_i are different). The spin-flavor symmetry is broken at the 1/M_i level by both the kinetic-energy term and the magnetic-interaction term.

At the leading order in 1/M, not only the spin direction but also its magnitude is irrelevant. We can, for example, switch the electron spin off:

L = ϕ* iD_0 ϕ ,

where ϕ is a scalar field (with charge −e). This is the most convenient form of the Lagrangian in all cases when we are not interested in 1/M corrections. If we consider the scalar and the spinor fields together, then the Lagrangian has U(1) × SU(3) symmetry [16]. The superflavor SU(3) symmetry contains, in addition to the SU(2) spin transformations (3.14) and the phase rotations ϕ → e^{2iα} ϕ, h → e^{−iα} h, also transformations which mix the spin-0 and spin-1/2 fields; in the infinitesimal form they are parameterized by an infinitesimal spinor ε. So, this SU(3) is a supersymmetry group. If we want, we can consider, e.g., spins 1/2 and 1; the corresponding Lagrangian has SU(5) superflavor symmetry. The superflavor symmetry is broken at the 1/M level by the magnetic-interaction term in the Lagrangian (3.16).
Feynman rules

For now, we are working at the leading order in 1/M. The HEET Lagrangian expressed via the bare fields and parameters is L = ϕ₀* iD₀ ϕ₀ + (photon field terms) (3.21). It gives the usual photon propagator. From the free electron part ϕ₀* i∂₀ ϕ₀ we obtain the momentum-space free electron propagator S₀(p) = 1/(p₀ + i0) (3.22). It depends only on p₀, not on p⃗. If we use the spin-1/2 field h₀ instead of the spin-0 field ϕ₀, then the unit 2 × 2 spin matrix is assumed here. The coordinate-space propagator is its Fourier transform: S₀(x) = −iθ(x₀) δ(x⃗) (3.23). The infinitely heavy (static) electron does not move: it always stays at the point where it has been placed initially. Alternatively, instead of Fourier-transforming (3.22), we can obtain (3.23) by directly solving the equation

i∂₀ S₀(x) = δ(x) (3.24)

for the free x-space propagator. Finally, the interaction term e₀ ϕ₀* ϕ₀ A₀⁰ in (3.21) produces the vertex ie₀ v^µ, where v^µ = (1, 0⃗) is the 4-velocity of our laboratory frame (in which the electron is nearly at rest all the time).

The static field ϕ₀ (or h₀) describes only particles; there are no antiparticles. Therefore, there is no pair creation and annihilation (even virtual). In other words, there are no loops formed by propagators of the static electron. The electron propagates only forward in time (3.23); the product of θ functions along a loop vanishes. We can also see this in momentum space: all poles of the propagators (3.22) in such a loop are in the lower p₀ half-plane, and closing the integration contour upwards, we get 0.

It is easy to find the propagator of the static electron in an arbitrary external electromagnetic field A_µ(x). It satisfies the equation iD₀ S(x, x′) = δ(x − x′) instead of (3.24) (the derivative ∂₀ acts on x). Its solution is

S(x, x′) = S(x₀, x₀′) δ(x⃗ − x⃗′), S(x₀, x₀′) = S₀(x₀ − x₀′) W(x₀, x₀′), (3.28)

where W(x₀, x₀′) = exp(ie ∫ dt A₀(t, x⃗)), with the integral running from x₀′ to x₀, is the straight Wilson line from x′ to x (along v) (3.29). The same formula can be used when the electromagnetic field is quantum (the operator A₀^µ(x)), but the exponent (3.29) has to be path-ordered: operators referring to earlier points (along the path) are placed to the right of those for later points. This is usually denoted by P exp; when the path is directed to the future, P-ordering coincides with T-ordering. The Wilson line has a useful composition property: W(x₀, x₀′) W(x₀′, x₀″) = W(x₀, x₀″).

Properties of Wilson lines were investigated in many papers. Many results now considered classics of HQET were derived in the course of these studies before HQET was invented in 1990. In particular, the HQET Lagrangian (3.18) had been introduced as a technical device for the investigation of Wilson lines.

The lowest-energy state ("vacuum") in HEET is a single electron at rest, and it is convenient to use its energy as the zero level. In the full theory, its energy is M, so that E = M + ε, where E is the energy of some state (containing a single electron) in the full theory, and ε is its energy in HEET (it is called the residual energy). We can re-write this relation in a relativistic form:

P^µ = M v^µ + p^µ, (3.32)

where P^µ is the 4-momentum of some state (containing a single electron) in the full theory, p^µ is its momentum in HEET (the residual momentum), and v^µ is the 4-velocity of a reference frame in which the electron always stays approximately at rest. In other words, HEET is applicable if there exists such a 4-velocity v that, after the decomposition (3.32), the components of the electron residual momentum p are always small, and the components of all photon momenta p_i are also small: |p^µ|, |p_i^µ| ≪ M. This condition does not fix v uniquely; it can be varied by δv ∼ p/M. Effective theories corresponding to different choices of v must produce identical physical predictions. This requirement is
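A quick numerical illustration (not from the text) of why the static electron propagates only forward in time: the single pole of 1/(p₀ + i0) lies in the lower half-plane, so the Fourier transform is −iθ(t). Below, the +i0 prescription is kept as a small finite ε, so the expected answer is −iθ(t) e^(−εt).

```python
import numpy as np

eps = 0.05                            # finite stand-in for the +i0 prescription
p = np.linspace(-1000.0, 1000.0, 2_000_001)
dp = p[1] - p[0]

def S0(t):
    # S0(t) = int dp0/(2*pi) exp(-i*p0*t) / (p0 + i*eps), on a truncated grid
    return np.sum(np.exp(-1j*p*t) / (p + 1j*eps)) * dp / (2*np.pi)

for t in (-2.0, 1.0, 3.0):
    expected = -1j*np.exp(-eps*t) if t > 0 else 0.0
    print(f"t = {t:+.1f}: numeric = {S0(t):.4f}, expected = {expected:.4f}")
```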
called reparametrization invariance [17]. It produces relations between some quantities of different orders in 1/M. We can re-write the Lagrangian (3.21) in a relativistic form [18]: L = ϕ₀* iv·D ϕ₀ + (light fields). This Lagrangian is not Lorentz-invariant, because it contains a fixed vector v. It gives the free propagator S₀(p) = 1/(p·v + i0). The mass shell of the static electron is p·v = 0.

If we want to consider the spin-1/2 electron, it is described by the 4-component (Dirac) spinor field h_v which satisfies the condition v̸ h_v = h_v (so that in the v rest frame only the 2 upper components of the field are non-vanishing). The Lagrangian [18]

L = h̄_v0 iv·D h_v0 + (light fields) (3.38)

gives the propagator (1 + v̸)/2 · 1/(p·v + i0) and the vertex ie v^µ.

And what can our friends from Qedland say about this theory? They are not surprised. The finite-mass free electron propagator S₀(P) with P = Mv + p (3.32), M → ∞, can be approximated as

S₀(P) = (M(1 + v̸) + p̸)/(2M p·v + p² + i0) → (1 + v̸)/2 · 1/(p·v + i0).

Diagrammatically, it is related to the HEET propagator: the exact propagator (3.44) is free. There are no corrections to the photon propagator in HEET (3.21) because static-electron loops don't exist (footnote 7: this argument works up to the order 1/M⁴, where a 4-photon interaction appears, see Sect. 2.1; however, the only correction to the photon propagator at this order vanishes (2.6); the first non-vanishing correction involves two 4-photon vertices, and appears at 1/M⁸). Therefore, the photon field renormalization constant is Z_A = 1, and hence a = a₀, e = e₀. The renormalization constant of the static electron field h and its anomalous dimension are thus known exactly.

Suppose the electron substantially changes its 4-velocity (due to some hard-photon interaction). In the HEET framework this can be described by the current (Fig. 4) h̄_v′ Γ h_v, where cosh ϑ = v·v′. Its anomalous dimension Γ(ϑ) is the cusp anomalous dimension. Exponentiation works for Wilson lines of any shape, in particular, lines with a cusp. Therefore the 1-loop result for Γ(ϑ) is exact.

Figure 4. Heavy-heavy current.

There are many methods to calculate Γ(ϑ). The simplest one is based on unitarity. The electron after the kick either remains itself (probability |F|², where F is its form factor) or emits one or several photons. Up to the first order in α, 1 = |F|² + w, where w is the probability of one-photon emission. In classical electrodynamics the spectrum of the emitted radiation is [21]

dE = (e²/(2π²)) (ϑ coth ϑ − 1) dω,

and hence the probability of photon emission is dw = (e²/(2π²)) (ϑ coth ϑ − 1) dω/ω. In dimensional regularization, by dimensional counting, this probability acquires an extra factor ω^(−2ε) (up to a factor which tends to 1 at ε → 0; we don't need such a factor). Hence the form factor F contains the corresponding integral, regulated in the infrared by a cutoff λ; we need only the ultraviolet 1/ε divergence of F, and so we have to introduce such a cutoff. We obtain the anomalous dimension Γ(ϑ), proportional to ϑ coth ϑ − 1. It is given by the soft-radiation function in classical electrodynamics, and should be included in The Guinness Book of Records as the anomalous dimension known for the longest time (probably, > 100 years).

Hadrons with a heavy quark

The B meson is the hydrogen atom of quantum chromodynamics, the simplest non-trivial hadron. In the leading approximation, the b quark in it just sits at rest at the origin and creates a chromoelectric field. Light constituents (gluons, light quarks and antiquarks) move in this external field. Their motion is relativistic; the number of gluons and light quark-antiquark pairs in this light cloud is undetermined and varying. Therefore, there are no reasons to expect that a non-relativistic potential quark model would describe the B meson well enough (in contrast to the Υ meson, where the non-relativistic two-particle picture gives a good starting point).
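The function ϑ coth ϑ − 1 that controls both the classical radiation spectrum and Γ(ϑ) has simple limits, which the sympy snippet below (an illustration, not from the text) makes explicit: it vanishes quadratically for small cusp angles and grows linearly, as ϑ − 1, for large ones.

```python
import sympy as sp

theta = sp.symbols('theta', positive=True)
f = theta*sp.coth(theta) - 1            # soft-radiation / cusp function

print(sp.series(f, theta, 0, 5))        # theta**2/3 - theta**4/45 + O(theta**5)
print(sp.limit(f - (theta - 1), theta, sp.oo))   # -> 0, i.e. f ~ theta - 1
```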
Similarly, the Λ_b baryon can be called the helium atom of QCD. Unlike in atomic physics, where the hydrogen atom is much simpler than helium, the B and Λ_b are equally difficult. Both have a light cloud with a variable number of relativistic particles. The size of this cloud is the confinement radius 1/Λ_QCD; its properties are determined by large-distance nonperturbative QCD.

The analogy with atomic physics can tell us a lot about hadrons with a heavy quark. The usual hydrogen and tritium have identical chemical properties, despite the fact that the tritium nucleus is three times heavier than the proton. Both nuclei create identical electric fields, and both stay at rest. Similarly, the D and B mesons have identical "hadro-chemical" properties, despite the fact that the b quark is three times heavier than the c.

The proton magnetic moment is of the order of the nuclear magneton e/(2M_p), and is much smaller than the electron magnetic moment e/(2m_e). Therefore, the energy difference between the states of the hydrogen atom with total spins 0 and 1 (hyperfine splitting) is small (of the order m_e/M_p times the fine structure). Similarly, the b-quark chromomagnetic moment is proportional to 1/M_b by dimensionality, and the hyperfine splitting between the B and B* mesons is small (proportional to 1/M_b). Unlike in atomic physics, both "gross"-structure intervals and fine-structure intervals are just some numbers times Λ_QCD, because the light components are relativistic (the practical success of constituent quark models shows that these dimensionless numbers for fine splittings can be rather small, but they contain no small parameter).

In the limit M → ∞, the heavy-quark spin does not interact with the gluon field. Therefore, it may be rotated at will, without changing the physics. Such rotations can transform the B and B* into each other; they are degenerate and have identical properties in this limit. This heavy-quark spin symmetry yields many useful relations among heavy-hadron form factors. Not only the orientation, but also the magnitude of the heavy-quark spin is irrelevant in the infinite-mass limit. We can switch off the heavy-quark spin, making it spinless, without affecting the physics. This leads to a supersymmetry group called the superflavor symmetry. It can be used to predict properties of hadrons containing a scalar or vector heavy quark. Such quarks exist in some extensions of the Standard Model (for example, supersymmetric or composite extensions).

This idea can also be applied to baryons with two heavy quarks. They form a small-size bound state (with a radius of order 1/(Mα_s)) which has spin 0 or 1 and is an antitriplet in color. Therefore, these baryons are similar to mesons with a heavy antiquark that has spin 0 or 1. The accuracy of this picture cannot be high, however, because even the radius of the bb diquark is only a few times smaller than the confinement radius.

Let us consider mesons with the quark content Q̄q, where Q is a heavy quark with mass M (b or c), and q is a light quark (u, d, or s). As discussed above, the heavy-quark spin is inessential in the limit M → ∞, and may be switched off. In a world with a scalar heavy antiquark, S-wave mesons have angular momentum and parity j^P = 1/2⁺; P-wave mesons have j^P = 1/2⁻ and 3/2⁻. The energy difference between these two P-wave states (fine splitting) is a constant times Λ_QCD at M → ∞, just like the splittings between these P-wave states and the ground state; however, this constant is likely to be small.
In our real world, the heavy antiquark Q̄ has spin and parity s^P_Q = 1/2⁻. The quantum numbers in the above paragraph are those of the cloud of light fields of a meson. Adding the heavy-antiquark spin, we obtain, in the limit M → ∞, a degenerate doublet of S-wave mesons with spin and parity s^P = 0⁻ and 1⁻, and two degenerate doublets of P-wave mesons, one with s^P = 0⁺ and 1⁺, and the other with s^P = 1⁺ and 2⁺. At a large but finite heavy-quark mass M, these doublets are not exactly degenerate. Hyperfine splittings, equal to some dimensionless numbers times Λ²_QCD/M, appear. It is natural to expect that hyperfine splittings in P-wave mesons are smaller than in the ground-state S-wave doublet, because the characteristic distance between the quarks is larger in the P-wave case. Note that the 1⁺ mesons from the different doublets do not differ from each other by any exactly conserved quantum numbers, and hence can mix. They differ by the angular momenta of the light fields, which is conserved only up to 1/M corrections; therefore, the mixing angle should be of the order of Λ_QCD/M.

Mesons with q = u and d form isodoublets; together with isosinglets with q = s, they form SU(3) triplets.

The (0⁺, 1⁺) mesons decay into the ground-state ones and a π meson in S-wave (l = 0); these decays are not suppressed, and hence these mesons are very wide. The hyperfine splitting of the P-wave doublet is smaller than that of the S-wave one.

In the leading approximation, the spectrum of c-containing mesons is obtained from the spectrum of b-containing mesons simply by a shift by M_c − M_b. The spectrum of mesons containing c is also shown in Fig. 5. It is positioned in such a way that the weighted average energies of the ground-state doublets, (M_B + 3M_B*)/4 and (M_D + 3M_D*)/4, coincide. Hyperfine splittings are smaller for B mesons than for D mesons, as expected.

In S-wave Qqq baryons, the light-quark spins can add to give j^P = 0⁺ or 1⁺. In the first case their spin wave function is antisymmetric; Fermi statistics and the antisymmetry in color require an antisymmetric flavor wave function. Hence the light quarks must be different; if they are u, d, then their isospin is I = 0. With the heavy-quark spin switched off, this gives the 0⁺ baryon Λ_Q with I = 0. If one of the light quarks is s, we have the isodoublet Ξ_Q, which forms an SU(3) antitriplet together with Λ_Q. With the heavy-quark spin switched on, these baryons have s^P = 1/2⁺. In the 1⁺ case, the flavor wave function is symmetric. If the light quarks are u, d, then their isospin is I = 1. This gives the 1⁺ isotriplet Σ_Q; with one s quark, we obtain the isodoublet Ξ′_Q; and with two s quarks, the isosinglet Ω_Q. Together, they form an SU(3) sextet. With the heavy-quark spin switched on, we obtain degenerate doublets with s^P = 1/2⁺ and 3/2⁺. The hyperfine splittings in these doublets are of the order of Λ²_QCD/M. Mixing between Ξ_Q and Ξ′_Q is suppressed both by 1/M and by SU(3).

The experimentally observed baryons containing b are shown in Fig. 6. In the third column, the lowest state Ξ_b is followed by the doublet Ξ′_b, Ξ*_b. The Ω*_b baryon has not yet been observed. The spectrum of baryons containing c is also shown. It is positioned in such a way that the ground-state baryons Λ_b and Λ_c coincide.
In the leading M_b → ∞ approximation, the masses M_B and M_B* are both equal to M_b + Λ̄, where Λ̄ is the energy of the ground state of the light fields in the chromoelectric field of the b antiquark. This energy Λ̄ is of the order of Λ_QCD. The excited states of the light fields have energies Λ̄_i, giving excited degenerate doublets with masses M_b + Λ̄_i.

There are two 1/M_b corrections to the masses. First, the b antiquark has an average momentum squared µ²_π, which is of order Λ²_QCD. Therefore, it has a kinetic energy µ²_π/(2M_b). Second, the b chromomagnetic moment interacts with the chromomagnetic field created by the light constituents at the origin, where the b stays. This chromomagnetic field is proportional to the angular momentum j⃗_ℓ of the light fields. Therefore, the chromomagnetic interaction energy is proportional to s⃗_b · j⃗_ℓ, and it produces the hyperfine splitting (the explicit formulae are given below).

Conclusion

In the past, only renormalizable theories were considered well-defined: they contain a finite number of parameters, which can be extracted from a finite number of experimental results and used to predict the results of an infinite number of other potential measurements. Non-renormalizable theories were rejected because their renormalization at all orders in non-renormalizable interactions involves infinitely many parameters, so that such a theory has no predictive power. This principle is absolutely correct, if we are impudent enough to pretend that our theory describes Nature up to arbitrarily high energies (or down to arbitrarily small distances).

At present we accept the fact that our theories only describe Nature at sufficiently low energies (or sufficiently large distances). They are effective low-energy theories. Such theories contain all operators (allowed by the relevant symmetries) in their Lagrangians. They are necessarily non-renormalizable. This does not prevent us from obtaining definite predictions at any fixed order in the expansion in E/M, where E is the characteristic energy and M is the scale of new physics. Only if we are lucky and M is many orders of magnitude larger than the energies we are interested in can we neglect higher-dimensional operators in the Lagrangian and work with a renormalizable theory.

We can add higher-dimensional contributions to the Lagrangian, with further unknown coefficients. To any finite order in 1/M, the number of such coefficients is finite, and the theory has predictive power. For example, if we want to work at the order 1/M⁴, then either a single 1/M⁴ (dimension 8) vertex or two 1/M² (dimension 6) vertices can occur in a diagram. UV divergences which appear in diagrams with two dimension 6 vertices are compensated by renormalizing these dimension 6 operators, plus dimension 8 counterterms. So, the theory can be renormalized. The usual arguments about non-renormalizability are based on considering diagrams with arbitrarily many vertices of non-renormalizable interactions (operators of dimensions > 4); this leads to infinitely many free parameters in the theory.

The Standard Model does not describe all physics up to infinitely high energies (or down to infinitely small distances). At least, quantum gravity becomes important at the Planck scale. The SM does not explain dark matter or the baryon asymmetry of the Universe. What can appear at some very short distance scale?
• supersymmetry;
• compositeness;
• extra dimensions;
• non-pointlike objects: strings / superstrings / branes;
• something we cannot imagine at present.

We can construct scenarios for new-physics searches based on some known variants of the next theory (more fundamental than the SM); i.e., we can investigate some finite number of directions of departure from the SM, but an infinite number of directions which we cannot now imagine remain unexplored (the investigated directions form a set of measure 0). What we need is a systematic, model-independent approach for searching for absolutely unknown new physics at small distances. Such physics produces new local interactions of the SM fields. This approach is called Standard Model Effective Theory (SMEFT), see, e.g., [22,23].

The experimentally observed mesons containing the b antiquark are shown in Fig. 5. The mesons B₁ and B₂* form a doublet, with the quantum numbers of the light fields j^P = 3/2. The second P-wave doublet, with light-field j = 1/2, is suspiciously absent. The j = 3/2 mesons decay into the ground-state doublet and a π meson in D-wave (l = 2); the pion momentum is small, these decays are strongly suppressed, and these mesons are narrow.

The chromomagnetic interaction energy depends on s⃗_Q · j⃗_ℓ, where s⃗ = s⃗_Q + j⃗_ℓ is the meson spin. If we denote this energy for B by −µ²_G/(2M_b), then for B* it is +(1/3) µ²_G/(2M_b). Here µ²_G is of order Λ²_QCD. The B, B* meson masses with the 1/M_b corrections taken into account are given by the formulae

M_B = M_b + Λ̄ + (µ²_π − µ²_G)/(2M_b), M_B* = M_b + Λ̄ + (µ²_π + µ²_G/3)/(2M_b).

Taking into account M_B* + M_B = 2M_b + O(Λ_QCD), we obtain M²_B* − M²_B = (4/3) µ²_G + O(Λ³_QCD/M_b). The difference M²_D* − M²_D is given by a similar formula, with M_c instead of M_b. Therefore, the ratio

(M²_B* − M²_B)/(M²_D* − M²_D) = 1 + O(Λ_QCD/M_c,b). (3.51)

Experimentally, this ratio is 0.88. This is a spectacular confirmation of the idea that violations of the heavy-quark spin symmetry are proportional to 1/M.

Let us now discuss the B-meson leptonic decay constant f_B. It is defined by ⟨0| b̄γ^µγ₅ u |B(p)⟩ = i f_B p^µ, where the one-particle state is normalized in the usual Lorentz-invariant way: ⟨B(p⃗′)|B(p⃗)⟩ = 2p⁰ (2π)³ δ(p⃗′ − p⃗). This relativistic normalization becomes nonsensical in the limit M_b → ∞, and in that case the non-relativistic normalization ⟨B(p⃗′)|B(p⃗)⟩_nr = (2π)³ δ(p⃗′ − p⃗) should be used instead. Then, for the B meson at rest, ⟨0| b̄γ⁰γ₅ u |B⟩_nr = i f_B M_B/√(2M_B). Denoting this matrix element (which is mass-independent at M_b → ∞) by iF/√2, we obtain f_B = F/√(M_B).

What about corrections? Vertex corrections vanish for the same reason as (2.16); crossed-box diagrams vanish because Coulomb photons propagate instantaneously, and the time orderings of the vertices on the left line and on the right one cannot be opposite (higher orders of the expansion of the exponent (2.13)). Only corrections to the photon propagator can contribute. But there are no such corrections in QPD. Hence the Coulomb potential (2.19) is exact in this theory. In the presence of sources of the photon field, the dimension 6 operator O (2.3) cannot be ignored. The QPD Lagrangian now contains an extra term.
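As a quick numerical cross-check of the ratio (3.51), quoted above as 0.88, one can plug in meson masses. The values below are approximate PDG numbers (in GeV), rounded and used only for illustration.

```python
# Rough numerical check of (3.51), using approximate PDG meson masses (GeV)
M_B, M_Bst = 5.27966, 5.32471   # B, B*
M_D, M_Dst = 1.86966, 2.01026   # D, D*

ratio = (M_Bst**2 - M_B**2) / (M_Dst**2 - M_D**2)
print(f"(M_B*^2 - M_B^2) / (M_D*^2 - M_D^2) = {ratio:.2f}")   # ~ 0.88
```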
Facile Synthesis, Static, and Dynamic Magnetic Characteristics of Varying Size Double-Surfactant-Coated Mesoscopic Magnetic Nanoparticles Dispersed Stable Aqueous Magnetic Fluids The present work reports the synthesis of a stable aqueous magnetic fluid (AMF) by dispersing double-surfactant-coated Fe3O4 magnetic nanoparticles (MNPs) in water using a facile, ambient, scalable wet chemical route. MNPs do not disperse well in water, resulting in low stability. This was improved by dispersing double-surfactant (oleic acid and sodium oleate)-coated MNPs in water, where cross-linking between the surfactants improves the stability of the AMFs. The stability was probed by rheological measurements, and all the AMF samples showed good long-term stability and stability against a gradient magnetic field. Further, the microwave spin resonance behavior of the AMFs was studied in detail by corroborating the experimental results obtained from the ferromagnetic resonance (FMR) technique with theoretical predictions by appropriate fittings. A broad spectrum was observed for the AMFs, which indicates strong ferromagnetic characteristics. The resonance field shifted to higher magnetic field values with the decrease in particle size, as larger-size MNPs magnetize and demagnetize more easily, since their magnetic spins align in the field direction more readily. The FMR spectra were fitted to obtain various spin resonance parameters. The FMR spectra became more asymmetric with decreasing particle size, which indicates an increase in relaxation time: the relaxation time increased from 37.2779 ps to 42.8301 ps with decreasing particle size (samples A to D). Further, a detailed investigation of the structural, morphological, and dc magnetic properties of the AMF samples was performed. Room temperature dc magnetic measurements confirmed the superparamagnetic (SPM) characteristics of the AMFs, and the M-H plot for each sample was fitted with a Langevin function to obtain the domain magnetization, permeability, and hydrodynamic diameter of the MNPs. The saturation magnetization and coercivity of the AMF samples increased with the increase in the size of the dispersed MNPs. The improvement in the stability and magnetic characteristics makes AMFs suitable candidates for various biomedical applications such as drug delivery, magnetic fluid hyperthermia, and biomedicines. Introduction Aqueous magnetic fluids (AMFs) have emerged with great potential due to their substantial development in the field of biomedical applications [1][2][3]. The preparation of AMFs with good stability, dispersion, and uniformity remains a major issue that requires immediate attention [34,35]. The major effort in the current biomedical applications is focused on improving the stability and dispersivity in biocompatible liquids such as water, and on optimizing the constituents to improve the magnetic characteristics that provide better control [36][37][38]. Moreover, from the literature, we can conclude that wet chemical synthesis is the better processing method for AMFs. It generally results in low impurity concentrations, and highly stable fluids can be prepared with remarkably few processing difficulties [39]. The prime focus of our research is on preparing a stable AMF with uniformly distributed MNPs to achieve isotropic properties of the fluid. Cao et al.
[40] synthesized AMFs using the chemical coprecipitation method, coating the particles with three different surfactants (sodium dodecyl sulphate, oleic acid, and polyethylene glycol) to investigate the effect of the surfactant on the stability of the AMFs [40]. They confirmed from the transmission electron microscopy (TEM) results that the MNPs coated with sodium dodecyl sulphate and oleic acid showed a low stability due to agglomeration, whereas the polyethylene-glycol-coated MNPs showed a higher stability due to lower agglomeration. However, the long-term stability and the effect of a gradient magnetic field, which are the main factors that reduce the stability of AMFs, were not probed [40]. Dai et al. [41] prepared MNPs by a low-temperature freeze-drying technique, followed by a coating of 3-aminopropyltriethoxysilane (APTES) on the surface to form amino-functionalized MNPs [41]. To prepare a stable water-based fluid, they attached graphene oxide (GO) to the amino-functionalized MNPs by electrostatic adsorption to obtain a GO-functionalized MF (GO-NMPs) composite [41]. Petrenco et al. [27] reported the preparation of AMFs stabilized by sodium oleate (SO) and dodecylbenzene sulphonic acid (DBSA). They correlated the properties of the AMFs with the surfactant type, both in the presence and absence of an externally applied magnetic field. Both DBSA and SO produced micelle structures; depending on the surfactant concentration, this resulted in an aggregation of MNPs, and large, fractal-type aggregates were observed in the prepared AMFs [27]. Morales et al. [42] synthesized MNPs coated with oleic acid and a pluronic-polymer-based aqueous dispersion. They suggested that oleic acid has a low hydrophilic-lipophilic balance, which yields a low stabilization; thus, they used a block copolymer to enhance stability. However, the prepared aqueous fluid had a lower stability due to the long-term instability of the polymer [42]. Further, Bica et al. [43] probed the effect of various surfactant combinations of different chain lengths (lauric acid (LA), myristic acid (MA), oleic acid, and DBSA) on the stability of AMFs. They reported that the double-layer combination of LA and MA resulted in a higher stability and in improved suitability for biomedical applications [43]. Even though numerous research efforts have been dedicated to the preparation of AMFs with high stability and magnetization, no single AMF can be treated as the best-performing fluid for all applications. Various applications demand different aspects (viscosity, density, and other physicochemical properties) of the AMF in different ranges, which makes probing the effect of the individual constituents of AMFs vital for high-performance fluids [44,45]. The investigation of the spin dynamics of magnetic materials is very important, considering it provides necessary information about the physical mechanisms in magnetic materials [46]. FMR is an exceptional characterization tool for probing the microwave spin resonance characteristics of MNP systems. FMR provides key information about an MNP system, such as its relaxation behavior, spin glass transition, dipolar interactions, exchange interactions, anisotropy effects, dead layer, core-shell structure, and magnetic domains [47]. In addition, other important phenomena such as the magnetic state of the system (ferromagnetic (FM) or SPM), low-temperature carrier freezing, and spin canting can also be studied using FMR.
One of the major advantages of these systems is that particles in both the dry and the fluid state can be characterized by changing the sample stage [15]. FMR is a very versatile and sensitive technique: it can sense FMR signals in the range of 10^14-10^16 magnetic moments per minute in ordered FM materials [19]. It is sensitive to relaxation times of magnetically polarizable particles in the range 10^-7-10^-10 s. Further, it can also detect paramagnetic impurities in the material at levels as low as ppm [48]. Magnetic manipulation and control over the properties provide exciting opportunities for the development of actively controlled, high-performance devices.

In this work, we focus on the development of a stable, homogeneous, biocompatible, high-magnetic-strength fluid. The AMF is prepared by dispersing Fe3O4 MNPs in water. The stability of the dispersion is achieved by using two layers of surfactants oriented in reverse directions to each other to allow cross-linking. The stability of the prepared AMF is probed by rheological measurements, both in the absence and presence of an externally applied magnetic field. Generally, stability and magnetic properties in AMFs are conflicting requirements: an increase in particle size results in an increase in the magnetic strength of the AMF, but its stability decreases. Thus, the optimization of the size of the dispersed MNPs is vital. The static and dynamic magnetic characteristics of the AMF are investigated by vibrating sample magnetometer (VSM) and FMR measurements. The experimental results are correlated with various theoretical predictions to calculate parameters such as the spin concentration, domain magnetization, permeability, spin-spin relaxation, and hydrodynamic diameter. Further, the structural investigation of the prepared Fe3O4 MNPs is carried out by X-ray diffraction (XRD) measurement, and a Rietveld refinement of the patterns is performed to calculate various structural parameters. We prepare four AMF samples with varying size distributions. The morphology and size distribution of all the AMF samples are obtained by TEM and small-angle X-ray scattering (SAXS) measurements. The prepared AMF shows enhanced stability, which will be vital to improving the performance of existing applications, as well as leading to the development of other novel applications. The performance of the fluids remains the key bottleneck in AMF-based applications. The present work focuses on solving the grand challenge associated with biomedical applications of AMFs, i.e., the technical issues related to the synthesis of homogeneous, biocompatible, high-magnetic-strength, and stable MFs in large-scale production.

Synthesis and Characterization of AMF Samples

The synthesis of a stable AMF is a state-of-the-art technique, as it requires the optimization of several counteracting forces. We have implemented a two-step wet chemical synthesis approach for the preparation of the AMF. First, oleic-acid-coated Fe3O4 MNPs were prepared by the surface-modified chemical coprecipitation method. In this method, reverse micelle structures of oleic acid allow the growth of particles inside, preventing agglomeration [16][17][18]21,39]. In the second step, the oleic-acid-coated Fe3O4 MNPs were dispersed in water using the secondary surfactant sodium oleate. A schematic representation of the typical synthesis process adopted is depicted in Figure 1 [20].
For the synthesis of the Fe3O4 MNPs, salt precursor solutions of Fe3+ and Fe2+ were prepared in the stoichiometric ratio of 2:1 and mixed under a constant stirring rate for 30 min. Afterward, 8 mL of oleic acid was added to the solution, which formed reverse micelle structures upon constant stirring and heating at 80 °C, followed by ultrasonication for 15 min. A 25% concentration ammonia solution was then added to the solution for precipitation [4,9,28,49]. Four different samples were prepared by adding different amounts of ammonia solution to maintain a fixed pH. The respective samples corresponding to pH 8, 9, 10, and 12 were named A, B, C, and D. The resultant solution after ammonia addition was maintained at 80 °C under constant stirring (600 rpm) for 60 min to allow the growth of the particles. The resultant solution was filtered using a NdFeB permanent magnet and washed several times using water and ethanol. The washed particles were dried in a vacuum oven overnight to remove any trace of water. A homogeneous solution of sodium oleate was prepared by adding 3.8 mg of sodium oleate to 20 mL of water and heating it at 50 °C for 30 min under constant stirring. Finally, the oleic-acid-coated MNPs were dispersed in the homogeneous aqueous sodium oleate solution by manual stirring [20].

The detailed characterization of the AMF comprises the determination of the stability, crystalline purity, structure, morphology, rheological behavior, and static/dynamic magnetic properties by various sophisticated analytical characterization tools. The stability of the AMF was measured by a parallel-plate magneto-rheometer (MCR-301 with MRD 70 setup, Anton Paar GmbH, Ostfildern, Germany) with a transverse magnetic field (1.2 T). The gap optimization and standard calibration were performed using a silicon oil standard with a PP-20 spindle. For structural and phase identification, an XRD (Ultima-IV, Rigaku, Tokyo, Japan) technique was used, followed by a Rietveld refinement analysis for detailed crystallographic modeling. Further, the crystallite size of the prepared MNPs was obtained by the Debye-Scherrer (D-S) formula and the Williamson-Hall (W-H) method. For obtaining the morphology and size distribution of the samples, SAXS (Rigaku Ultima-IV with rotational stage and liquid geometry) and TEM (model TECNAI F30, FEI, Hillsboro, OR, USA) were utilized. The size range of the Fe3O4 MNPs was calculated from the SAXS data by fitting the experimental curves using Nanosolver software (3.7.6.0, Rigaku, Tokyo, Japan), assuming a Gaussian or log-normal distribution. The static magnetic properties were measured by VSM (model 7410 VSM, Lakeshore, OH, USA). Further, the spin dynamics of the AMFs were probed using an EPR spectrometer (EMX-10, Bruker, Berlin, Germany) with 100 kHz field modulation. The AMF samples were investigated using an X-band (9.85 GHz) microwave frequency and a TM011-mode cavity. The FMR spectra of all the AMF samples were recorded at room temperature, and different microwave spin resonance parameters, such as the resonance field, peak-to-peak linewidth (∆H), g-value, spin-spin relaxation time, and spin concentration, were calculated by fitting the FMR spectra. To avoid any distortion of the FMR spectra of the sample, the amplitude of the modulation was kept below one-third of ∆H. The FMR profile shape was further correlated with the effect of the varying size distribution on the super-exchange and dipolar interactions of the MNPs.
Stability Analysis of AMFs Using a Magneto-Rheometer

AMFs are required to remain suspended and not agglomerate with the application and removal of a magnetic field. The stability of the AMFs was probed both in the absence and presence of an applied (gradient) magnetic field, ON (0.18 T)/OFF (0 T), at constant temperature (25 °C). In the absence of a magnetic field, an AMF shows Newtonian behavior and the viscosity remains constant [11]. The viscosity of each AMF sample was obtained at the fixed temperature in constant shear mode (100 s^-1) by taking the average of 10 measurements each day (for 30 days). Figure 2a shows the viscosity plot of all the AMF samples, which demonstrates that the viscosity remains almost constant over the period of 30 days. This shows that the prepared AMF samples are highly stable for a prolonged period and do not settle under gravity. The sampling process was kept the same for all the samples, and the system was calibrated using standard silicon oil [17,18]. The viscosity of the sample with the smaller size distribution (AMF sample D) is higher, whereas with an increase in size, the viscosity decreases in the absence of a magnetic field. This happens due to the greater interparticle friction between the particles as the size is reduced [21,39,50,51]. In the presence of a magnetic field, the viscosity response completely reverses, and the larger-particle AMF (sample A) shows a higher viscosity. This can be related to its higher magnetic moment, which responds more strongly to the applied magnetic field and gives stronger dipolar interactions between the MNPs [52][53][54][55]. The stability of the AMFs in the gradient magnetic field was obtained by probing the AMF samples with the magnetic field switched ON and OFF continuously. The viscosity-versus-time response was obtained for all the samples with the magnetic field ON (0.18 T)/OFF (0 T) at 30 s intervals. From Figure 2b, it is evident that the viscosity remains the same even after multiple cycles of the gradient field. This confirms that the MNPs do not agglomerate under the gradient magnetic field and show almost reversible characteristics when the magnetic field is removed. In the presence of the field, the MNPs align in the direction of the magnetic field, forming chain-like structures, which results in an increase of the viscosity of the AMF [11,21]. The stability analysis using a gradient magnetic field confirms that the AMFs show completely reversible behavior between Newtonian and non-Newtonian [53,56]. Moreover, the long-term stability was confirmed by the 30-day viscosity measurement, in which the AMFs show Newtonian behavior and a constant viscosity at a fixed temperature.

Structural Analysis of Aqueous Magnetic Fluids Using the X-ray Diffraction Method

The structural investigation of the MNPs is vital, as it provides information about the qualitative phase, preferred growth orientation, structural parameters, crystallite size, strain, lattice parameter, and percentage crystallinity of the material. The structural and phase identification of all the AMF samples was carried out by the XRD technique. The diffraction patterns of all the samples were recorded in the 2θ range 20°-80° with a step size of 0.01°, using a Cu-Kα source (λ = 1.5406 Å) at a slow scan rate (0.5°/min) [4,49]. The XRD plots of all the AMF samples prepared by the coprecipitation technique (A, B, C, and D) are shown in Figure 3.
The peak positions in all the XRD plots match the single cubic spinel ferrite structure. The diffraction peaks observed for sample A, starting at 2θ ≈ 30°, correspond to the characteristic reflections of the spinel phase [15,17,39]. The other samples (B, C, and D) also have peaks at the same positions. Some minor shifts are present, which can be attributed to anisotropy, sampling, and instrumental factors [8,9,28,48]. From Figure 3, we can conclude that all the AMF samples are single-phase cubic spinel and that their full width at half-maximum (FWHM) increases consistently from sample A to D. This systematic increase in the FWHM corresponds to the decrease in the crystallite size from samples A to D [39,46,57]. The crystallite size and the strain in the AMF samples were obtained by the D-S formula (Equation (1)) and the W-H method (Equation (2)) [4,48]:

d = kλ/(β_hkl cos θ), (1)

β_hkl cos θ = kλ/d + 4ε sin θ, (2)

where d is the crystallite size, k the Scherrer constant, λ the X-ray wavelength, β_hkl the FWHM corresponding to each peak, θ the Bragg angle, and ε the crystalline strain. The values of the crystallite size and induced strain obtained from Equations (1) and (2) are summarized in Table 1 [21,46,48]. The crystallite sizes obtained from the D-S and W-H methods are in close agreement. The crystallite size of the samples decreases with an increase in the precipitating pH, from 10.6 nm to 5.8 nm for samples A to D systematically. This confirms that the crystallite size of the material synthesized using the coprecipitation method decreased with an increase in reaction pH while the other synthesis parameters were kept constant [20,49]. The increase in pH results in a significant increase in nucleation sites, which restricts long-range growth in the material, resulting in smaller crystallites [4,9,58]. The variation between the D-S and W-H methods appears due to instrumental and strain broadening. The induced strain for samples A and B is 0.0020 and 0.0026, respectively, which is lower compared to samples C and D, which have higher induced strain values of 0.006 and 0.015, respectively. Moreover, it is evident that a decrease in the size of the material generally results in higher strain and more impurities [4,59,60]. Figure 4 shows the variation of the crystallite size and strain for all the samples. The lattice constant (a) was also calculated for all the samples by taking all the diffraction planes; it lies in the range 8.3482 to 8.3440 Å, which is in good agreement with the standard value (8.342 Å) [16]. The decrease in the lattice constant from samples A to D is due to the large difference in the ionic radii of Fe2+ (0.77 Å) and Fe3+ (0.67 Å) [11,15]. With the decrease in size, the cation distribution changes, and Fe3+ and Fe2+ ions are exchanged between their respective tetrahedral (A) and octahedral (B) sites. This results in a deviation of the lattice constant from the standard value. The cation distribution at the A and B sites plays a vital role in governing the structures of these AMFs [48,49].

Table 1. Crystallite size, strain, various refinement parameters (goodness of fit, R-factors), lattice parameters, and unit cell volume of the AMF samples.

Afterward, the XRD patterns obtained for all the AMF samples were analyzed by the Rietveld refinement technique using the FullProf Suite software. Various structural factors, such as atomic positions, site occupancies, and crystallographic factors, were obtained from the refinement. All the XRD patterns were refined using the Fd-3m (227) space group of the spinel structure [48].
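As an illustration of Equations (1) and (2) above, the sketch below computes Scherrer sizes and a W-H fit from a peak list. The 2θ/FWHM values are hypothetical placeholders, not the measured data of Table 1, and k = 0.9 is a commonly assumed Scherrer constant.

```python
import numpy as np

lam = 1.5406e-1   # Cu-K_alpha wavelength, nm
k = 0.9           # Scherrer constant (common assumption)

# Hypothetical peak list: (2*theta in degrees, FWHM in degrees)
peaks = [(30.1, 0.80), (35.5, 0.85), (43.1, 0.90), (57.0, 0.95), (62.6, 1.00)]

two_theta, fwhm = np.radians(np.array(peaks)).T
theta, beta = two_theta/2, fwhm

# Equation (1): Scherrer crystallite size from each peak
d_scherrer = k*lam/(beta*np.cos(theta))
print("D-S sizes (nm):", np.round(d_scherrer, 1))

# Equation (2): W-H plot, beta*cos(theta) = k*lam/d + 4*eps*sin(theta)
slope, intercept = np.polyfit(4*np.sin(theta), beta*np.cos(theta), 1)
print(f"W-H: d = {k*lam/intercept:.1f} nm, strain = {slope:.4f}")
```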
The Rietveld-refined patterns of all the AMF samples are shown in Figure 5, and the structural parameters are summarized in Table 1. The observed experimental data are shown as open circles and the calculated pattern as the solid line in Figure 5. In addition, the difference between the observed and calculated data is shown at the bottom as a blue line, and the vertical bars show the calculated Bragg positions of each plane. The good match with the standards confirms the crystalline single-phase spinel ferrite structure with Fd-3m (227) symmetry [46]. The refinement of the XRD pattern of each AMF sample was carried out in two steps. First, the background removal and scaling were performed. In the second step, the structural parameters and site occupancies were determined [15]. The refinement of all the samples was carried out by fixing the tetrahedral sites (8a (1/8, 1/8, 1/8)) and the octahedral sites (16d (1/2, 1/2, 1/2)), with free oxygen (32e (x, x, x)) sites [50]. All the structural and refinement parameters, such as the R-factors (R_F = crystallographic factor, R_P = profile factor, R_B = Bragg factor, R_wp = weighted profile factor, and R_exp = expected value), are summarized in Table 1. A larger reliability factor is observed for the smaller-crystallite samples, as small crystallites lead to large diffuse scattering, in contrast to the highly crystalline samples, in which Bragg scattering dominates [4]. The refined profiles have a goodness of fit (χ²) in the range of 1.23 to 1.43, which constitutes a good fit between the observed and calculated data. The values of the lattice parameter (a = b = c) and the unit cell volume (V) of all the AMF samples were calculated from the refined patterns and are listed in Table 1.

Size Distribution of Aqueous Magnetic Fluids Using Transmission Electron Microscopy

The electron micrographs of the double-surfactant-coated Fe3O4-based AMF samples (A, B, C, and D) are shown in Figure 6a-d. The insets show the HRTEM images of the respective samples. The samples for the TEM analysis were prepared in the liquid state by diluting with kerosene, followed by heating with a light source. In all the samples, the particles are almost spherical with a nearly uniform size distribution. The inset of Figure 6a shows the HRTEM image of sample A, with an interplanar spacing of 0.29 nm, which corresponds to the (220) plane of the spinel ferrite phase [19]. Similarly, the insets in Figure 6b-d show the HRTEM images of samples B, C, and D, respectively. Interplanar spacings of 0.20 nm and 0.25 nm are measured in sample B, and in samples C and D, respectively, corresponding to the (400) and (311) planes [4,8,61]. The size distributions of the samples are shown in Figure 6e-h, where (e), (f), and (h) are best fitted with a Gaussian distribution and (g) is fitted with a log-normal distribution [33,[62][63][64][65]. The size distribution for sample A is 3-30 nm with an average particle size of 17.2 nm. Similarly, for sample B, the size distribution is 2-22 nm with an average size of 10.2 nm; for sample C, the size distribution is 2-22 nm and the average size is 8.3 nm; and for sample D, the size distribution is 1-17 nm and the average size is 7.5 nm. From the electron micrographs, we observe that all the particles have a very narrow size distribution, which is a key requirement for AMF samples. In all the samples, the particles are approximately spherical. Moreover, from the TEM analysis, it is evident that the size of the samples decreases from A to D with an increase in precipitation pH.
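As an aside, the distribution fitting used for Figure 6e-h can be sketched in a few lines: a particle-size histogram fitted with Gaussian and log-normal model curves via scipy. The sizes array below is synthetic, for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
sizes = rng.lognormal(mean=np.log(17.0), sigma=0.3, size=500)  # synthetic, nm

counts, edges = np.histogram(sizes, bins=25)
centers = 0.5*(edges[:-1] + edges[1:])

def gauss(d, a, mu, s):
    return a*np.exp(-(d - mu)**2/(2*s**2))

def lognorm(d, a, mu, s):
    return a*np.exp(-(np.log(d) - mu)**2/(2*s**2))/d

for name, f, p0 in [("Gaussian", gauss, (counts.max(), 17.0, 5.0)),
                    ("log-normal", lognorm, (counts.max()*17.0, np.log(17.0), 0.3))]:
    p, _ = curve_fit(f, centers, counts, p0=p0)
    resid = np.sum((counts - f(centers, *p))**2)
    print(f"{name}: params = {np.round(p, 2)}, residual = {resid:.0f}")
```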
This is well in alignment with the XRD results. The reduction in size with the increase in precipitating pH is due to the large number of nucleation sites at higher pH [66].

Size Distribution Using Small-Angle X-ray Scattering Measurement

A standard rotational attachment for the liquid capillary sample stage was used for the SAXS measurement of all the AMF samples. The sample was diluted, placed in a very thin capillary (0.07 mm thick quartz capillary), and sealed from the top. The capillary was rotated slowly (330 rpm) for the measurement, which ranged from 0.05 to 3° with a step width of 0.02°, using Cu-Kα radiation. The experimental data obtained from the measurement were modeled to calculate the size distribution of the particles [67,68]. Figure 7 shows the size distribution plots of the samples obtained from the Nanosolver software, assuming a Gaussian or log-normal distribution. The average particle size obtained from the SAXS measurement, with the associated size distribution, is shown in Table 2. In addition, the respective values of the size obtained from TEM are shown for comparison in Table 2 [69,70]. We note that the size of the samples decreases from A to D, and the size distribution for sample A is very narrow but slightly wider for the other samples. The average sizes of the samples are in good agreement with the TEM results. The slight variation from the TEM analysis can be well understood, as in the SAXS measurement a relatively large amount of the sample is under test compared to the TEM measurement. The SAXS results are highly reproducible and reliable [71,72]. The SAXS data modeling using Nanosolver requires various input data, such as the density of the material, the interparticle distance, and the matrix in which the system is dissolved. SAXS is also an immensely powerful and reliable tool for understanding the shape of the sample [64,73]. Samples A, B, and D are best fitted with a Gaussian distribution, which indicates that these samples are almost spherical. However, sample C has a left-skewed log-normal distribution, which reflects that its particles are slightly elongated and differ from the spherical shape [69,70,72]. SAXS provides a more reliable approximation of the size distribution; however, TEM is the better-suited method for probing the morphology.

DC Magnetic Measurement of the Double-Surfactant-Coated Fe3O4-Based Aqueous Magnetic Fluid Samples Using a Vibrating Sample Magnetometer

The room temperature magnetic measurement of the AMF samples was performed in liquid mode using a leak-proof homemade Perspex container, with a dc magnetic field in the range ±2 T. This sample holder isolates the sample from the atmosphere so that the MF does not interact with the surroundings. The samples were scanned with a magnetic field in the range ±2 T, which is the usual magnetization field range of soft magnetic materials. The M-H loops of the AMF samples are shown in Figure 8, which depicts the SPM behavior of the AMFs, since the coercivity ("CE", H_c) of all the samples is negligible. The saturation magnetization ("SM", M_s) of the samples increases with the increase in size, as can be seen from Table 3 and Figure 8. The SM of sample A is 54.5 emu/g, which reduces consistently to 40.1 emu/g for sample D, as shown in Figure 9a. The decrease in the SM of the AMF samples with decreasing size results from the presence of a large number of spins occupying the surface of the MNPs [46,67].
The influence of the dead layer becomes more prominent as the size reduces, which also plays a part in the reduction of the SM [74]. Table 3 shows the values of various key magnetic characteristic parameters, such as the SM, CE, and remnant magnetization ("RM", M_r), of the AMF samples. Further, the anisotropy constant (k_anis) of the samples was calculated by the formula k_anis = (H_c × M_s)/0.98 [46,48,50,57]. The k_anis of sample A is the highest, and it decreases systematically with the decrease in the size of the dispersed MNPs. From Table 3, we observe that the CE and RM of the samples decrease with decreasing size, which is well in alignment with the theoretical predictions and the literature. The CE of sample A is 52.44 G, which decreases to 21.92 G for the smaller-size sample D, as shown in Figure 9b. Similarly, the RM decreases from 2.18 to 1.27 emu/g from sample A to D. However, the values of the RM and CE are exceedingly small, which clearly indicates that the particles are SPM in nature [4,46,75]. Further, the M-H loops of the samples were fitted using the Langevin function [4,15,16,46,75],

M(H) = M_s [coth ξ − 1/ξ], ξ = µH/(k_B T),

where µ is the magnetic moment of a particle [16]. Assuming a uniform MNP distribution, the average interparticle distance (r) can be written as r = (1/n)^(1/3), where the volume fraction is ∅ = n(1/6)πD³ and n is the particle number density. The best-fitted curves for the samples, together with the experimental data, are depicted in Figure 10, where the symbols represent the experimental raw curve and the solid lines represent the fitted curves [46]. The various deduced and fitting parameters obtained from the Langevin fitting are listed in Table 3. The hydrodynamic diameter (D_h) of the samples calculated from the Langevin function fitting is well in accordance with the particle size calculated by TEM and SAXS and with the crystallite size obtained by XRD. The domain magnetization (M_d) of the AMFs increases, whereas the relative permeability decreases, with a decrease in the size of the dispersed MNPs. Moreover, we can conclude from the M-H loops that all the AMF samples are SPM in nature and that the SM, CE, RM, and anisotropy constant increase with an increase in the size of the MNPs. Further, the Langevin fitting of the raw curves confirmed that the domain magnetization decreases with an increase in the size of the MNPs in the AMF samples.

Room Temperature Spin Dynamics Investigation of the Fe3O4-MNP-Dispersed AMFs

The room temperature FMR spectra of the AMF samples were recorded in the liquid state, and the dependence of the microwave spin resonance properties on the particle size was investigated. The FMR spectra of the samples were fitted with suitable distribution functions, and the resonance parameters were calculated from the best-fitted curves. The broad FMR spectra of the different AMF samples (Figure 11), with varying MNP sizes, confirm the ferromagnetic characteristics of the materials. The experimental data and the best-fitted FMR spectra are shown in Figure 11. The FMR spectra clearly show that all the samples are asymmetrical, with the lower tail of the broad spectrum wider than the upper tail. This implies that the spins take a longer time to relax back to their ground state [47]. Moreover, the broadness of the lower tail increases with a decrease in particle size, which can be related to the higher surface-spin fraction inducing anisotropy in the material system [57]. The important spin dynamics parameters, such as the resonance field (H_R), peak-to-peak width (∆H_PP), Landé g-tensor, spin concentration (N_s), and spin-spin relaxation time (T_S), were obtained for all the AMF samples and are depicted in Table 4.
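Before turning to the FMR lineshapes, a minimal sketch of the Langevin fit described above, using scipy's curve_fit on an M-H curve. The data here are synthetic (generated from the model plus noise), so the script only illustrates the procedure, not the paper's results.

```python
import numpy as np
from scipy.optimize import curve_fit

kB, T = 1.380649e-23, 300.0        # Boltzmann constant (J/K), temperature (K)

def langevin(B, Ms, mu):
    # M(B) = Ms*(coth(xi) - 1/xi), xi = mu*B/(kB*T)
    xi = mu*B/(kB*T)
    return Ms*(1.0/np.tanh(xi) - 1.0/xi)

B = np.linspace(0.001, 2.0, 400)   # applied field, T (avoid B = 0)
Ms_true, mu_true = 50.0, 2.5e-19   # emu/g and A m^2, illustrative values
M = langevin(B, Ms_true, mu_true) + np.random.default_rng(1).normal(0, 0.3, B.size)

(Ms, mu), _ = curve_fit(langevin, B, M, p0=(40.0, 1.0e-19))
print(f"fitted Ms = {Ms:.1f} emu/g, mu = {mu:.2e} A m^2")
# With a domain magnetization Md, the fitted moment gives a particle diameter
# via mu = Md*(pi/6)*D**3, i.e. D = (6*mu/(pi*Md))**(1/3).
```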
The single broad spectrum of each sample, as evident from Figure 11, demonstrates the ferromagnetic behavior of the sample; no isolated Fe3+ or Fe2+ ions exist. The single FMR line is the contribution from the sample and suggests that both Fe3+ and Fe2+ are in a single phase. Further, the FMR spectra of all the samples are asymmetrical, but their H_R and ∆H_PP show systematic variations. From Table 4, we observe that the resonance field shifts to higher magnetic field values with the decrease in particle size (Figure 12a). This can be well understood, as the larger-particle samples magnetize and demagnetize at a lower field due to strong dipolar interactions, which allow the magnetic spins to align in the field direction more easily [19]. The shift of the resonance peak to a higher value with a decrease in the size of the MNPs can be explained by the magneto-crystalline anisotropy and the thermal energy. At smaller crystallite sizes, smaller volumes lead to small magneto-crystalline anisotropy energies, expressed as E_A = K·V, where K is the anisotropy constant and V is the volume of the SPM particle [19,46,76,77]. As the anisotropy energy becomes smaller than the thermal energy E_T = k_B T, the magnetic spins start to move freely and randomly in all directions. This leads to an increase in the magnetic field value required to achieve resonance [78,79]. Further, the g-value is a key parameter of an MNP sample, which quantifies the orbital contribution to the magnetic moment and is expressed as g = hν/(µ_B H_R), where h is Planck's constant, ν is the microwave frequency, and µ_B is the Bohr magneton [80]. The g-value increased monotonically from 2.109 to 2.236 with the increase in the particle size of the Fe3O4 MNPs, as shown in Figure 12a [46,57]. The g-value is inversely proportional to the resonance field and therefore shows the opposite trend for the magnetic system. Furthermore, a slight incremental change is observed in the peak-to-peak linewidth with an increase in crystallite size. The variation in the linewidth is negligible with respect to the magnitude of the linewidth itself [19]. Due to regional variations in the local magnetic field experienced by the magnetic spins, a gradation of the magnetic field is experienced by the spins in a ferromagnetic system [48][49][50]. Due to this gradient, the spins achieve resonance at different values of the magnetic field, giving a broadness to the FMR signal. As seen in Table 4, the peak-to-peak linewidth increases almost linearly with particle size. The shape of the FMR signal was analyzed thoroughly through different fitting functions. A Gaussian profile provided the best fit for the smaller particle sizes; however, a transition from Gaussian to pseudo-Voigt was observed in the profile as the particle size increased to 10 nm. Pseudo-Voigt is an approximation of a Voigt profile using a linear combination of Gaussian and Lorentzian profiles instead of their convolution [19,39,46,50]. The Gaussian shape of the profiles is indicative of dipolar broadening of the linewidth. Due to exchange interactions, the dipolar broadening in the center is reduced, leading to a narrowing of the center of the peak and resulting in a pseudo-Voigt shape [19]. Further, the spin-spin relaxation time (T_s) was calculated from the spectral half-width (∆H_1/2), obtained by fitting the peak shapes, and the g-value; the results are listed in Table 4.
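The resonance condition quoted above is straightforward to evaluate. The snippet below computes g = hν/(µ_B H_R) for the X-band frequency used in this work; the resonance-field value is a hypothetical placeholder, not a Table 4 entry.

```python
# g-factor from the FMR resonance condition g = h*nu/(mu_B*H_R)
h = 6.62607015e-34        # Planck constant, J s
mu_B = 9.2740100783e-24   # Bohr magneton, J/T
nu = 9.85e9               # microwave frequency, Hz (X-band, as in the text)
H_R = 0.33                # resonance field, T (illustrative value, ~3300 G)

g = h*nu/(mu_B*H_R)
print(f"g = {g:.3f}")     # ~2.13, within the 2.109-2.236 range reported
```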
T_s is typically expressed through the g-value and the spectral half-width ∆H_1/2 (Equation (3)) [81]. As the size of the MNPs increases, the spin-spin relaxation time shows a sharp decline. A similar decay in the transverse relaxation rate has also been reported by Noginova et al. [82]. The spin-spin relaxation time varies with the inverse second power of the SPM particle size (T_S ∝ r^-2, where r is the particle radius), as suggested by Yin et al. [83]. The relaxation time for sample A is 37.2779 × 10^-12 s; it increases to 42.8301 × 10^-12 s for sample D, as shown in Table 4 and Figure 12b. This is also evident from the FMR spectral shape and the broad lower tail. With the decrease in particle size, the thermal energy of the particles becomes dominant, which imparts randomization to the system, contributing to the larger relaxation time [19,39,50]. Further, the spin concentration was calculated using Equation (4) [48], in which ∆H_1/2 is the full width at half-maximum of the absorption peak. It is seen from Table 4 that, initially, the spin concentration increases with a decrease in the particle size; however, after a critical point, it starts decreasing (Figure 12b). This unusual behavior of the spin concentration can be understood from Equation (4), as N_s depends on two key parameters, the g-value and ∆H_1/2, which behave differently with the particle size. Thus, the spin concentration of the AMF samples is maximized at the critical point, and the maximum spin concentration of 1568.5 × 10^22 is observed for sample B. This is evidence that the optimization of the size of the MNPs dispersed in an AMF plays a key role in governing its performance.

Conclusions

A double-surfactant-coated AMF was synthesized, and a detailed investigation of its stability, structure, dc magnetic properties, and spin dynamics was carried out. AMFs have recently shown a tremendous potential due to their diversified applications in scientific and technological fields. However, the stability of AMFs remains the key bottleneck in applications, as MNPs do not disperse well in water. Herein, we have successfully prepared a highly stable AMF by coating two layers of surfactant on Fe3O4 MNPs using oleic acid and sodium oleate. The polar head of the oleic acid attaches to the MNPs, whereas the polar head of the sodium oleate attaches to the water, and their nonpolar tails cross-link with each other, allowing a stable dispersion of the MNPs in water. The stability of the AMF samples was investigated by a novel approach, using a magneto-rheometer and probing the fluid samples in rotational mode in gradient magnetic fields. All the AMF samples showed a high stability against a gradient magnetic field over a prolonged period. The structural investigation of the AMF samples was carried out by XRD analysis, and the obtained XRD patterns of each sample were probed by the Rietveld refinement method. The crystallite size of the samples, obtained by the W-H method, varied from 10.6 to 5.8 nm for samples A to D. The structural refinement confirmed the spinel ferrite phase (Fd-3m geometry, space group 227). A good fit between the observed and calculated data was obtained, with χ² values in the range 1.23 to 1.43. Further, the morphology of the Fe3O4 MNPs was probed by TEM analysis, which confirmed the spherical shape of all the samples. The average size obtained for the samples decreased consistently with an increase in precipitation pH (17.2 to 7.5 nm for samples A to D, respectively).
Moreover, the size distribution was further confirmed by SAXS measurements: the average particle size obtained for sample A was 16.6 nm, decreasing to 7.2 nm for sample D. The SAXS curves also gave an indication of particle shape: samples A, B, and D showed a spherical nature, whereas sample C showed a slightly elongated particle shape. The average sizes obtained by TEM and SAXS were in close agreement for all samples and matched the trend observed for the crystallite size.

Further, the dc magnetic measurements confirmed the SPM nature of all the samples, and various parameters such as SM, CE, RM, and k_anis were extracted from the M-H plots. The SM of the AMF samples increased systematically with size. The raw M-H curves were fitted with the Langevin function, from which parameters such as the domain magnetization, permittivity, and hydrodynamic diameter were obtained. The spin dynamics of the AMF samples was probed by FMR measurements, which confirmed the ferromagnetic nature of all the samples: the resonance field increased as the sample size decreased, whereas the linewidth decreased. The relaxation time was 37.2779 ps for sample A, increasing to 42.8301 ps for sample D, as is evident from the broad tail in the spectra of the smaller-size samples. The detailed characterization confirms the improved stability and magnetic characteristics of the AMFs, making them more suitable for biomedical applications such as targeted drug delivery, magnetic resonance imaging (MRI), and hyperthermia treatment.

Author Contributions: S.P. and R.V.: synthesis, conceptualization of idea, overall manuscript writing and analysis of results; P.K.: VSM measurement and analysis; A.S.: Rietveld refinement; K.J. and S.S.: FMR analysis; P.S.: TEM measurement and analysis; X.W.: writing-review and editing, resources, and supervision; R.P.P.: overall supervision and conceptualization of idea. All authors have read and agreed to the published version of the manuscript.
Measurement of b hadron production fractions in 7 TeV pp collisions

Measurements of $b$ hadron production ratios in proton-proton collisions at a centre-of-mass energy of 7 TeV with an integrated luminosity of 3 pb$^{-1}$ are presented. We study the ratios of strange $B$ meson to light $B$ meson production $f_s/(f_u+f_d)$ and $\Lambda_b^0$ baryon to light $B$ meson production $f_{\Lambda_b}/(f_u+f_d)$ as a function of the charmed hadron-muon pair transverse momentum $p_T$ and the $b$ hadron pseudorapidity $\eta$, for $p_T$ between 0 and 14 GeV and $\eta$ between 2 and 5. We find that $f_s/(f_u+f_d)$ is consistent with being independent of $p_T$ and $\eta$, and we determine $f_s/(f_u+f_d) = 0.134 \pm 0.004^{+0.011}_{-0.010}$, where the first error is statistical and the second systematic. The corresponding ratio $f_{\Lambda_b}/(f_u+f_d)$ is found to be dependent upon the transverse momentum of the charmed hadron-muon pair, $f_{\Lambda_b}/(f_u+f_d) = (0.404 \pm 0.017\,(stat) \pm 0.027\,(syst) \pm 0.105\,(Br)) \times [1 - (0.031 \pm 0.004\,(stat) \pm 0.003\,(syst)) \times p_T(\mathrm{GeV})]$, where Br reflects an absolute scale uncertainty due to the poorly known branching fraction $\mathrm{Br}(\Lambda_c^+ \to pK^-\pi^+)$. We extract the ratio of strange $B$ meson to light neutral $B$ meson production $f_s/f_d$ by averaging the result reported here with two previous measurements derived from the relative abundances of $\bar{B}_s^0 \to D_s^+ \pi^-$ to $\bar{B}^0 \to D^+ K^-$ and $\bar{B}^0 \to D^+ \pi^-$. We obtain $f_s/f_d = 0.267^{+0.021}_{-0.020}$.

Introduction

The fragmentation process, in which a primary b quark forms either a bq̄ meson or a bq1q2 baryon, cannot be reliably predicted because it is driven by strong dynamics in the non-perturbative regime. Thus the fragmentation functions for the various hadron species must be determined experimentally. The LHCb experiment at the LHC explores a unique kinematic region: it detects b hadrons produced in a cone centred around the beam axis, covering a pseudorapidity range of approximately 2 to 5, where η is defined in terms of the polar angle θ with respect to the beam direction as η = −ln(tan θ/2). Knowledge of the fragmentation functions allows us to relate theoretical predictions of the bb̄ quark production cross-section, derived from perturbative QCD, to the observed hadrons. In addition, since many absolute branching fractions of B− and B0 decays have been well measured at e+e− colliders [1], it suffices to measure the ratio of B0s production to either B− or B0 production to perform precise absolute B0s branching fraction measurements.
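For reference, the pseudorapidity definition quoted above in a few lines of Python, together with the polar angles that bound the 2 < η < 5 acceptance:

```python
import math

def pseudorapidity(theta: float) -> float:
    """eta = -ln(tan(theta/2)) for polar angle theta in radians."""
    return -math.log(math.tan(theta / 2.0))

# Inverting the definition, the LHCb acceptance corresponds roughly to
print(f"eta=2 -> theta = {2 * math.degrees(math.atan(math.exp(-2))):.1f} deg")
print(f"eta=5 -> theta = {2 * math.degrees(math.atan(math.exp(-5))):.2f} deg")
```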
In this paper we describe measurements of two ratios of fragmentation functions: f_s/(f_u+f_d) and f_Λb/(f_u+f_d). The inclusion of charge conjugate modes is implied throughout the paper, and we measure the average production ratios. Previous measurements of these fractions have been made at LEP [2] and at CDF [3]. More recently, LHCb measured the ratio f_s/f_d using the decay modes B0 → D+π−, B0 → D+K−, and B0s → D+sπ− [4] and theoretical input from QCD factorization [5,6]. Here we measure this ratio using semileptonic decays, without any significant model dependence. A commonly adopted assumption is that the fractions of the different species should be the same in high energy b jets originating from Z0 decays and in high p_T b jets originating from pp̄ collisions at the Tevatron or pp collisions at the LHC, based on the notion that hadronization is a non-perturbative process occurring at the scale of Λ_QCD. Nonetheless, the results from different experiments are discrepant in the case of the b baryon fraction [2].

The measurements reported in this paper are performed using the LHCb detector [7], a forward spectrometer designed to study production and decays of hadrons containing b or c quarks. LHCb includes a vertex detector (VELO), providing precise locations of primary pp interaction vertices and of the detached vertices of long-lived hadrons. The momenta of charged particles are determined using information from the VELO together with the rest of the tracking system, composed of a large-area silicon tracker located before a 4 Tm dipole magnet, and a combination of silicon strip and straw drift chamber detectors located after the magnet. Two Ring Imaging Cherenkov (RICH) detectors are used for charged hadron identification. Photon detection and electron identification are implemented through an electromagnetic calorimeter, followed by a hadron calorimeter. A system of alternating layers of iron and chambers provides muon identification. The two calorimeters and the muon system provide the energy and momentum information for a first-level (L0) hardware trigger. An additional trigger level is software based, and its algorithms are tuned to the experiment's operating conditions.

In this analysis we use a data sample of 3 pb−1 collected from 7 TeV centre-of-mass energy pp collisions at the LHC during 2010. The trigger selects events where a single muon is detected, without biasing the impact parameter distribution of the decay products of the b hadron or any kinematic variable relevant to semileptonic decays. These features reduce the systematic uncertainty in the efficiency.

Our goal is to measure two specific production ratios: that of B0s relative to the sum of B− and B0, and that of Λ0b relative to the sum of B− and B0. The sum of the B0, B−, B0s, and Λ0b fractions does not equal one, as there is other b production, namely a very small rate of B−c mesons, bottomonia, and other b baryons that do not decay strongly into Λ0b, such as the Ξb. We measure relative fractions by studying the final states D0Xµ−ν, D+Xµ−ν, D+sXµ−ν, Λ+cXµ−ν, D0K+Xµ−ν, and D0pXµ−ν. We do not attempt to separate f_u and f_d, but measure the sum of the D0 and D+ channels and correct for cross-feeds from B0s and Λ0b decays. We assume near equality of the semileptonic decay widths of all b hadrons, as discussed below.

Charmed hadrons are reconstructed through the modes listed in Table 1, together with their branching fractions. We use all D+s → K−K+π+ decays, rather than a combination of the resonant φπ+ and K*0K+ contributions, because these D+s decays cannot be cleanly isolated due to interference effects between the different amplitudes. Each of these charmed hadron plus muon final states can be populated by a combination of initial b hadron states. B0 mesons decay semileptonically into a mixture of D0 and D+ mesons, while B− mesons decay predominantly into D0 mesons with a smaller admixture of D+ mesons; both include a tiny component of D+sK meson pairs. B0s mesons decay predominantly into D+s mesons, but can also decay into D0K+ and D+K0S mesons; this is expected if the B0s decays into a D**s state heavy enough to decay into a DK pair. In this paper we measure this contribution using D0K+Xµ−ν events. Finally, Λ0b baryons decay mostly into Λ+c final states.
We determine other contributions using D0pXµ−ν events. We ignore the contributions of b → u decays, which comprise approximately 1% of semileptonic b hadron decays [8] and constitute a roughly equal portion of each b species in any case.

The corrected yields for B0 or B− decaying into D0Xµ−ν or D+Xµ−ν, n_corr, can be expressed in terms of the measured yields, n, as given in Eqs. 1 and 2, where we use the shorthand n(Dµ) ≡ n(DXµ−ν). An analogous abbreviation is used for the total trigger and detection efficiencies ε. For example, the ratio ε(B0s → D0)/ε(B0s → D0K+) gives the relative efficiency to reconstruct a charged K in semi-muonic B0s decays producing a D0 meson; an analogous expression holds for the D+ final state. Both the D0Xµ−ν and the D+Xµ−ν final states contain small components of cross-feed from B0s decays to D0K+Xµ−ν and to D+K0Xµ−ν. These components are accounted for by the two decays B0s → D+s1Xµ−ν and B0s → D*+s2Xµ−ν, as reported in a recent LHCb publication [9]. The third terms in Eqs. 1 and 2 are due to a similar small cross-feed from Λ0b decays.

The number of B0s decays resulting in a D+sXµ−ν final state is given by Eq. 3, where the last term subtracts the yields of D+sKXµ−ν final states originating from B0 or B− semileptonic decays, and N(B0 + B−) indicates the total number of B0 and B− produced. We derive this correction using the branching fraction B(B → D(*)+sKµν) = (6.1 ± 1.2) × 10−4 [10], measured by the BaBar experiment. In addition, B0s decays semileptonically into DKXµ−ν, and thus we need to add to Eq. 3 the term given in Eq. 4, in which, using isospin symmetry, a factor of 2 accounts for B0s → DK0Xµ−ν semileptonic decays.

The ratio f_s/(f_u + f_d) is then given by Eq. 5, where B0s → Dµ represents B0s semileptonic decays to a final charmed hadron, given by the sum of the contributions shown in Eqs. 3 and 4, and the symbols τ_Bi indicate the b hadron lifetimes, which are all well measured [1]. We use the average B0s lifetime, 1.472 ± 0.025 ps [1]. This equation assumes equality of the semileptonic widths of all the b meson species. This is a reliable assumption, as corrections in HQET arise only at order 1/m_b², and the SU(3)-breaking correction is quite small, of the order of 1% [11,12,13].

The Λ0b corrected yield is derived in an analogous manner: we determine it through Eq. 6, where D represents a generic charmed hadron, and extract the Λ0b fraction using Eq. 7. Again, we assume near equality of the semileptonic widths of the different b hadrons, but we apply a small adjustment ξ = 4 ± 2% to account for the chromomagnetic correction, which affects b-flavoured mesons but not b baryons [11,12,13]. The uncertainty is evaluated with very conservative assumptions for all the parameters of the heavy quark expansion.

Analysis method

To isolate a sample of b-flavoured hadrons with low backgrounds, we match charmed hadron candidates with tracks identified as muons. Right-sign (RS) combinations have the charge of the muon equal to the charge of the kaon in D0, D+, or Λ+c decays, or opposite to the charge of the pion in D+s decays, while wrong-sign (WS) combinations have the opposite charge correlations. WS events are useful to estimate certain backgrounds. This analysis follows our previous investigation of b → D0Xµ−ν [14]. We consider events where a well-identified muon with momentum greater than 3 GeV and transverse momentum greater than 1.2 GeV is found.
Charmed hadron candidates are formed from hadrons with momenta greater than 2 GeV and transverse momenta greater than 0.3 GeV, and we require that the average transverse momentum of the hadrons forming the candidate be greater than 0.7 GeV. Kaons, pions, and protons are identified using the RICH system. The impact parameter (IP), defined as the minimum distance of approach of the track with respect to the primary vertex, is used to select tracks coming from charm decays: we require that the χ², formed using the hypothesis that each track's IP is equal to 0, be greater than 9. Moreover, the selected tracks must be consistent with coming from a common vertex: the χ² per degree of freedom of the vertex fit must be smaller than 6. To ensure that the charm vertex is distinct from the primary pp interaction vertex, we require that the χ², based on the hypothesis that the decay flight distance from the primary vertex is zero, be greater than 100.

Charmed hadrons and muons are combined to form a partially reconstructed b hadron by requiring that they come from a common vertex, and that the cosine of the angle between the momentum of the charmed hadron-muon pair and the line from the Dµ vertex to the primary vertex be greater than 0.999. As the charmed hadron is a decay product of the b hadron, we require that the difference between the z component of the decay vertex of the charmed hadron candidate and that of the beauty candidate be greater than 0. We explicitly require that the η of the b hadron candidate be between 2 and 5; we measure η using the line connecting the primary event vertex and the vertex formed by the D and the µ. Finally, the invariant mass of the charmed hadron-muon system must be between 3 and 5 GeV for D0µ− and D+µ− candidates, between 3.1 and 5.1 GeV for D+sµ− candidates, and between 3.3 and 5.3 GeV for Λ+cµ− candidates. We perform our analysis in a grid of 3 η and 5 p_T bins, covering the range 2 < η < 5 and p_T ≤ 14 GeV.

The b hadron signal is separated from the various sources of background by studying the two-dimensional distribution of the charmed hadron candidate invariant mass and ln(IP/mm). This approach allows us to determine directly the background coming from false combinations under the charmed hadron signal mass peak. The study of the ln(IP/mm) distribution allows the separation of prompt charm candidates from charmed hadron daughters of b hadrons [14]; we refer to these samples as Prompt and Dfb, respectively.
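A minimal sketch of the candidate selection just described; the cut values follow the text, while the candidate record and its field names are hypothetical:

```python
# Hypothetical candidate record with fields: muon_p, muon_pt (GeV), hadrons
# (each with p, pt, ip_chi2), vertex_chi2_ndf, flight_chi2, dira (cosine of
# the pointing angle), dz_charm_minus_b (mm), eta.
def passes_selection(c) -> bool:
    return (
        c.muon_p > 3.0 and c.muon_pt > 1.2                 # muon cuts
        and all(h.p > 2.0 and h.pt > 0.3 for h in c.hadrons)
        and sum(h.pt for h in c.hadrons) / len(c.hadrons) > 0.7
        and all(h.ip_chi2 > 9 for h in c.hadrons)          # detached tracks
        and c.vertex_chi2_ndf < 6                          # common charm vertex
        and c.flight_chi2 > 100                            # charm vertex detached
        and c.dira > 0.999                                 # points back to PV
        and c.dz_charm_minus_b > 0                         # charm downstream of b
        and 2.0 < c.eta < 5.0                              # LHCb acceptance
    )
```

The charm-mode-dependent Dµ invariant-mass windows quoted above would be applied on top of these common requirements.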
Signal extraction

We describe the method used to extract the charmed hadron-µ signal using the D0Xµ−ν final state as an example; the same procedure is applied to the other final states. We perform unbinned extended maximum likelihood fits to the two-dimensional distributions in the K−π+ invariant mass, over a region extending ±80 MeV from the D0 mass peak, and ln(IP/mm). The parameters of the IP distribution of the Prompt sample are found by examining directly produced charm [14], whereas a shape derived from simulation is used for the Dfb component. An example fit for D0µ−νX, using the whole p_T and η range, is shown in Fig. 1. The fitted yields for RS are 27666±187 Dfb, 695±43 Prompt, and 1492±30 false D0 combinations, the latter inferred from the fitted yields in the sideband mass regions spanning the intervals between 35 and 75 MeV from the signal peak on both sides. For WS we find 362±39 Dfb, 187±18 Prompt, and 1134±19 false D0 combinations. The RS yield includes a background of around 0.5% from incorrectly identified µ candidates; as this paper focuses on ratios of yields, we do not subtract this component.

Figure 2 shows the corresponding fits for the D+Xµ−ν final state. The fitted yields are 9257±110 Dfb events, 362±34 Prompt, and 1150±22 false D+ combinations. For WS we find 77±22 Dfb, 139±14 Prompt, and 307±10 false D+ combinations.

The analysis for the D+sXµ−ν mode proceeds in the same manner. Here, however, we are concerned about the reflection from Λ+c → pK−π+ decays in which the proton is misidentified as a kaon, since we do not impose an explicit proton veto: such a veto would lose 30% of the signal and also introduce a systematic error. We therefore choose to model this particular background separately, adding a probability density function (PDF) determined from simulation; its level is allowed to float within the estimated error on the size of the background. A small peak near 2010 MeV is visible in Fig. 3; we explicitly include this term in the fit, assuming the shape to be the same as for the D+s signal, and obtain 4±1 events in the RS signal region and no events in the WS signal region. The measured yields in the RS sample are 2192±64 Dfb, 63±16 Prompt, 985±145 false D+s background, and 387±132 Λ+c reflection background. The corresponding yields in the WS sample are 13±19, 20±7, 499±16, and 3±3, respectively. Figure 3 shows the fit results.

The last final state considered is Λ+cXµ−ν. Figure 4 shows the data and fit components for the ln(IP/mm) and pK−π+ invariant mass combinations for events with 2 < η < 5.

The Λ0b may also decay into D0pXµ−ν. We search for these decays by requiring the presence of a track well identified as a proton and detached from any primary vertex. The resulting D0p invariant mass distribution is shown in Fig. 5, together with the combinations that cannot arise from Λ0b decay, namely the D0p̄ combinations. There is a clear excess of RS over WS combinations, especially near threshold. Fits to Fig. 5(a) give 154±13 RS events and 55±8 WS events. In this case, we use the WS yield for background subtraction, scaled by the RS/WS background ratio determined with a MC simulation including B− + B0 → D0Xµ−ν and generic bb̄ events; this ratio is found to be 1.4±0.2. Thus the net signal is 76±17±11, where the last error reflects the uncertainty in the ratio between the RS and WS background.

Background studies

Apart from false D combinations, separated from the signal by the two-dimensional fit described above, there are also physical background sources that affect the RS Dfb samples and originate from bb̄ events; these are studied with a MC simulation. In the meson case, the background mainly comes from b → DDX with one of the D mesons decaying semi-muonically, and from combinations of tracks from pp → bb̄X events in which one b hadron decays into a D meson and the other decays semi-muonically. The background fractions are (1.9±0.3)% for D0Xµ−ν, (2.5±0.6)% for D+Xµ−ν, and (5.1±1.7)% for D+sXµ−ν. The main background component for Λ0b semileptonic decays is Λ0b decaying into D−sΛ+c, with the D−s decaying semi-muonically. Overall, we find a very small background rate of (1.0±0.2)%, where the error reflects only the statistical uncertainty of the simulation. We correct the candidate b hadron yields in the signal region with the predicted background fractions. A conservative 3% systematic uncertainty in the background subtraction is assigned to reflect modelling uncertainties.
Monte Carlo simulation and efficiency determination

In order to estimate the detection efficiency, we need some knowledge of the different final states contributing to the Cabibbo-favoured semileptonic width, as some of the selection criteria affect final states with distinct masses and quantum numbers differently. Although much is known about B0 and B− semileptonic decays, information on the corresponding B0s and Λ0b semileptonic decays is rather sparse. In particular, the hadronic composition of the final states in B0s decays is poorly known [9], and only a study from CDF provides some constraints on the branching ratios of the final states dominant in the corresponding Λ0b decays [15]. In the case of the B0s → D+s semileptonic decays, we assume that the final states are D+s, D*+s, D*s0(2317)+, Ds1(2460)+, and Ds1(2536)+; states above the DK threshold decay predominantly into D(*)K final states. We model the decays to the final states D+sµ−ν and D*+sµ−ν with HQET form factors, using normalization coefficients derived from studies of the corresponding B0 and B− semileptonic decays [1], while we use the ISGW2 form factor model [16] to describe final states including higher-mass resonances.

In order to determine the ratio between the different hadron species in the final state, we use the measured kinematic distributions of the quasi-exclusive process B0s → D+sµ−νX. To reconstruct the squared invariant mass of the µ−ν pair (q²), we exploit the measured direction of the b hadron momentum which, together with energy and momentum conservation, and assuming no missing particles other than the neutrino, allows the reconstruction of the ν 4-vector up to a two-fold ambiguity, due to its unknown orientation with respect to the B flight path in its rest frame. We choose the solution corresponding to the lowest b hadron momentum. This method works well when there are no missing particles, or when the missing particles are soft, as is the case when the charmed system is a D* meson. We then perform a two-dimensional fit to the q² versus m(µD+s) distribution. Figure 6 shows stacked histograms of the D+s, D*+s, and D**+s components. In the fit we constrain the ratio of the D*+sµ−ν and D+sµ−ν branching fractions to be equal to the average D*µ−ν/Dµ−ν ratio in semileptonic B0 and B− decays (2.42±0.10) [1]. This constraint reduces the uncertainty on one D** fraction. We have also performed fits removing this assumption, and the variation between the different components is used to assess the modelling systematic uncertainty.
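The two-fold ambiguity mentioned above can be made explicit: imposing the b hadron mass, with the momentum direction fixed by the measured flight line and a single massless neutrino as the only missing particle, yields a quadratic equation for the b momentum magnitude. The following is a sketch under those stated assumptions (the b hadron mass value must be supplied), not the experiment's actual reconstruction code:

```python
import numpy as np

def b_momentum_solutions(p_vis, e_vis, u, m_b):
    """Solve E_B^2 - p_B^2 = m_B^2 for the b momentum magnitude p_B along
    the measured flight direction u, with one massless missing neutrino.
    Returns the real, positive solutions (generically two: the ambiguity
    described in the text; the analysis keeps the smaller one).
    p_vis: visible 3-momentum (GeV); e_vis: visible energy (GeV)."""
    u = np.asarray(u, dtype=float)
    u /= np.linalg.norm(u)
    a = float(np.dot(p_vis, u))                  # longitudinal visible momentum
    t2 = float(np.dot(p_vis, p_vis)) - a * a     # transverse part squared
    k = m_b**2 - e_vis**2 - a * a - t2
    # From 2*E_vis*E_nu = k + 2*a*p_B, squared with E_nu^2 = (p_B-a)^2 + t2:
    coeffs = [4.0 * (e_vis**2 - a * a),
              -(8.0 * e_vis**2 * a + 4.0 * a * k),
              4.0 * e_vis**2 * (a * a + t2) - k * k]
    roots = np.roots(coeffs)
    # Note: spurious roots with negative neutrino energy are not filtered here.
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
```

Keeping `min(...)` of the returned list implements the lowest-b-momentum choice quoted in the text.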
A similar procedure is applied to the Λ+cµ− sample, and the results are shown in Fig. 7. In this case we consider three final states, Λ+cµ−ν, Λc(2595)+µ−ν, and Λc(2625)+µ−ν, with form factors from the model of Ref. [17]; we constrain the two highest-mass hadrons to be produced in the ratio predicted by this theory.

The measured pion, kaon, and proton identification efficiencies are determined using K0S, D*+, and Λ0 calibration samples, in which p, K, and π are selected without utilizing the particle identification criteria. The efficiency is obtained by fitting simultaneously the invariant mass distributions of events either passing or failing the identification requirements. Values are obtained in bins of particle η and p_T, and these efficiency matrices are applied to the MC simulation. Alternatively, the particle identification efficiency can be determined from the measured efficiencies by combining them with weights proportional to the fractions of particle types with a given η and p_T in each charmed hadron-µ pair η and p_T bin. The overall efficiencies obtained with these two methods are consistent. An example of the resulting particle identification efficiency as a function of the η and p_T of the Λ+cµ− pair is shown in Fig. 8.

Figure 8: Measured proton identification efficiency as a function of the Λ+cµ− p_T for 2 < η < 3, 3 < η < 4, and 4 < η < 5, respectively, for the selection criteria used in the Λ+c → pK−π+ reconstruction.

As the functional forms of the fragmentation ratios in terms of p_T and η are not known, we determine the efficiencies for the final states studied as a function of p_T and η within the LHCb acceptance; Figure 9 shows the results.

3 Evaluation of the ratios f_s/(f_u + f_d) and f_Λb/(f_u + f_d)

Perturbative QCD calculations lead us to expect the ratios f_s/(f_u+f_d) and f_Λb/(f_u+f_d) to be independent of η, while a possible dependence upon the b hadron transverse momentum p_T is not ruled out, especially for ratios involving baryon species [18]. Thus we determine these fractions in different p_T and η bins. For simplicity, we use the transverse momentum of the charmed hadron-µ pair as the p_T variable, and do not try to unfold the b hadron transverse momentum.

In order to determine the corrected yields entering the ratio f_s/(f_u+f_d), we determine yields in a matrix of three η and five p_T bins and divide them by the corresponding efficiencies. We then use Eq. 5, with the measured lifetime ratio (τ_B− + τ_B0)/2τ_B0s = 1.07 ± 0.02 [1], to derive the ratio f_s/(f_u + f_d) in two η bins. The measured ratio is constant over the whole η-p_T domain. Figure 10 shows the f_s/(f_u + f_d) fractions in bins of p_T in two η intervals. By fitting a single constant to all the data, we obtain f_s/(f_u + f_d) = 0.134 ± 0.004 +0.011/−0.010 in the interval 2 < η < 5, where the first error is statistical and the second is systematic.
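The structure of Eq. 5 as described in the text (equal semileptonic widths, so yields scale with f_i·τ_i) and the single-constant fit across bins can be sketched as follows; the cross-feed corrections of Eqs. 1-4 are assumed already applied, and the per-bin numbers are illustrative stand-ins, not the measured ones:

```python
import numpy as np

def fs_over_fufd(n_bs_corr, n_d0mu_corr, n_dpmu_corr, lifetime_ratio=1.07):
    """Structure of Eq. 5 as described in the text: with equal semileptonic
    widths, f_s/(f_u+f_d) = n(Bs)/(n(D0 mu) + n(D+ mu)) multiplied by the
    lifetime ratio (tau_B- + tau_B0)/(2 tau_Bs) = 1.07."""
    return n_bs_corr / (n_d0mu_corr + n_dpmu_corr) * lifetime_ratio

def fit_constant(values, errors):
    """Least-squares fit of a single constant to independent measurements:
    the inverse-variance weighted mean and its uncertainty."""
    w = 1.0 / np.asarray(errors) ** 2
    return np.sum(w * np.asarray(values)) / np.sum(w), np.sqrt(1.0 / np.sum(w))

# Hypothetical per-bin corrected yields and resulting ratios
ratio_bin1 = fs_over_fufd(120.0, 2500.0, 900.0)
mean, err = fit_constant([0.130, 0.139, 0.132, 0.136],
                         [0.010, 0.011, 0.009, 0.012])
print(f"f_s/(f_u+f_d) = {mean:.3f} +- {err:.3f}")
```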
The systematic error includes the several sources listed in Table 2; the dominant one is the experimental uncertainty on B(D+s → K+K−π+), 4.9%.

Table 2: Relative systematic uncertainties on f_s/(f_u + f_d) (%).
  B(D+s → K+K−π+)                    4.9
  B0s semileptonic decay modelling   3.0
  Backgrounds                        2.0
  Tracking efficiency                2.0
  Lifetime ratio                     1.8
  PID efficiency                     (see text)

Adding in the contributions of the D0 and D+ branching fractions, we have a systematic error of 5.5% due to the charmed hadron branching fractions. The B0s semileptonic modelling error is derived by changing the ratio between the different hadron species in the final state, obtained by removing the SU(3) symmetry constraint, and by changing the shapes of the less well known D** states. The tracking efficiency errors mostly cancel in the ratio, since we are dealing only with combinations of three or four tracks. The lifetime ratio error reflects the present experimental accuracy [1]. We correct both for the bin-dependent PID efficiency, obtained with the procedure detailed above and accounting for the statistical error of the calibration sample, and for the overall PID efficiency uncertainty due to the sensitivity to the event multiplicity. The latter is derived by taking the kaon identification efficiency obtained with the method described above, without correcting for the different track multiplicities in the calibration and signal samples; this is compared with the results of the same procedure performed correcting for the ratio of multiplicities in the two samples. The error due to B0s → D0K+Xµ−ν is obtained by changing the RS/WS background ratio predicted by the simulation within its errors, and evaluating the corresponding change in f_s/(f_u + f_d). Finally, the error due to (B−, B0) → D+sKXµ−ν reflects the uncertainty in the measured branching fraction.

Isospin symmetry implies the equality of f_d and f_u, which allows us to compare f+/f0 ≡ n_corr(D+µ)/n_corr(D0µ) with its expected value; it is not possible to decouple the two ratios for an independent determination of f_u/f_d. Using all the known semileptonic branching fractions [1], we estimate the expected relative fraction of the D+ and D0 modes from B+/0 decays to be f+/f0 = 0.375 ± 0.023, where the error includes a 6% theoretical uncertainty associated with the extrapolation of present experimental data needed to account for the inclusive b → cµ−ν semileptonic rate. Our corrected yields correspond to f+/f0 = 0.373 ± 0.006 (stat) ± 0.007 (eff) ± 0.014, for a total uncertainty of 4.5%. The last error accounts for uncertainties in the B background modelling, in the D0K+µ−ν yield, in the D0pµ−ν yield, in the D0 and D+ branching fractions, and in the tracking efficiency; the other systematic errors mostly cancel in the ratio. Our measurement of f+/f0 is not seen to be dependent upon p_T or η, as shown in Fig. 11, and is in agreement with expectation.

We follow the same procedure to derive the fraction f_Λb/(f_u + f_d), using Eq. 7 and the ratio (τ_B− + τ_B0)/(2τ_Λ0b) = 1.14 ± 0.03 [1]. In this case, we observe a p_T dependence in the two η intervals. Figure 12 shows the data fitted to a straight line, and Table 3 summarizes the fit results; the errors in Table 3 include only the bin-dependent terms discussed above. A corresponding fit to a constant shows that a p_T-independent f_Λb/(f_u + f_d) is excluded at the level of four standard deviations.

Table 4 summarizes all the sources of absolute-scale systematic uncertainty, which include several components whose definitions mirror closely the corresponding uncertainties of the f_s/(f_u + f_d) determination and are assessed with the same procedures. The term Λb → D0pXµ−ν accounts for the uncertainty in the raw D0pXµ−ν yield, and is evaluated by changing the RS/WS background ratio (1.4±0.2) within the quoted uncertainty. In addition, an uncertainty of 2% is associated with the derivation of the semileptonic branching fraction ratios from the corresponding lifetimes, labelled Γ_sl in Table 4; this uncertainty is derived by assigning conservative errors to the parameters affecting the chromomagnetic operator, which influences the B meson total decay widths but not the Λ0b. By far the largest term is the poorly known B(Λ+c → pK−π+), and it is therefore quoted separately.

In view of the observed p_T dependence, we present our results as

f_Λb/(f_u + f_d) = (0.404 ± 0.017 ± 0.027 ± 0.105) × [1 − (0.031 ± 0.004 ± 0.003) × p_T(GeV)],

where the scale factor uncertainties are statistical, systematic, and the error on B(Λ+c → pK−π+), respectively, and the systematic uncertainties included are only those associated with the bin-dependent MC and particle identification errors. The correlation coefficient between the scale factor and the slope parameter in the fit with the full error matrix is −0.63.
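The straight-line behaviour just described can be reproduced with a two-parameter least-squares fit of the form scale × (1 − slope × p_T); the per-bin points below are illustrative stand-ins, not the measured values:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(pt, scale, slope):
    """f_Lambda_b/(f_u+f_d) parameterization used in the text."""
    return scale * (1.0 - slope * pt)

# Hypothetical per-bin measurements (illustrative numbers only)
pt = np.array([1.5, 4.0, 6.5, 9.0, 12.0])
y = np.array([0.39, 0.36, 0.33, 0.29, 0.26])
yerr = np.full_like(y, 0.02)

popt, pcov = curve_fit(model, pt, y, sigma=yerr, absolute_sigma=True)
corr = pcov[0, 1] / np.sqrt(pcov[0, 0] * pcov[1, 1])
print(f"scale = {popt[0]:.3f}, slope = {popt[1]:.4f} /GeV, corr = {corr:.2f}")
```

The covariance matrix returned by the fit is what carries the scale-slope correlation quoted in the text.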
Previous measurements of this fraction have been made at LEP and the Tevatron [3]. LEP obtains 0.110±0.019 [2]; this fraction has been calculated by combining direct rate measurements with the time-integrated mixing probability averaged over an unbiased sample of semileptonic b hadron decays. CDF measures f_Λb/(f_u + f_d) = 0.281 ± 0.012 (stat) +0.011/−0.056 (syst) +0.128/−0.086, where the last error reflects the uncertainty in B(Λ+c → pK−π+). It has been suggested [3] that the difference between the Tevatron and LEP results is explained by the different kinematics of the two experiments: the average p_T of the Λ+cµ− system is 10 GeV at CDF, while the b jets at LEP have p ≈ 40 GeV. LHCb probes an even lower b p_T range, while retaining some sensitivity in the CDF kinematic region. Our data are consistent with CDF in the kinematic region covered by both experiments, and indicate that the baryon fraction is higher in the lower p_T region.

Combined result for the production fraction f_s/f_d from LHCb

From the study of b hadron semileptonic decays reported above, and assuming isospin symmetry, namely f_u = f_d, we obtain f_s/f_d = 2 f_s/(f_u + f_d) = 0.268 ± 0.008 +0.022/−0.020, where the first error is statistical and the second is systematic. Measurements of this quantity have also been made by LHCb using hadronic B meson decays [4]. The ratio determined from the relative abundances of B̄0s → D+sπ− to B̄0 → D+K− is 0.250 ± 0.024 (stat) ± 0.017 (syst) ± 0.017 (theor), while a second value is obtained from the relative abundances of B̄0s → D+sπ− to B̄0 → D+π−. In each case the first uncertainty is statistical, the second systematic, and the third theoretical. The theoretical uncertainties in both cases include non-factorizable SU(3)-breaking effects and form factor ratio uncertainties; the second ratio is affected by an additional source, accounting for the W-exchange diagram in the B0 → D+π− decay.

In order to average these results, we consider the correlations between the different sources of systematic uncertainty, as shown in Table 5. We then utilise a generator of pseudoexperiments, where each independent source of uncertainty is generated as a random variable with a Gaussian distribution, except for the component B0s → D0K+µ−νµX, which is modeled with a bifurcated Gaussian with standard deviations equal to the positive and negative errors shown in Table 5. This approach to the averaging procedure is motivated by the goal of a proper treatment of asymmetric errors [21]. We assume that the theoretical errors have a Gaussian distribution. The RMS value of f_s/f_d is then evaluated as a function of the weights α1 and α2 assigned to the measurements. We derive the most probable value of f_s/f_d by determining the coefficients α_i at which the RMS is minimal, and the total errors by computing the boundaries defining the 68% CL, scanning from top to bottom along the axes α1 and α2 in the range between 0 and 1. The optimal weights determined with this procedure are α1 = 0.73 and α2 = 0.14, corresponding to the most probable value f_s/f_d = 0.267 +0.021/−0.020. The most probable value differs slightly from a simple weighted average of the three measurements because of the asymmetry of the error distribution in the semileptonic determination. By switching off different components we can assess the contribution of each source of uncertainty; Table 6 summarizes the results.
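A minimal sketch of the pseudoexperiment machinery just described: the bifurcated Gaussian and the RMS-minimizing scan over the weights α1, α2 follow the text, while the input central values and widths are illustrative stand-ins and the Table 5 correlations are ignored for simplicity:

```python
import numpy as np

rng = np.random.default_rng(7)

def bifurcated_gauss(center, sig_plus, sig_minus, n):
    """Asymmetric Gaussian: width sig_plus above the center, sig_minus below."""
    up = rng.random(n) < sig_plus / (sig_plus + sig_minus)
    mag = np.abs(rng.normal(0.0, 1.0, n))
    return center + np.where(up, sig_plus * mag, -sig_minus * mag)

n = 50_000
# Toy stand-ins for the three f_s/f_d inputs (all numbers are assumptions)
t_sl = bifurcated_gauss(0.268, 0.023, 0.021, n)   # semileptonic, asymmetric
t_dk = rng.normal(0.250, 0.034, n)                # hadronic, D+K- mode
t_dpi = rng.normal(0.256, 0.035, n)               # hadronic, D+pi- mode

def rms(a1, a2):
    comb = a1 * t_sl + a2 * t_dk + (1.0 - a1 - a2) * t_dpi
    return comb.std()

grid = np.linspace(0.0, 1.0, 41)
best = min((rms(a1, a2), a1, a2) for a1 in grid for a2 in grid if a1 + a2 <= 1)
print(f"min RMS = {best[0]:.4f} at alpha1 = {best[1]:.3f}, alpha2 = {best[2]:.3f}")
```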
Conclusions

We measure the ratio of the B0s production fraction to the sum of those for B− and B0 mesons, f_s/(f_u + f_d) = 0.134 ± 0.004 +0.011/−0.010, and find it consistent with being independent of η and p_T. Our results are more precise than, and in agreement with, previous measurements in different kinematic regions. We combine the LHCb measurements of the ratio of B0s to B0 production fractions obtained using b hadron semileptonic decays with two different ratios of branching fractions of exclusive hadronic decays to derive f_s/f_d = 0.267 +0.021/−0.020. The ratio of the Λ0b baryon production fraction to the sum of those for B− and B0 mesons varies with the p_T of the charmed hadron-muon pair. Assuming a linear dependence up to p_T = 14 GeV, we obtain

f_Λb/(f_u + f_d) = (0.404 ± 0.017 ± 0.027 ± 0.105) × [1 − (0.031 ± 0.004 ± 0.003) × p_T(GeV)],   (12)

where the errors on the absolute scale are statistical, systematic, and the error on B(Λ+c → pK−π+), respectively. No η dependence is found.
Mechanisms of Visible Light Photocatalysis in N-Doped Anatase TiO2 with Oxygen Vacancies from GGA+U Calculations

We have systematically studied the photocatalytic mechanisms of nitrogen doping in anatase TiO2 using first-principles calculations based on density functional theory, employing a Hubbard U (8.47 eV) on-site correction. The impurity formation energy, charge density, and electronic structure properties of TiO2 supercells containing substitutional nitrogen, interstitial nitrogen, or oxygen vacancies were evaluated to clarify the mechanisms under visible light. According to the formation energies, a substitutional N atom forms more readily than an interstitial N atom, and the formation of an oxygen vacancy in N-doped TiO2 is easier than in pure TiO2. The calculated results show that a significant band gap narrowing may only occur with heavy nitrogen doping; with light nitrogen doping, the photocatalysis under visible light relies on isolated N impurity states. The existence of oxygen vacancies in N-doped TiO2 can improve the photocatalysis in visible light because of band gap narrowing and n-type donor states. These findings provide a reasonable explanation of the mechanisms of visible light photocatalysis in N-doped TiO2.

Introduction

Photocatalytic mechanisms are initiated by the creation of an electron-hole pair through the excitation of an electron from the valence band to the conduction band by absorption of electromagnetic radiation. Since the pioneering work of Fujishima and Honda in 1972 [1], titanium dioxide (TiO2) has attracted attention as a photocatalytic material due to its nontoxicity, low cost, and chemical stability. However, anatase TiO2 has a wide band gap (3.2 eV) and only absorbs ultraviolet (UV) light at wavelengths shorter than 387 nm. UV light accounts for a small fraction (~5%) of the solar energy impinging on the surface of the Earth; that is, the solar energy utilization is low. Limitations due to the wide band gap make TiO2 ineffective for many potential applications. Because visible light (400-700 nm) accounts for a large fraction (~45%), the modification of TiO2 to extend its optical absorption into the visible light region has become a thoroughly researched topic.

Since Asahi et al. [2] reported in 2001 that nitrogen doping enhances photocatalytic activity under visible light, TiO2 has been doped with a variety of elements, such as N [3-10], C [11], B [12], P [13], Fe [14,15], and La [16], to study photocatalytic activity under visible light, with nitrogen doping proving particularly effective. Although nitrogen doping is considered more effective and has been widely studied, the photocatalytic mechanisms under visible light are still debated. Asahi et al. [2] indicated that the N 2p states hybridize with O 2p states, resulting in a narrowing of the band gap, with the material becoming photoactive in the visible light region. However, other studies have supported the notion that N-doping does not cause a narrowing of the band gap of TiO2 [3,4]. For example, Irie et al. [4] considered that an isolated N 2p band above the valence band was responsible for the response to visible light. Oxygen vacancies induced by N-doping, contributing to the absorption as well as to the photoactivity in the visible light region, were also reported [17-19]. Valentin et al.
[20] employed theoretical calculations to show that nitrogen doping leads to a substantial reduction of the energy cost to form oxygen vacancies in TiO2, suggesting that nitrogen doping is likely to be accompanied by the formation of oxygen vacancies. Rumaiz et al. [21] indicated that the defect level related to oxygen vacancies lies approximately 1 eV above the valence band maximum (VBM) and explains the knee formation in the optical absorbance spectra of N-doped TiO2. Lee et al. [22] suggested that interstitial N-doping states with oxygen deficiency are more effective for photocatalysis than substitutional N-doping states with oxygen deficiency. Zhao and Liu [23] employed density functional theory (DFT) and adopted a 2 × 1 × 1 supercell model to investigate the modification mechanism of N-doped anatase TiO2. Their calculated band gap of pure anatase TiO2 was 2.68 eV, smaller than the experimental value of 3.2 eV due to the limitation of conventional DFT. The results showed that, except for the oxygen-deficient model, the band gaps of the N-doped TiO2 models, including the substitutional and interstitial N-doping models, were narrowed by between 0.03 and 0.23 eV, and the fundamental absorption edges extended to the visible light region.

As mentioned above, three views exist on the mechanisms of N-doping in TiO2 in the visible light region: (1) band gap narrowing, (2) impurity energy levels, and (3) oxygen vacancies. Although theoretical calculations have investigated these mechanisms, most of them greatly underestimated the band gap of TiO2 due to the adoption of the conventional DFT method. In this paper, first-principles calculations employing the generalized gradient approximation (GGA) plus the Hubbard U approach are performed to systematically investigate the formation energy, charge density, and electronic structure of N-doped anatase TiO2 with oxygen vacancies, in order to comprehend the mechanisms of N-doped anatase TiO2. The calculated results are analyzed and compared with the previous literature.

Calculation Models and Methods

A 2 × 2 × 1 supercell of pure anatase TiO2 containing 16 Ti atoms and 32 O atoms was considered in this study, as shown in Figure 1(a). To investigate the effect of N-doping and oxygen vacancies on the electronic structure of TiO2, five defect types were modeled, as shown in Figures 1(b)-1(f). The substitutional N-doping supercell (Figure 1(b)) was labeled N_s and constructed by substituting one oxygen atom with one nitrogen atom (2.1 at.%); the interstitial N-doping supercell (Figure 1(c)) was labeled N_i, with one N atom embedded into the interspace. In the oxygen vacancy systems, one O atom was removed; these models were labeled O_v (Figure 1(d)), N_sO_v (Figure 1(e), with one O atom substituted by an N atom), and N_iO_v (Figure 1(f), with one N atom embedded into the interspace).

First-principles calculations were performed using the CASTEP module [24] in Materials Studio 5.0, developed by Accelrys Software Inc. Electron-ion interactions were modeled using ultrasoft pseudopotentials in the Vanderbilt form [25]. The valence atomic configurations were 2s²2p⁴ for O, 2s²2p³ for N, and 3s²3p⁶3d²4s² for Ti. The wave functions of the valence electrons were expanded in a plane wave basis set with a cutoff energy of 400 eV. The Monkhorst-Pack scheme [26] k-point grid sampling was set at 4 × 4 × 3 (less than 0.04 Å⁻¹) in the supercells. The convergence threshold for self-consistent iterations was set at 5 × 10⁻⁶ eV. In the optimization process, the energy change, maximum force, maximum stress, and maximum displacement tolerances were set at 9 × 10⁻⁵ eV/atom, 0.09 eV/Å, 0.09 GPa, and 0.009 Å, respectively.

The spin-polarized GGA+U approach introduces an intra-atomic electron-electron interaction as an on-site correction to describe systems with localized d and f electrons, and is capable of producing a more realistic band gap. Determination of an appropriate effective Hubbard parameter U_eff is necessary in GGA+U calculations to interpret the intra-atomic electron correlation correctly. As shown in Figure 2, for anatase TiO2 the band gap widens as U_eff is increased, the widening being effective for U_eff between 2 and 8 eV. Here, the effective on-site Coulomb interaction was set at U_eff = 8.47 eV for Ti 3d in the GGA+U approach, and the calculated band gap of pure anatase is 3.21 eV, close to the experimental value.

Formation Energy. To examine the relative stability of N-doped TiO2 for the various defective models, the defect formation energies (E_f) were calculated according to the following formula:

E_f = E_tot(defect) − E_tot(pure) − n·μ_N + m·μ_O,

where E_tot(defect) and E_tot(pure) are the total energies of the defective models and of pure TiO2; μ_N and μ_O represent the chemical potentials of the N and O atoms; and n and m are the numbers of doped nitrogen and removed oxygen atoms (n = 1, m = 1 for the N_s model; n = 1, m = 0 for N_i; n = 0, m = 1 for O_v; n = 1, m = 2 for N_sO_v; and n = 1, m = 1 for N_iO_v). The formation energy depends on the growth conditions, which can be Ti-rich or O-rich. For TiO2, μ_Ti and μ_O satisfy the relationship μ_Ti + 2μ_O = μ_TiO2. Under the O-rich growth condition, μ_O is determined by the total energy of an O2 molecule (μ_O = E_O2/2) and μ_Ti is determined by μ_Ti = μ_TiO2 − 2μ_O. Under the Ti-rich growth condition, μ_Ti is the energy of one Ti atom in bulk Ti and μ_O is determined by μ_O = (μ_TiO2 − μ_Ti)/2.

It should be noted that the formation energy of a defective system depends on the selected U value; the U value of 8.47 eV was kept fixed to qualitatively examine the relative stability of N-doped TiO2 for the various defective models in this study. Table 1 summarizes the calculated formation energies for the different defective models in TiO2; a smaller value indicates a more stable defective system. E_f(N_s) is smaller under the Ti-rich condition than under the O-rich condition, indicating that the incorporation of N into TiO2 at an O site is thermodynamically favorable. Under the Ti-rich condition, E_f(N_s) is smaller than E_f(N_i), indicating that substitutional N atoms are more likely to form. This result is opposite to Zhao's calculated result (interstitial N atoms favored) [23], but is consistent with Lee's calculated result [22]. It can also be seen that the formation energy of an oxygen vacancy in pure TiO2 is 1.0 eV (E_f(O_v) = 1.0 eV), whereas in N-doped TiO2 it is 0.26 eV; thus the formation of oxygen vacancies is easier in the presence of N in TiO2 than in pure TiO2, in agreement with the previous literature [21,23].

Electronic Density. Table 1 also summarizes the Mulliken populations, and Figure 3 shows the charge distribution of each model. Table 1 shows that the Mulliken population of N_s (−0.62) is larger (less negative) than that of O (−0.737), which leads to an unfilled 2p orbital in the N atom, because the electronegativity of N is lower than that of O, as shown in Figure 3(a). For the N_i model, the Mulliken population of N_i is −0.27, indicating that the N_i atom obtains fewer electrons from the Ti atoms. Figure 3(c) shows that, in the O_v model, the electrons remain on a Ti atom near the oxygen vacancy. Comparing the N_s and N_sO_v models, the Mulliken population of the N atom decreases from −0.62 to −0.75, indicating that more electrons transfer from the oxygen vacancy to the N atom, filling the N 2p orbital as shown in Figure 3(d). For the N_iO_v model, the electron clouds are shared between N_i and the adjacent O atoms, which increases the population values of the N_i and O atoms.

Electronic Structure. Figure 4 shows the total density of states (TDOS) and the projected density of states (PDOS) of the various defective models, calculated to investigate the electronic properties of N-doped anatase TiO2. Other related calculated values, summarized in Table 2, include the band gap (E_g), the width of the valence band (W_VB), and the maximum absorption wavelength (λ_max = 1240/E_g). The band gap of pure anatase TiO2 is 3.21 eV, as shown in Figure 4(a), consistent with the experimental value of 3.2 eV. The valence band of TiO2 has a large bandwidth (W_VB) of approximately 4.63 eV, showing a strong delocalization among the O 2p electrons. For the N_s model (Figure 4(b)), E_g is 3.17 eV and W_VB is 4.68 eV, implying that E_g and W_VB are not changed appreciably compared with pure TiO2; one isolated N 2p state is localized above the top of the VB of the host TiO2, consistent with the calculations of Long [27]. An electron in the VB can be excited to the localized impurity states in the band gap and subsequently to the CB through absorption of visible light. For the N_i model (Figure 4(c)), E_g is narrowed to 3.08 eV and W_VB is broadened to 4.86 eV; the N 2p states contribute primarily to three energy ranges: −7.5 to −6.1, −0.6 to 0, and 1.8 to 2 eV. For the O_v model (Figure 4(d)), E_g (3.18 eV) is not changed appreciably and W_VB is broadened to 5.12 eV. The two extra electrons resulting from removing an oxygen atom lead to an occupied state near the Fermi level, localized at 1.0 eV above the top of the VB, consistent with the experimental results [21]. Compared with the N_s and N_i models, the band gaps of the N_sO_v and N_iO_v models (2.95 and 3.06 eV) are smaller, extending the optical absorption into the visible region (420 and 405 nm), and Figures 4(e)-4(f) show that the VB becomes broader. Therefore, the existence of oxygen vacancies in N-doped anatase TiO2 improves the photocatalysis under visible light. Our previous results for heavy nitrogen doping in N-doped anatase TiO2, shown in Figures 4(g)-4(h) [8], indicate that a significant narrowing of the band gap of N-doped TiO2 occurs only for heavy nitrogen doping (≥8.3 at.%), in agreement with Asahi's calculated results [2]. As a result, the electron transition energy from the valence band to the conduction band decreases by approximately 0.88 eV because of the heavy nitrogen doping and thus may induce a red shift (extending to 532 nm) of the edge of the optical absorption range. Both the narrowing of the band gap and the increased mobility of photo-generated carriers at heavy nitrogen doping concentrations improve the photocatalytic activity under visible light, as illustrated by recent experimental results [6,7].

Conclusions

Using the GGA+U method, this study calculated the impurity formation energies, charge densities, and electronic properties of the N-doped anatase TiO2 with oxygen vacancies system, to investigate the photocatalytic mechanisms of N-doped TiO2 under visible light. An effective Hubbard U of 8.47 eV was adopted to reproduce the experimental band gap correctly. The calculated formation energies show that a substitutional N atom is easier to form than an interstitial one, and that oxygen vacancies form more easily in the presence of nitrogen in TiO2 than in pure TiO2. The calculated results indicate that the mechanisms of photocatalytic activity under visible light can be summarized as follows: (1) significant band gap narrowing may occur with heavy nitrogen doping; (2) with light nitrogen doping, the mechanism is the result of isolated N impurity states; and (3) the existence of oxygen vacancies in N-doped TiO2 improves the photocatalysis in visible light because of band gap narrowing and n-type donor states.

Figure 2: Relationship between the effective Hubbard parameter (U_eff) and the band gap (E_g) of anatase TiO2.

Figure 3: Charge distribution of the various defective models in N-doped TiO2: (a) N_s, (b) N_i, (c) O_v, (d) N_sO_v, and (e) N_iO_v.

Table 1: Formation energy and average Mulliken population of the various defective models of N-doped TiO2.

Table 2: Band gap, maximum absorption wavelength, and width of the valence band for the various defective models of N-doped TiO2.
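To make the formation-energy bookkeeping of the Methods section concrete, here is a minimal sketch; all total energies and chemical potentials below are hypothetical placeholders (the paper does not tabulate them), and only the formula E_f = E_tot(defect) − E_tot(pure) − n·μ_N + m·μ_O, the Ti-rich/O-rich relations, the (n, m) assignments, and λ_max = 1240/E_g are taken from the text:

```python
E_PURE = -39000.00       # eV, pristine 48-atom supercell (assumed value)
MU_TIO2 = -2437.50       # eV per TiO2 formula unit (assumed value)
E_O2 = -870.00           # eV, total energy of an O2 molecule (assumed value)
MU_TI_BULK = -1602.00    # eV, one Ti atom in bulk Ti (assumed value)
MU_N = -270.00           # eV, nitrogen chemical potential (assumed value)

def chemical_potentials(condition: str):
    """Return (mu_O, mu_Ti) under the constraint mu_Ti + 2*mu_O = mu_TiO2."""
    if condition == "O-rich":
        mu_o = E_O2 / 2.0
        return mu_o, MU_TIO2 - 2.0 * mu_o
    return (MU_TIO2 - MU_TI_BULK) / 2.0, MU_TI_BULK   # Ti-rich

def formation_energy(e_defect: float, n: int, m: int, condition: str) -> float:
    """E_f = E_tot(defect) - E_tot(pure) - n*mu_N + m*mu_O."""
    mu_o, _ = chemical_potentials(condition)
    return e_defect - E_PURE - n * MU_N + m * mu_o

def lambda_max_nm(e_gap_ev: float) -> float:
    """Maximum absorption wavelength, lambda_max = 1240 / E_g (nm, eV)."""
    return 1240.0 / e_gap_ev

# (n, m) per defect model, as listed in the text
MODELS = {"N_s": (1, 1), "N_i": (1, 0), "O_v": (0, 1),
          "N_sO_v": (1, 2), "N_iO_v": (1, 1)}

for e_g in (3.21, 2.95, 3.06):
    print(f"E_g = {e_g} eV -> lambda_max = {lambda_max_nm(e_g):.0f} nm")
```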
Evaluation of an O2-Substituted (1-3)-β-D-Glucan, Produced by Pediococcus parvulus 2.6, in ex vivo Models of Crohn's Disease

(1-3)-β-glucans are extracellular polysaccharides synthesized by microorganisms and plants, with therapeutic potential. Among them, the O2-substituted-(1-3)-β-D-glucan synthesized by some lactic acid bacteria (LAB) has a prebiotic effect on probiotic strains, an immunomodulatory effect on monocyte-derived macrophages, and potentiates the ability of the producer strain to adhere to Caco-2 cells differentiated to enterocytes. In this work, the O2-substituted-(1-3)-β-D-glucan polymers produced by the GTF glycosyltransferase in the natural host Pediococcus parvulus 2.6 and in the recombinant strain Lactococcus lactis NZ9000[pNGTF] were tested. Their immunomodulatory activity was investigated in an ex vivo model using human biopsies from patients affected by Crohn's disease (CD). Both polymers had an anti-inflammatory effect, including a reduction of Interleukin 8 both at the level of its gene expression and of its secreted levels. The overall data indicate that O2-substituted-(1-3)-β-D-glucans have a potential role in ameliorating inflammation via modulation of the gut immune system cells.

INTRODUCTION

Exopolysaccharides (EPS) are large, linear or branched, extracellular carbohydrate polymers produced by algae, plants, and bacteria. They are commonly used as food additives due to their rheological properties (bio-thickening, gelling, or viscosifying agents) in the food industry. The demand for new polymers in the food industry is positioning EPS from lactic acid bacteria (LAB) as the new generation of food thickeners. Due to their Generally Regarded As Safe (GRAS) status, they are suitable for the production of fermented and/or functional products. The EPS synthesized by LAB are divided into homopolysaccharides, if they contain only one monosaccharide type, or heteropolysaccharides, if they contain various monosaccharide types (Werning et al., 2012). These EPS are known to support bacterial growth, as well as to participate in cellular recognition and interaction, surface adhesion, and biofilm formation. Currently, EPS are gaining interest as prebiotics, as modulators of the host immune system (Schmid et al., 2015), and as antiviral agents (Nácher-Vázquez et al., 2015). Their immunomodulatory properties depend on their ability to form a suitable tertiary structure. For instance, (1-3)-β-D-glucans (β-glucans), linear or branched at positions O-4 and O-6, are gaining interest as therapeutic agents; various studies have shown their positive influence on the reduction of human serum cholesterol levels and their stimulation of the human immune system (Chan et al., 2009; Jin et al., 2018), as well as their potential anti-carcinogenic properties (Chan et al., 2009; Ali et al., 2015). Recent findings documented that (1-3)-β-glucans are capable of regulating the inflammatory response and of modulating immune system cell types (such as peripheral blood immune system cells, the intestinal epithelium, and mammalian microglia) through their interaction with membrane receptors, including toll-like receptors, Dectin-1, SIGNR1, complement receptor 3 (CR3), LacCer, and Scavenger receptors, which are differentially expressed in these cell types (Li et al., 2019). Their interaction with a receptor leads to downstream events including the activation of immune system cells, both innate (e.g., macrophages, monocytes, or neutrophils) and adaptive (e.g., T cells or B cells).
These interactions may result in differential cytokine production [such as tumor necrosis factor-α (TNF-α) and IL-10], through the modulation of the nuclear factor kappa-light-chain-enhancer of activated B cells (NFκB) transcription factor, belonging to the mitogen-activated protein kinase (MAPK) pathway (Chan et al., 2009; Volman et al., 2010). Moreover, it has been reported that EPS are metabolized in the gut by the glycolytic enzyme pools of the microbiota; the exact nature of this process depends on the molecular weight and biochemical composition of the EPS (Bodera, 2008; Kau et al., 2011; Ballesteros Pomar and Gonzalez Arnaiz, 2018). This hydrolysis is beneficial to both the host and the microbiota itself, because it generates secondary metabolites with the potential to act as probiotic or postbiotic immunomodulators that ultimately promote and/or restore a healthy environment for the microbiota (Iweala and Nagler, 2019). Laminarin, an O6-substituted-(1-3)-β-D-glucan isolated from brown algae, causes a reduction in the expression of pro-inflammatory cytokines, such as IL-6 and IL-1β, in pig intestinal mucosa (Heim et al., 2014; Walsh et al., 2015) and counteracts dysbiosis of the microbiota (Rattigan et al., 2020). Moreover, studies performed with human fecal microbiota and commercial laminarin from Laminaria digitata, or crude polysaccharide-rich extracts from this alga, indicated that the polysaccharide influences mucus and gut microbiota composition, resulting in a potentially beneficial production of short-chain fatty acids such as butyrate (Devillé et al., 2007; Strain et al., 2020).

Inflammatory bowel disease (IBD) has a multifactorial etiology and includes Crohn's disease (CD) and ulcerative colitis (UC; Kamada et al., 2013; Morhardt et al., 2019). These are chronic gastrointestinal disorders in which dysregulation of the immune system is responsible for an immunological imbalance characterized by the production of pro-inflammatory cytokines (such as IL-12) into the gut lumen, alteration of the microbiota, and disruption of the intestinal mucosal barrier (Knights et al., 2013). High inflammation is responsible for abdominal pain, bloody stools, weight loss, diarrhea, etc. For the evaluation of therapeutic treatments, the current chemically induced animal models of IBD, including dextran sulfate sodium treatment, resemble UC symptoms but only partially emulate the inflammatory process that occurs in human CD (Chassaing et al., 2014). However, an ex vivo model using CD mucosal tissue allows the investigation of how to modulate inflammation at the gut mucosa level. Previous studies using this model have established that certain probiotic bacteria are capable of interacting with immunocompetent cells at the mucosal interface, and can thus locally modulate the production of pro-inflammatory cytokines by inflamed tissue (Borruel et al., 2003; Llopis et al., 2009; Hidalgo-Cantabrana et al., 2015). In this context, new therapeutic strategies involving the use of probiotic strains have recently been developed to ameliorate CD patients' symptoms.
Some of these include microorganisms such as Lacticaseibacillus rhamnosus GG, Limosilactobacillus reuteri, Lactobacillus acidophilus, Bifidobacterium infantis, Saccharomyces boulardii, Escherichia coli Nissle 1917, and Clostridium butyricum MIYAIRI 588 (Tsai et al., 2019), which contribute to the reduction of inflammation due to their ability to reduce pathogen adhesion to the intestinal epithelium (blocking their binding sites) and through the production of antibacterial substances or EPS (Orel and Kamhi Trop, 2014; Oka and Sartor, 2020). In addition, probiotic strains modify the release of cytokines in the intestinal epithelium and inhibit, in the immune system cells, the production of the transcription factor NFκB, leading as a consequence to a reduction of intestinal inflammation (Basson et al., 2017; Oka and Sartor, 2020).

In this general context, some probiotic bacteria produce an O2-substituted-(1-3)-β-D-glucan with prebiotic activity that stimulates the growth of probiotic LAB (Russo et al., 2011; Pérez-Ramos et al., 2017). Therefore, the producing bacteria, or their EPS, have potential as adjuvants in the treatment of IBD. Such a role is plausible, taking into account that β-glucans play an important role in the modulation of both the innate immune response, through interaction with dendritic cells (DCs) and macrophages, and the adaptive immune response, increasing the proliferation of T and natural killer cells via cytokine release (Bodera, 2008; Zhang et al., 2015).

This structural peculiarity (the O2 substitution) was the reason why we investigated the probiotic potential of the producing strain P. parvulus 2.6 (Werning et al., 2012; Pérez-Ramos et al., 2017) and the prebiotic potential of its EPS P (Russo et al., 2011; Pérez-Ramos et al., 2017). Also, our previous work supports an anti-inflammatory effect of this O2-substituted-(1-3)-β-D-glucan: comparison of the behavior of P. parvulus 2.6 with its isogenic EPS P-non-producing strain revealed that the presence of the polymer decreased the pro-inflammatory effect exerted by the LAB on human M1 macrophages, indicating a possible activity of this EPS as an immunomodulator in the innate immune response (Fernández de Palencia et al., 2009). Moreover, in an induced inflammation model using the zebrafish transgenic line Tg(mpx:GFP)i114, the polymer inhibited neutrophil recruitment and proliferation in the larvae, confirming once again its potential as an immunomodulator. Furthermore, the metabolic rate of macrophages derived from human monocytes increased upon exposure to either the EPS P or the EPS L synthesized by L. lactis NZ9000[pGTF] (Notararigo et al., 2014). In addition, these EPS activated processes involved in M1 differentiation, migration, and cellular proliferation, and inhibited the AKT and mTOR pathways implicated in the inflammatory response (Notararigo et al., 2014). Therefore, the current work aimed to study the immunomodulatory effect of the O2-substituted-(1-3)-β-D-glucan, vs. the O6-substituted-(1-3)-β-D-glucan (laminarin, isolated from Laminaria digitata), on ileocolonic biopsies of CD patients.

Patients

Samples of intestinal mucosa were obtained during surgery from four patients with CD (one male and three females, age 46.7 ± 19.9, SEM), who underwent ileal resection for stricture unresponsive to conventional medical treatment (Table 1). Biopsies of the intestinal tissue were kept at 4°C from collection until later handling.
None of the patients had been on anti-TNF treatment for at least 2 months before the intervention. The diagnosis of CD had previously been established by clinical, radiological, and endoscopic criteria, and was afterwards confirmed by histological evaluation of the surgical specimen. All patients received the same preparation for colonic surgery, including gut lavage with an electrolyte-polyethylene glycol solution and broad-spectrum antibiotic therapy.

EPS P and EPS L Production and Purification

Exponential cultures of P. parvulus 2.6 (Dueñas-Chasco et al., 1997; Pérez-Ramos et al., 2018) and L. lactis NZ9000[pGTF] (Werning et al., 2008) were used to produce EPS P and EPS L, respectively. The EPS were produced and purified as previously described (Notararigo et al., 2013). Briefly, after removal of the bacterial cells by centrifugation, the EPS present in the culture supernatants were precipitated with three volumes of ethanol. Then, the EPS were further purified by dialysis and, after resuspension in a 0.3 M NaOH solution, fractionated by size exclusion chromatography. Afterwards, the EPS alkaline solution was dialyzed as above. Finally, the EPS was lyophilized and kept at room temperature until use. After the first and second lyophilizations, the purity of the EPS was tested fluorometrically using specific fluorescent staining kits for DNA, RNA, and proteins, as previously reported (Zarour et al., 2018). Solutions of the purified polymers were prepared at 1 mg ml^-1. No contaminants were detected (Table 2).

Organ Culture of Human Colonic Mucosa

Organ culture assays were performed as described (Borruel et al., 2002, 2003; Llopis et al., 2009). Full-thickness ileal wall specimens, including areas with macroscopic lesions, were collected at surgery. After rinsing and washing with sterile saline solution, the specimens were transferred to the laboratory in sterile saline solution at 4°C. The intestinal mucosa was removed from the tissue and cut into pieces of approximately 25-35 mg each, distributing the macroscopic lesion equally among them. Each piece was placed on the insert of a 12-well cell culture plate (Netwell culture system, Costar), with the epithelial surface uppermost. Filters were placed into the wells and incubated with 1,500 μl RPMI 1640 culture medium (CanSera) without antibiotic (37°C, 95% O2, 5% CO2; carbogen). Before usage, the culture medium was filtered through a 0.22 μm membrane, warmed to 37°C, and gassed for 90 min with carbogen. Tissue without lesion (not inflamed) was used as negative control. The tissues obtained from biopsies of CD patients were exposed independently to EPS L, EPS P, or laminarin. A solution of the polymers in culture medium at 100 μg ml^-1 was added by dripping onto the tissues. The cell culture plate was covered with a supported lid, allowing a correct exchange of gases, and subsequently placed in a bath at 37°C, inside a container shielded with a wet cloth. The container was then connected to the carbogen gas outlet, gassed at high pressure for 10 s at 1 h intervals, and incubated for 4 h. Then, the pro-inflammatory (TNF-α and IL-8) and anti-inflammatory (IL-10) cytokine levels released into the media were quantified by ELISA. At the end of the experiment, aliquots of the supernatant were collected from each well and stored at −80°C, while the tissue was kept immersed in 400 μl of the RNA stabilization solution "RNAlater" (Ambion) at 4°C overnight, and then stored at −80°C.
Determination of Cytokine Levels by ELISA

The cytokine protein profiles of tissue culture supernatants in response to EPS treatments were analyzed by OptEIA ELISA (BD Pharmingen) to detect IL-10 and TNF-α, and by DuoSet ELISA (R&D Systems) for IL-8, following the suppliers' instructions. The cytokine concentrations were extrapolated from the regression line generated using the absorbance values of a standard curve, which was produced in the same test with known concentrations of commercial cytokines.

Gene Expression Profiling Under EPS Treatment

In gene expression organ culture assays, CD samples were harvested in the presence of RNAlater (Ambion) and stored at −80°C. RNA extraction from the patients' biopsies was performed using the RNeasy mini kit (Qiagen), according to the supplier's instructions. RNA concentration and integrity were analyzed and determined with the RNA 6000 Nano Chip in a Bioanalyzer 2100 (Agilent Technologies). RNA quantity and integrity were considered acceptable if the 28S/18S ribosomal fragment ratio was over 1.5 and the RNA Integrity Number (RIN) ranged between 9 and 10. Selected gene expression was evaluated by real-time PCR: 1 μg of total RNA was used for reverse transcription to synthesize first-strand cDNA, following the recommended protocol of the High-Capacity cDNA Reverse Transcription kit (Applied Biosystems). Quantitative PCR was then carried out with the TaqMan Gene Expression Assay (Thermo Fisher Scientific), using TaqMan Fast Universal PCR Master Mix (2X; Applied Biosystems). Relative quantification of the gene expression of thymic stromal lymphopoietin (TSLP), IL-12p35, IL-10, and IL-8 was determined using a 7500 Fast Real-Time PCR System (Applied Biosystems). Data were obtained as threshold cycle (Ct) values. Gene expression levels for each individual sample were normalized relative to the PPIA gene as the housekeeping/endogenous gene, which encodes Peptidylprolyl Isomerase A (also called Cyclophilin A). Each condition was run in triplicate.

Transcription Factor Signaling Pathways: NFκB RNA Array

The RT2 Profiler PCR array (SABiosciences-Qiagen) was used to obtain the expression profile of 84 fundamental genes of the NFκB pathway (Supplementary Table S1). The analysis with the array was performed according to the manufacturer's instructions, as follows. The substrate was cDNA synthesized from RNA samples (obtained from patient biopsies) with the RT2 First Strand kit (Qiagen). Quantitative RT-PCR (qPCR) was carried out using the RT2 SYBR Green technology and run in an iQ5 thermal cycler (Bio-Rad). Once the reaction was complete, the Ct values of each gene were exported to a data sheet for further analysis. The analysis of differential gene expression was carried out through the web application developed by the manufacturer for this purpose, based on the ΔΔCt method and using three of the five constitutive genes for the standardization of the data (β-actin, β-2-microglobulin, and the ribosomal protein P0; Seeger et al., 2014).
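Both the TaqMan assays and the RT2 array above rely on the ΔΔCt method for relative quantification. The following is a minimal sketch of that calculation, assuming triplicate Ct values and PPIA as the reference gene; all numbers are hypothetical.

```python
import numpy as np

def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """2^-ddCt relative expression of a target gene in treated vs.
    untreated tissue, normalized to a reference gene (PPIA here)."""
    dct_treated = np.mean(ct_target_treated) - np.mean(ct_ref_treated)
    dct_control = np.mean(ct_target_control) - np.mean(ct_ref_control)
    return 2.0 ** (-(dct_treated - dct_control))

# Hypothetical Ct triplicates; a fold change below 1 means the EPS
# treatment lowered expression relative to the untreated control.
fc = ddct_fold_change([27.1, 27.3, 27.2], [21.0, 21.1, 20.9],
                      [26.0, 26.2, 26.1], [21.1, 21.0, 21.2])
print(f"fold change: {fc:.2f}")  # ~0.44
```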
Bioinformatic Functional Analysis

The functional analysis of the selected genes was performed using the Database for Annotation, Visualization and Integrated Discovery (DAVID 6.7; http://david.abcc.ncifcrf.gov), which integrates information on over 1.5 million genes from more than 65,000 species, drawn from different public sources of gene and protein annotations (Huang et al., 2008). DAVID provides a functional classification of genes by extracting information from various databases. We selected terms from the Gene Ontology (GO; http://geneontology.org/) and from the molecular pathways integrated in the Biological Biochemistry Image Database (BBID, http://bbid.irp.nia.nih.gov/), Biocarta (http://www.biocarta.com/genes/index.asp), and the Kyoto Encyclopedia of Genes and Genomes (KEGG, http://www.genome.jp/kegg/). Gene Ontology terms allow a unified representation of the attributes of genes and their products across species, favoring the functional interpretation of experimental data by dividing the terms into three categories: cellular component (CC), molecular function (MF), and biological process (BP). The use of molecular pathway databases facilitates the production of graphical information on how genes and their products interact.

Statistical Analysis

Statistical analysis was carried out using the Prism 8 (GraphPad) software. The normality of the data was tested by the Kolmogorov-Smirnov or Shapiro-Wilk normality test. For parametric data, a two-tailed paired t-test was applied, while for nonparametric data, the Friedman test for paired data was used. For qPCR, for parametric data, an unpaired two-tailed Student t-test with Welch's correction was performed. Values with p < 0.05 were considered significant.
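As a minimal sketch of the testing procedure just described (normality check, then a paired parametric or nonparametric test), the following assumes four patients and four conditions; all cytokine values are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical IL-8 levels (pg/ml) for four patients under four
# conditions: untreated control, EPS P, EPS L, laminarin.
control   = np.array([820., 640., 910., 730.])
eps_p     = np.array([410., 350., 480., 390.])
eps_l     = np.array([520., 430., 600., 470.])
laminarin = np.array([400., 360., 450., 410.])

# Normality of the paired differences (Shapiro-Wilk); with n = 4 the
# test has little power, which is why the nonparametric branch matters.
print(stats.shapiro(eps_p - control))

# Parametric branch: two-tailed paired t-test, one treatment vs. control.
print(stats.ttest_rel(eps_p, control))

# Nonparametric branch: Friedman test for paired data across all
# conditions, as used in the section above.
print(stats.friedmanchisquare(control, eps_p, eps_l, laminarin))
```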
Effect of O2-Substituted-(1-3)-β-D-Glucans on Cytokine Production in ex vivo Models

The EPS P and EPS L synthesized by P. parvulus 2.6 and L. lactis NZ9000[pGTF], respectively, are identical O2-substituted-(1-3)-β-D-glucans with the same primary structure (Dueñas-Chasco et al., 1997; Werning et al., 2014). However, the polymers have different molecular masses: 9.6 × 10^6 Da for EPS P and 6.6 × 10^6 Da for EPS L (Werning et al., 2014). The molecular mass of polysaccharides plays a role in their biological activity; consequently, there may be differences between the immunomodulatory activities of each EPS, and therefore both polymers were tested. Thus, in the present pilot study, to evaluate EPS P and EPS L (Figure 1A), ex vivo tissue cultures were used as a model of pathological intestinal inflammation (Figure 1B), in which the immune system plays an important role in the development, propagation, and maintenance of inflammation and disease. We have shown that this model is suitable to investigate the effects of LAB treatment on the intestinal immunity axis (Bäuerl et al., 2013). Laminarin was also tested, as a positive control, due to its ability to activate the immune system through its interaction with the Dectin-1 receptor (Xie et al., 2010; Smith et al., 2018). Tissues obtained from biopsies of four CD patients were exposed independently to the polysaccharides (Figure 1B), and subsequently the pro-inflammatory (TNF-α and IL-8) and anti-inflammatory (IL-10) cytokine levels released into the media were quantified (Figure 1C). The results revealed that the concentration of pro-inflammatory IL-8 secreted by the tissues during treatment with any of the three β-glucans tested was significantly lower (p < 0.05) than that released by the untreated control tissue. In addition, the strongest effect was obtained with EPS P or laminarin. These results indicate a role of the O2-substituted-(1-3)-β-D-glucans in the reduction of inflammation associated with the intestinal epithelial mucosa. However, the levels of TNF-α and IL-10 secreted by the tissues were not affected by the treatments. This lack of influence could be due to the cytokines' short lifespan and to the large number of proteases released by the tissue of CD patients (Borruel et al., 2002). These hypotheses presumably did not apply to the results obtained for IL-8, since the levels detected for this cytokine were 40 times higher than those observed for TNF-α and IL-10. Therefore, the alteration of the IL-8 concentration seems to reflect the biological status of the tissue. One of the CD patients, whose ileocolonic biopsy was analyzed, had previously been treated with an antibiotic (Table 2); therefore, the results obtained could be biased by the inclusion of this biopsy in the study. However, quantification of the cytokine levels released by the biopsies of only the three other patients showed the same pattern and behavior (Supplementary Figure S1), validating the assay.

Effect of O2-Substituted-(1-3)-β-D-Glucans on Gene Expression Profiling in an ex vivo Model

To understand the immunomodulatory effect of the bacterial β-glucan on gene expression in the ex vivo model, the levels of transcription of the TSLP, IL-8, IL-12p35, and NFκB coding genes in treated vs. untreated tissues were determined by qPCR (Figure 2). The analysis of gene expression in the treated vs. untreated tissues revealed, in general, a similar pattern of response to exposure to EPS L, EPS P, or laminarin. Thus, treatment with any of the three β-glucans resulted in a decrease of TSLP and IL-8 transcription. However, this effect was only statistically significant (p = 0.0367) for TSLP in tissues treated with EPS P. TSLP is a protein belonging to the cytokine family, which promotes T cell maturation when antigen-presenting cells are activated. In IBD patients, low levels of TSLP are associated with a permeability increase of the gut barrier, or with Th2 cell differentiation, depending on the cell type expressing it (epithelial cells or DC cells; Biancheri et al., 2016; Park et al., 2017). Recently, it has been proposed that TSLP also participates in Treg cell development and the perpetuation of gut homeostasis (Tahaghoghi-Hajghorbani et al., 2019). Our results suggest that EPS L and laminarin exhibit normal values of gene expression, leading to a balanced TSLP function, while EPS P slightly decreases its expression and probably has no effect on TSLP turnover. The EPS P fold change reached 0.74, which means that the decrease in relative expression is not as dramatic as expected in CD patients with active inflammatory reactions (Tahaghoghi-Hajghorbani et al., 2019). Our results also revealed that the NFκB transcript was over-expressed in tissues exposed to EPS L or laminarin (Figure 2). NFκB is a transcription factor that regulates the expression of pro-inflammatory genes, including cytokines, chemokines, and other molecules. NFκB is a heterodimer formed by RelA/NFKB1, which activates the canonical signaling pathway, and RelB/NFKB2, which activates the non-canonical pathway (see details in Figure 3). The first is activated by TNF-α or IL-1, while the latter is activated by cluster of differentiation 40 (CD40), B cell activating factor, and other molecules, but not by TNF-α. The heterodimer activation mechanism is complex; additionally, the presence of cofactor molecules in the nucleus that bind to the transcription factor complex may influence the transcription of genes coding for pro- or anti-inflammatory proteins (Lawrence, 2009).
Its activity is regulated by the endogenous cytoplasmic inhibitors of the IκB kinase (IKK) complex, which ensures the activation/repression of the heterodimer; under specific conditions the heterodimer is able to translocate to the nucleus and induce transcription (Jobin and Sartor, 2000). In IBD, the canonical pathway is activated constitutively, and thus conspicuous amounts of pro-inflammatory mediators are produced. Our results indicate that EPS L and laminarin increased the relative expression of NFκB (RelA subunit) 1.5-fold over that of the control, while EPS P did not have any major effect. To understand the EPS effect on NFκB activation, it would be necessary to study all the genes involved in the NFκB signaling pathway and so clarify the EPS activity. We would expect the observed increase of NFκB expression to result in an increase of pro-inflammatory mediators, like IL-8 or TNF-α; instead, we detected a reduction of the IL-8 transcript (Figure 2) and no significant difference in TNF-α detection in the culture media (Figure 1) or in gene expression (Figure 2). IL-8 is a chemokine produced by macrophages and intestinal epithelial cells (IEC); it is considered a good marker for determining disease activity (Mohammed Vashist et al., 2018). It is also known as neutrophil chemotactic factor, because it induces chemotaxis in target cells such as neutrophils, or other cells, which migrate toward the infection focus (Llopis et al., 2009; Walana et al., 2018). Our results showed that the amount of the IL-8 protein (Figure 1) and the expression of its coding gene (Figure 2) decreased during the treatments, suggesting that neutrophil recruitment also decreased as a consequence of the reduction of the inflammation rate. In line with these results, we have previously detected that EPS P treatment decreased IL-8 and TNF-α expression in a gnotobiotic zebrafish larvae model, indicating its ability to reduce inflammation. Also, when EPS P was tested in an induced inflammation model using the zebrafish transgenic line Tg(mpx:GFP)i114, a significant decrease in neutrophil recruitment to the inflammation focus was observed. Analysis of IL-12p35 gene expression confirmed that the treatments did not augment the pro-inflammatory response in the ex vivo model (Figure 2). The fold change values were close to those of the untreated control; hence, taking into account that IL-12p35 is the specific subunit of the IL-12 cytokine, the EPS do not activate mechanisms involved in the IL-12 signaling pathway. Overall, the results obtained indicate a trend toward reduction of inflammation in CD biopsies treated with any of the three β-glucans. IL-10 is an anti-inflammatory cytokine released in the gut by macrophages and IEC to maintain homeostasis (Latorre et al., 2018; Morhardt et al., 2019). According to our results, IL-10 mRNA levels did not show significant differences in the expression profiling (Figure 2), confirming the results obtained for cytokine production (Figure 1). This agrees with our previous observation that macrophages derived from human monocytes treated independently with either EPS did not show stimulated IL-10 production (Notararigo et al., 2014). Moreover, we observed the same behavior in a gnotobiotic zebrafish larvae model, where EPS P treatment did not result in an increase of IL-10 expression. Taken together, these results support the idea that bacterial β-glucans do not interact with the IL-10R receptor (Shouval et al., 2014).
Effect of EPS L and EPS P on the NFκB Gene Expression Profile of a CD Patient Biopsy

After detecting the IL-8 reduction at the transcriptional and protein levels, as well as the NFκB modulation, we decided to investigate the effect of the β-glucans on the NFκB signaling pathways, carrying out a gene expression array with the RNA from the CD patient that showed the greatest reduction of IL-8 relative expression. Moreover, to have a representative analysis, the results obtained for EPS-treated and untreated samples were normalized taking into account the expression of three housekeeping genes (see details in "Materials and Methods"), and the results are depicted in Figure 4. The profiles in response to treatment with either EPS P or EPS L were, in general, similar (Figures 4B-E). The expression of the genes coding for the cytokines IL1A and IL1B was reduced (fold change: IL1A 0.8, 0.7; IL1B 0.5, 0.8 for EPS L and EPS P treatment, respectively), as were the levels of transcripts encoding colony-stimulating factor 3 (CSF3) and the MyD88 adapter protein. These results point out that both EPS have an immunomodulatory effect on the gut immune system. Receptor Interacting Serine/Threonine Kinase 1 (RIPK1) was upregulated, indicating a possible function in IEC, restoring gut barrier integrity by preventing a dysregulated cell death mechanism. Moreover, the results seemed to support that the EPS selectively activated NFκB heterodimers: EPS P, RelB/NFKB2, and EPS L, RelA/NFKB1 (Figure 4A; Jobin and Sartor, 2000). We selectively studied the NFκB heterodimer complexes formed by RelB/NFKB2 and RelA/NFKB1 because NFκB1 and NFκB2 act as inhibitors, blocking transcription factor translocation to the nucleus until an external stimulus activates the proteolysis that converts them into p50 and p52, respectively. The overexpression of both inhibitors, shown in Figure 4D, could be the key to the immunomodulatory effect of the bacterial β-glucan. According to these results, EPS P seemed to exert its impact mainly on the MyD88-independent signaling pathway, as shown by its influence on RelB, Interleukin 1 Receptor Associated Kinase 1 (IRAK1), RIPK1, Inhibitor of Nuclear Factor Kappa B Kinase Subunit Epsilon (IKBKE), Signal Transducer and Activator of Transcription 1 (STAT1), and CD40 (Figures 4C,D). On the other hand, EPS L could modulate both the canonical and non-canonical pathways (Figure 4A): the first, MyD88-dependent, by activating Rel, Toll Like Receptor 6 (TLR6; data not shown), and RIPK1, and repressing IL1A and IL1B gene expression; the latter, MyD88-independent, by activating IKBKE and STAT1 (Figure 4C). In line with these findings, our previous results demonstrated that EPS P downregulated the relative expression of three genes involved in inflammation, MyD88 among them, in a zebrafish larvae model, pointing out the potential of the O2-substituted-(1-3)-β-D-glucan as an immunomodulator. Although EPS P and EPS L could have different effects on the transduction pathways activated by NFκB, this analysis does not fully clarify which mechanisms they may be modulating. Both have been shown to reduce the expression of pro-inflammatory cytokines, such as IL-1A, IL-1B, TNF, and IL-8 (Figure 4B), and these results have been validated in different models (Notararigo et al., 2014; Pérez-Ramos et al., 2018).
The overall results were quite complex, considering that many of the genes studied are involved in the signaling of different receptors and are present in the majority of the cell types isolated in the CD biopsy. For this reason, it was necessary to use bioinformatics tools to group the genes whose functions may be related, and so improve the understanding of the mechanisms underlying the stimulus provided by the O2-substituted-(1-3)-β-D-glucan, as described below.

Functional Analysis With DAVID Bioinformatic Resources

In order to interpret the results of the differential response caused by each EPS on the tissue of the CD patient, a bioinformatic clustering with DAVID was carried out. The aim of this analysis was to arrange the genes and derive their biological meaning, by association with identical or related functions and/or pathways. We restricted the gene analysis to fold changes ≤0.5 and/or ≥1.5, to establish strong interactions between the selected genes. The gene clustering revealed a modulation of 14 or 19 genes upon treatment with EPS L or EPS P, respectively. Among the regulated genes, the presence of EPS L resulted in a reduction of expression of 4, whereas downregulation of only one gene was observed with the EPS P treatment. Also, 10 or 19 genes were upregulated after exposure of the tissues to EPS L or EPS P, respectively. Furthermore, in response to both treatments, the alteration of expression of five genes showed the same pattern: upregulation for IKBKE, PC4 and SFRS1 interacting protein 1 (PSIP1), RIPK1 and STAT1, and downregulation for colony-stimulating factor 3 (CSF3). The functional analysis for the molecular function category of GO, depicted in Table 3, showed that the enrichment in functional terms for genes affected by treatment with EPS P was greater than that with EPS L in number, degree of enrichment, and number of members in each category. In addition, the EPS L regulon included genes grouped into two categories related to cytokine activity, while for the EPS P regulon more grouping terms were found, highlighting the regulation of caspase activity and cytokine activity. These results confirmed the data obtained from the functional grouping (data not shown), and the overall results allowed us to postulate a possible mechanism of action of either EPS on the apoptosis pathway, represented in Figure 3. EPS L could affect extrinsic apoptosis pathways through the inhibition of Tumor Necrosis Factor Apoptosis-Inducing Ligand 10 (TNFSF10) and IL-1B, while it could also activate a survival pathway (RIPK1), with a possible healing effect on the gut barrier through the NFκB pathway, and reduce chemotaxis upon CSF3 downregulation. EPS P could activate the apoptosis pathway by altering the gene expression of Tumor Necrosis Factor Receptor Superfamily Member 10a (TNFRSF10A) and the cell death regulator BCL2L1, while the repression of CSF3 may point to a role in the modulation of intestinal immunity, with a possible decrease in the chemotaxis of neutrophils to the foci of inflammation (Carol et al., 2006). To support this hypothesis, some of the results depicted in Figure 4E should be discussed in the context of the apoptotic signaling pathways. BCL2A1 is a gene belonging to the BCL-2 family; the protein it encodes is able to reduce the release of pro-apoptotic cytochrome c from mitochondria and block caspase activation. The gene showed a 1.7- or 0.9-fold change upon treatment with EPS L or EPS P, respectively.
This gene is directly activated by NFκB in response to inflammatory mediators, such as granulocyte-macrophage colony-stimulating factor (GM-CSF), CD40, and cytokines like TNF-α or IL-1, which suggests lymphocyte activation/proliferation and cell survival. Yet both EPS downregulate TNF-α, IL-1A, and IL-1B, and have no effect on GM-CSF (data not shown), while only EPS P upregulated CD40. Taken together, these results suggest that EPS L had an anti-apoptotic effect, while EPS P had a possible pro-apoptotic effect. Indeed, both β-glucans showed an anti-inflammatory effect, because of the reduction of the relative expression of cytokines, which might be reflected in reduced lymphocyte activation for EPS L and in cell death activation for EPS P. The BCL2L1 gene product is a potent pro-apoptotic activator, and this gene was upregulated by both EPS treatments, with a fold change of 1.6 for EPS P and 1.2 for EPS L. We postulate that the O2-substituted-(1-3)-β-D-glucans might re-establish the normal condition of the intestinal immune system cells through an apoptosis activation mechanism (Figure 3B). In addition, the expression of the cytokine TNFSF10 was downregulated in the presence of both EPS (fold change: 0.5 and 0.4 for EPS P and EPS L treatments, respectively), while TNFRSF10A, which binds TNFSF10, was slightly upregulated under EPS P treatment (fold change: 1.3) and downregulated upon exposure to EPS L (fold change: 0.7). Moreover, the genes coding for the Tumor Necrosis Factor Receptor Superfamily Member 10b (TNFRSF10B) and IL-1R receptors were downregulated (fold change: 0.9 and 0.7, respectively) in EPS L-treated tissues, and upregulated (fold change: 1.4 and 1.7, respectively) in tissues exposed to EPS P (Figures 4B,C). These results indicate that EPS L and EPS P reduced the expression of the apoptotic ligand TNFSF10, but had different effects on the expression of the receptors TNFRSF10B and IL-1R. It is possible that EPS P activates a cell death mechanism to regulate lymphocyte proliferation, while EPS L acts at the cell mediation level by reducing cytokine production. Therefore, EPS P seems to modulate apoptosis, and several results previously collected support the bioinformatic analysis. Some examples are the lowering of IL-8 in an ex vivo model, a zebrafish model and an in vitro transwell model (unpublished data), and the decreased level of IL-1B and IL-12p35 in LPMCs isolated from CD biopsies and treated with either EPS, detected with Luminex (data not shown). IL-8 takes part in neutrophil recruitment, while IL-1B and IL-12p35 are Th1 mediators that increase when Th1 cells are deregulated, indicating a possible implication of the physiological modulation of apoptosis in these T cell subpopulations. It is well established that IBD pathogenesis drives the Th1/Th2 balance toward Th1 (Neurath, 2014). Th1 proliferation becomes uncontrolled because apoptosis regulation is imbalanced, increasing inflammation and chemotaxis in the gut. Hence, Th1 cells overproduce pro-inflammatory mediators like TNF-α, IL-1A, IL-1B, IFN-γ, etc. In turn, TNF-α, which has a pivotal role in mucosal inflammation, recruits and over-stimulates immune system cells like Th1. As IBD is a Th1-shifted pathology, and EPS P is able to ameliorate the rate of inflammation, it is plausible to postulate that it could play a role in Th1 cells through apoptosis modulation.
Bioinformatic analysis revealed that many mediators of this pathway were modulated by the treatment (Table 3), and we also observed a reduction of pro-inflammatory mediators (Figures 1C, 2). As the Th1 cells of CD patients cycle faster than those of healthy controls (Sturm et al., 2004), it is reasonable to believe that the treatment could play a role in re-establishing the Th1 population at a normal concentration, and therefore in lowering the inflammation rate. Regarding EPS L, gene clustering showed a dual behavior: the reduction of the expression of cell death ligands such as TNFSF10 and IL-1B, and the over-expression of RIPK1, a kinase involved in necroptosis and cell survival, which modulates dysregulated apoptosis in IEC (Figure 3A; Dannappel et al., 2014). Thus, the EPS treatments affect mechanisms that restore gut barrier homeostasis, owing to their wound healing properties, and reduce inflammation mediators. It is worth noting that the differential response of the tissues to EPS P and EPS L might be related to their structural differences. EPS P has a molecular mass of 9.6 × 10^6 Da, higher than that of EPS L (6.6 × 10^6 Da; Werning et al., 2014), which could increase the affinity of the natural O2-substituted-(1-3)-β-D-glucan (EPS P) for its receptor(s). It has been shown that the molecular mass and tertiary structure of β-glucans play a critical role in receptor binding (Legentil et al., 2015). We have previously shown that EPS P increases the adhesion of P. parvulus 2.6 to human enterocytes in vitro and increases in vivo the colonization capacity of the bacterial strain in the zebrafish gut. Therefore, it is tempting to assume that, after ingestion in a beverage, P. parvulus 2.6 could synthesize EPS P in the human intestine, and the action of glycosidases synthesized by the microbiota could reduce its molecular mass, generating an EPS L-like polymer with the dual role detected here.

CONCLUSION

In an ex vivo model of CD biopsies, treatment with EPS P and EPS L is concomitant with decreased levels of the pro-inflammatory cytokine IL-8 in the supernatant of tissue biopsies and in relative gene expression. The RNA array demonstrated that O2-substituted-(1-3)-β-D-glucans modulate the NFκB pathway, activating both the canonical and the non-canonical pathway. The differences found in the expression profiling indicate that the 9.6 × 10^6 Da EPS P may be able to restore the apoptosis mechanism in Th1 cells, whereas the 6.6 × 10^6 Da EPS L might restore the intestinal barrier through activation of RIPK1 in IEC. Further studies have to be performed to determine which receptor is activated by EPS P and EPS L, whether a co-receptor is necessary for receptor recognition, and whether the EPS have different effects on immune system cells.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors.
8,549.8
2021-02-05T00:00:00.000
[ "Biology" ]
Geometry of meandering and braided gravel-bed threads from the Bayanbulak Grassland, Tianshan, P. R. China

The Bayanbulak Grassland, Tianshan, P. R. China, is located in an intramontane sedimentary basin where meandering and braided gravel-bed rivers coexist under the same climatic and geological settings. We report and compare measurements of the discharge, width, depth, slope and grain size of individual threads from these braided and meandering rivers. Both types of threads share statistically indistinguishable regime relations. Their depths and slopes compare well with the threshold theory, but they are wider than predicted by this theory. These findings are reminiscent of previous observations from similar gravel-bed rivers. Using the scaling laws of the threshold theory, we detrend our data with respect to discharge to produce a homogeneous statistical ensemble of width, depth and slope measurements. The statistical distributions of these dimensionless quantities are similar for braided and meandering threads. This suggests that a braided river is a collection of intertwined threads, which individually resemble those of meandering rivers. Given the environmental conditions in Bayanbulak, we furthermore hypothesize that bedload transport causes the threads to be wider than predicted by the threshold theory.

Introduction

The morphology of alluvial rivers extends between two endmembers: in meandering rivers, the flow of water and sediments is confined to a single thread, whereas in braided rivers the flow is distributed into intertwined threads separated by bars (Fig. 1; Leopold and Wolman, 1957; Ferguson, 1987; Ashmore, 1991; Schumm, 2005; Kleinhans and van den Berg, 2011). Linear stability analyses, supported by laboratory experiments, explain how bedload transport generates bars and favors the formation of meandering or braided patterns (Parker, 1976; Fredsøe, 1978; Fujita and Muramoto, 1985; Devauchelle et al., 2007; Ashmore, 1991; Zolezzi et al., 2012). This mechanism proves more efficient in wide and shallow channels. Field measurements indicate that the bankfull aspect ratio (ratio of width to depth) of braided rivers is usually much larger than that of meandering ones, thus suggesting that the bar instability is indeed responsible for braiding (Parker, 1976; Fredsøe, 1978; Fujita and Muramoto, 1985; Devauchelle et al., 2007; Ashmore, 1991; Zolezzi et al., 2012). What exactly controls the aspect ratio of an alluvial river remains an open question, although sediment discharge and riparian vegetation seem significant in this respect: high sediment load and weak vegetation both favor wider and shallower channels, and often induce braiding (Smith and Smith, 1984; Gran and Paola, 2001; Tal and Paola, 2007, 2010; Braudrick et al., 2009; Dijk et al., 2012; Métivier and Barrier, 2012). In a fully developed braided channel, emerged bars separate the threads from each other (Fig.
1), and the very definition of bankfull conditions becomes ambiguous. Most authors treat the channel as a whole by defining lumped quantities, such as the total channel width or the average water depth (Métivier and Barrier, 2012). Conversely, few studies focus on the morphology of braided and meandering channels at the level of individual threads (Church and Gilbert, 1975; Mosley, 1983; Ashmore, 2013; Gaurav et al., 2015). In sand-bed rivers, the geometry of braided threads appears to be indistinguishable from that of meandering ones. This observation accords with recent laboratory experiments (Seizilles et al., 2013; Reitz et al., 2014). To our knowledge, this similarity has not been fully investigated in gravel-bed rivers. Here, we report on measurements in the Bayanbulak Grassland, Tianshan Mountains, P. R. China, where tens of meandering and braided gravel-bed rivers develop in the same environment. After comparison with other data sets from the literature, we compare the morphology of braided and meandering threads in our data set. Finally, we rescale our measurements based on the threshold theory to generate and analyze a single statistical ensemble from rivers highly dispersed in size (Glover and Florey, 1951; Henderson, 1963; Seizilles et al., 2013; Gaurav et al., 2015).

Field site

The Bayanbulak Grassland is an intramontane sedimentary basin standing at an elevation of about 2500 m in the Tianshan Mountains (Fig. 2). Two main wetlands, the Qong Yulduz basin (known as the Swan Lake in Chinese) and the Kizik Yulduz basin, are distributed around the main Kaidu River. They are immediately surrounded by sloping meadows (slope S ∼ 0.01), themselves enclosed by the Tianshan Mountains, which provide water to the Kaidu River (Zhang et al., 2002). The hydrology of the basins is controlled by snowmelt and summer orographic precipitation (Zhang et al., 2002; Yang and Cui, 2005). Snow accumulates from November to March and starts melting in April, causing the water discharge to rise in all rivers (Zhang et al., 2007). Orographic precipitation takes over in summer (between 260 and 290 mm), and the discharge continues to rise until August (Fig. 3).

The morphology of the Bayanbulak rivers varies between highly meandering (sinuosity above 1.3 to 1.5) and braided, and the same river often switches from one to the other along its course (Figs. 4 and 5). The rivers span about 4 orders of magnitude in discharge, and about 2 in width (Fig. 6). Although a variety of grass species grow in the basin, their influence on the channel morphology is probably moderate (Zhang et al., 2002; Andrews, 1984; Métivier and Barrier, 2012). Finally, most rivers flow over gravel, whose size distribution does not vary significantly over the basin (Fig. 6). All these features combine to make the Bayanbulak Grassland an ideal field site to investigate the morphology of gravel-bed rivers.

Method

We carried out two field campaigns in July 2012 and July 2013, during the high-flow season, to compare the geometry of braided and meandering threads (Fig. 3). We treated the threads of braided rivers individually, based on the wetted area at the time of measurement (Fig.
1). We measured the cross-section geometry, the discharge, the grain-size distribution and the slope of threads from as many rivers, spanning as broad a range in discharge, as possible. We chose the sections at random, according to their accessibility, our purpose being to collect a statistically significant data set.

To measure the cross section and the water discharge of large rivers, we used a 2 MHz acoustic Doppler current profiler (ADCP, Teledyne-RDI StreamPro). The instrument was mounted on a raft, and cross sections were performed from which we extracted both the geometry and the discharge of the threads. In shallower rivers, we used wading rods and rulers to measure the thread geometry. The mean surface velocity was measured using floats. The average velocity was obtained from the surface velocity using a correction factor of 0.6 (Sanders, 1998; Gaurav et al., 2015). The discharge was obtained as the product of the average velocity with the wetted area. Repeated ADCP profiles across the same section show that discharge, width and depth measurements are all reproducible within less than 15%. Manual measurements yield an uncertainty of about 2% for width, 12% for depth and 25% for velocity. The resulting uncertainty on discharge is less than 40% for both methods.

We used a Topcon theodolite with a laser rangefinder to measure the long profile of the threads and estimate their slope. The length of the topographic profiles varies from 100 m for small braided threads to more than 3 km for one meandering thread. Uncertainties on the location of the theodolite and atmospheric inhomogeneities curtail the precision of long-distance profiles. For our measurements, we expect the uncertainty on angles to reach about 90 arcseconds. The corresponding absolute uncertainty on the slope of a river is about 5 × 10^-4.

We measured the grain-size distribution from surface counts. Depending on the size of exposed surfaces, the number of counts ranged from 200 to 500 (Wolman, 1954; Bunte and Abt, 2001). We extracted the median grain size d50 and the size of the 90th percentile d90 from these distributions. Finally, the sinuosity of the threads was measured using the topographic profiles when available. When these were not available, we used Google images and calculated the sinuosity from 1 km long stretches centered on the measurement site. The Bayanbulak rivers we surveyed exhibit two very distinct planforms. Single-thread rivers are, on average, highly meandering, with a sinuosity of 1.5 ± 0.2 (Schumm, 2005). The braided rivers we surveyed have a total braiding index ranging from 3.3 to almost 11.2. As our objective is to compare these two endmembers, we ignored rivers with an intermediate wandering morphology (Church, 1983). Overall, our data set is composed of 92 measurements of width, depth, average velocity, discharge, slope and grain size, among which 53 correspond to braided-river threads (Table 1), and 39 to meandering-river threads (Table 2).
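A minimal sketch of the float-based discharge estimate described above; the velocity and area values are hypothetical.

```python
def thread_discharge(surface_velocity, wetted_area, correction=0.6):
    """Discharge from a float measurement: the depth-averaged velocity
    is taken as 0.6 times the surface velocity, and Q = U * A."""
    return correction * surface_velocity * wetted_area

# Hypothetical thread: 1.2 m/s surface velocity, 0.8 m^2 wetted area.
print(thread_discharge(1.2, 0.8))  # 0.576 m^3/s
```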
Regime equations

Figure 6 compares our measurements to four other sources. Three of them, the compendiums of Parker et al. (2007), Church and Rood (1983) and King et al. (2004), include measurements from single-thread rivers. The fourth one corresponds to measurements on individual threads of the braided Sunwapta River (Ashmore, 2013). These sources are hereafter referred to as the GBR data set. The Bayanbulak threads are widely dispersed in size (0.6 ≤ W ≤ 35 m) and discharge (0.002 ≤ Q ≤ 51 m^3 s^-1). On average, they are smaller than the GBR threads. The median grain size of the Bayanbulak threads (d50 ≈ 0.013 m) is also finer (the standard deviation of the d50 is about 0.008 m). Our data set therefore extends the GBR ones towards smaller threads with finer sediments.

We now consider the empirical regime equations of individual threads (Fig. 7). To facilitate the comparison between the GBR data set and our own, we use dimensionless quantities, namely W/d50, H/d50, S and Q* = Q/√(g d50^5), where g is the acceleration of gravity. Not surprisingly, the geometry of a thread is strongly correlated with its water discharge: its width and depth increase with discharge, while its slope decreases. At first sight, these trends are similar for meandering and braided threads. They also compare well to the GBR data set, although the Bayanbulak threads are slightly wider than the GBR ones on average. The measurement uncertainty, although significant, is less than the variability of our data, except for slopes smaller than about 5 × 10^-3. Despite considerable scatter, both our measurements and the GBR data sets gather around straight lines in the log-log plots of Fig. 7, suggesting power-law regime equations of the form log10(W/d50) = αw + βw log10 Q*, log10(H/d50) = αh + βh log10 Q* and log10 S = αs + βs log10 Q*, where αw, αh, αs, βw, βh and βs are dimensionless parameters. To evaluate them, we use reduced major axis (RMA) regression instead of least-squares regression, because the variability of our data is comparable along both axes (Sokal and Rohlf, 1995; Scherrer, 1984). The resulting fitted coefficients are reported in Table 3. The scatter in the slope measurement is too large to provide significant estimates of the slope coefficients αs and βs. At the 95% confidence level, the regime relationships of meandering and braided threads cannot be distinguished. Similarly, the depth of the Bayanbulak threads cannot be distinguished from that of the GBR threads. Conversely, the Bayanbulak threads are significantly wider than the GBR threads with respect to their median grain size. So far we have made the width, depth and discharge dimensionless using d50 as the characteristic grain size of the sediment. This choice, however, is arbitrary (Parker et al., 2007; Parker, 2008). Large grains are arguably more likely to control the geometry of the threads than smaller ones, and a larger quantile might be a better approximation of the characteristic grain size. For comparison, we rescaled our measurements using d90 instead of d50 and repeated the above analysis. Our conclusions are not altered significantly by this choice of characteristic grain size (Table 3).
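The RMA fit used for the regime equations can be sketched as follows, with the convention of Table 3 (log10(W/ds) = βw log10 Q* + αw); the RMA slope is the ratio of standard deviations, signed by the correlation coefficient.

```python
import numpy as np

def rma_fit(log_qstar, log_w):
    """Reduced major axis regression, log10(W/ds) = beta * log10(Q*) + alpha.

    RMA is preferred to ordinary least squares here because both
    variables carry comparable variability."""
    x, y = np.asarray(log_qstar), np.asarray(log_w)
    r = np.corrcoef(x, y)[0, 1]
    beta = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)  # slope beta_w
    alpha = y.mean() - beta * x.mean()                 # intercept alpha_w
    return alpha, beta
```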
Detrending

So far, we have found that the empirical regime equations of meandering and braided threads are statistically similar. To proceed further with this comparison, we would like to convert our measurements into a single statistical ensemble. We thus need to detrend our data set with respect to water discharge, based on analytical regime equations. Following Gaurav et al. (2015), we propose to use the threshold theory to do so. This formulation is similar to the one proposed by Parker et al. (2007), but for two points. First, Eqs. (2) to (4) represent a threshold channel, whereas Parker et al. (2007) extend the theory to active channels. Second, the formulation of Glover and Florey (1951) uses a constant friction coefficient in the momentum balance, whereas Parker et al. (2007) use a more elaborate friction law. Here we use the simplest formulation, as the variability of our data overshadows these differences (Métivier and Barrier, 2012). The dashed lines in Fig. 7 represent Eqs. (2) to (4). On average, the Bayanbulak threads are wider, shallower and steeper than the corresponding threshold thread. However, the theory predicts their dependence on discharge reasonably well, thus supporting its use to detrend our data. Accordingly, we define a set of rescaled quantities W*, H* and S* as in Eqs. (6) to (8), where the coefficients CW, CH and CS correspond to the prefactors in square brackets of Eqs. (2) to (4). We used the typical values reported above for the coefficients that do not vary in our data set.

Figure 8 shows the relationship between the rescaled thread geometry and its dimensionless discharge, using d50 to approximate the characteristic grain size ds. The new quantities W*, H* and S* appear far less dependent on the water discharge than their original counterparts, although a residual trend remains for all of them. Using ordinary least squares, we fit power laws to our rescaled data to evaluate this residual trend. We find W* ∝ Q*^(-0.19±0.03) and H* ∝ Q*^(-0.10±0.05) for the Bayanbulak threads, and for the GBR threads. The width of the Bayanbulak threads shows the strongest correlation, yet even this correlation is mild. Finally, slopes are more strongly correlated with discharge than width and depth, both for the GBR threads (S* ∝ Q*^(0.21±0.05)) and the Bayanbulak threads (S* ∝ Q*^(0.39±0.11)). However, most of the difference between the Bayanbulak and GBR threads is due to slopes well below the measurement precision. In all cases, the scatter is large, and all correlations fall within the standard deviation of the data set.

Thread geometry

We now analyze our rescaled measurements as a homogeneous statistical ensemble (Fig. 8). The means of the rescaled distributions of width, depth and slope all fall about 1 order of magnitude away from one, and their dispersion around this mean is also about 1 order of magnitude (Table 4). This observation supports the use of the threshold theory to scale the morphology of the Bayanbulak rivers. According to the rescaling Eqs. (6) to (8), the aspect ratio W/H of a river should be naturally detrended (Fig. 9). Indeed, the correlation coefficient of aspect ratio and discharge is less than 0.01 for all data sets. As expected, the aspect ratios of braided and meandering threads cannot be distinguished at the 95% level of confidence. Finally, the difference between the width of the Bayanbulak threads and that of the GBR threads also appears in the distribution of aspect ratios: the Bayanbulak aspect ratios are larger than the GBR ones.
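A sketch of the detrending step, with the caveat that the prefactors and discharge exponents must be read off Eqs. (2) to (4), which do not survive in this text; they are left as required inputs rather than guessed at here.

```python
import numpy as np

def detrend(W, H, S, Q, d50, C, e, g=9.81):
    """Rescale width, depth and slope by the threshold-theory regime
    relations (Eqs. 6 to 8). C = (C_W, C_H, C_S) and e = (e_W, e_H,
    e_S) are the prefactors and discharge exponents of Eqs. (2) to
    (4), to be supplied by the reader."""
    q_star = Q / np.sqrt(g * d50**5)           # dimensionless discharge Q*
    w_star = (W / d50) / (C[0] * q_star**e[0])
    h_star = (H / d50) / (C[1] * q_star**e[1])
    s_star = S / (C[2] * q_star**e[2])
    return w_star, h_star, s_star
```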
Conclusion

Our measurements on gravel-bed rivers in the Bayanbulak Grassland reveal that braided threads are geometrically similar to meandering ones. Their size can be virtually detrended with respect to water discharge using the threshold theory. As a result, their aspect ratio is naturally detrended. These findings accord with recent observations in sand-bed rivers of the Kosi Megafan (Gaurav et al., 2015). They also accord with recent results from rivers of the Ganges-Brahmaputra plain (Gaurav, 2016). The striking similarity between braided and meandering threads in gravel-bed and sand-bed rivers supports the view that fully developed braided rivers are essentially a collection of threads interacting with each other, rather than a single wide channel segmented by sediment bars. If confirmed, this would suggest that a braid results from the collective behavior of individual threads, the properties and dynamics of which would be close to those of meandering threads (Sinha and Friend, 1994; Ashmore, 2013; Reitz et al., 2014).

Our observations, like those of Gaurav et al. (2015) or the GBR data set, are highly dispersed around their average value, which points to the influence of hidden parameters on thread morphology. Among those, the intensity of sediment transport is likely to play a prominent role, at least in the case of the Bayanbulak rivers, where both vegetation and grain-size distributions are relatively uniform over the grassland. More specifically, field observations suggest that a heavier sediment load tends to increase the aspect ratio of a thread, other things being equal (Smith and Smith, 1984; Tal and Paola, 2010; Métivier and Barrier, 2012). This proposition remains speculative, though, and needs to be thoroughly tested against dedicated field measurements, which we believe should include both braided and meandering threads. Finally, if the sediment discharge is indeed the most prominent parameter after water discharge, its influence on the geometry of a channel should also manifest itself in laboratory experiments.

Figure 4. (a) Meandering and (b) braided rivers in the Bayanbulak Grassland. Left panels: field picture; right panels: satellite image (Google Earth). The corresponding locations also appear in Fig. 2.

Figure 5. Satellite and panoramic view of a metamorphosis from braided to meandering (Bayanbulak Grassland, 84.578° E, 42.721° N, Google Earth). The marker on the satellite image indicates the viewpoint of the panoramic image. Its location also appears in Fig. 2.

Figure 7. Dimensionless width, depth and slope of individual gravel-bed threads as a function of dimensionless water discharge. Dashed lines represent the threshold theory.

Figure 9. Aspect ratio of braided and meandering threads from Bayanbulak and the GBR data sets, as a function of rescaled water discharge (Q*).

Table 1. Data gathered for braided-river threads. Latitude (lat) and longitude (long) are in centesimal degrees; measurement stands for measurement type (Fl: float, ADCP: acoustic Doppler current profiler); Q: discharge, Sec: wetted area, V: average velocity, W: width, H: depth, d50: median grain size, d90: size of the 90th percentile, S: slope. All physical quantities are given in International System units.

Table 2. Data gathered for meandering-river threads. Latitude (lat) and longitude (long) are in centesimal degrees; measurement stands for measurement type (Fl: float, ADCP: acoustic Doppler current profiler); Q: discharge, Sec: wetted area, V: average velocity, W: width, H: depth, d50: median grain size, d90: size of the 90th percentile, S: slope. All physical quantities are given in International System units.

Table 3. Linear regressions on the log10 of width and depth as functions of discharge, for two characteristic grain sizes. The confidence level is 95%. RMA: reduced major axis regression; σβ stands for the confidence interval on the slope of the regression β. Width: log10(W/ds) = βw log10 Q* + αw.

Table 4.
Mean and standard deviations of the logarithms of the detrended widths, depths and slopes. The aspect ratio is naturally detrended and does not depend on grain size.
4,347
2016-03-22T00:00:00.000
[ "Environmental Science", "Geology" ]
A New Approach for HIV-1 Protease Cleavage Site Prediction Combined with Feature Selection

Acquired immunodeficiency syndrome (AIDS) is a fatal disease which highly threatens human health. Human immunodeficiency virus (HIV) is the pathogen of this disease. Investigating HIV-1 protease cleavage sites can help researchers find or develop protease inhibitors which restrain the replication of HIV-1, thus resisting AIDS. Feature selection is a new approach for solving the HIV-1 protease cleavage site prediction task, and it is a key point in our research. Compared with previous work, our approach has several advantages. First, a filter method is used to eliminate redundant features. Second, besides traditional orthogonal encoding (OE), two kinds of newly proposed features, extracted by conducting principal component analysis (PCA) and non-linear Fisher transformation (NLF) on the AAindex database, are used. The two new feature types are shown to perform better than OE. Third, the data set used here is largely expanded, to 1922 samples. To improve prediction performance, we also conduct parameter optimization for the SVM, so that the classifier obtains better prediction capability, and we fuse the three kinds of features to ensure a comprehensive feature representation. To effectively evaluate the prediction performance of our method, five parameters, more than in previous work, are used to conduct a complete comparison. The experimental results show that our method achieves better performance than the state-of-the-art method. This means that feature selection, combined with feature fusion and classifier parameter optimization, can effectively improve HIV-1 cleavage site prediction. Moreover, our work can provide useful help for the development of HIV-1 protease inhibitors in the future.

INTRODUCTION

Acquired immune deficiency syndrome (AIDS) is a highly fatal disease caused by infection with HIV-1. HIV-1 protease is a key enzyme in the virus replication process; it cleaves specific kinds of small proteins into smaller peptides, which generate the proteins indispensable for the replication process [1]. HIV-1 protease inhibitors can combine firmly with the protease but cannot be cleaved, so the protease cannot combine with its substrates and its function is inhibited. Nevertheless, it is not practical to find inhibitors by conducting biological experiments in the laboratory, because there are too many kinds of peptides to test one by one. Take octapeptides for example: there are 20 kinds of amino acid residues in nature, so there are 20^8 kinds of octapeptides altogether. It is impossible to test so many octapeptides by biological experiment. Machine learning, however, can be used to solve this problem [2].
For a machine learning task, feature extraction, dimensionality reduction, classifier design and performance evaluation are of great importance; they are discussed as follows. The octapeptide, which contains eight amino acid residues, is the research object in this work. In previous investigations, researchers proposed different feature extraction methods for octapeptide sequences, which can be divided into two main categories: feature extraction based on the peptide sequence and feature extraction based on physicochemical properties [3]. Orthogonal encoding (OE) is a classical sequence-based feature extraction method. Features based on physicochemical properties can be extracted from the Amino Acid Index Database (AAindex database), which is a collection of amino acid indices from published papers [4]. The characteristics inherently contained in amino acids can provide useful information for the prediction task [5]. Many published bioinformatics investigations use data from this database [6-8]. Loris Nanni and his colleague proposed two kinds of new physicochemical features based on this database, using principal component analysis (PCA) and non-linear Fisher transformation (NLF) [9]. The two kinds of new features were compared with OE and turned out to perform better. For some pattern recognition tasks, if a stand-alone method is not good enough, ensembles of features can be used to improve classification performance [10]. Thus the three kinds of features are fused in our research to guarantee a comprehensive representation. Feature selection is also mentioned in their work as a way to improve classification performance, and it is a key point in this paper.

Feature selection is an effective dimensionality reduction method, quite different from feature transformation. It does not change the original features, but keeps the original feature structure and helps in understanding the physical meaning of the data [11]. It also removes redundant features and raises classifier efficiency, thus improving prediction performance [12]. Locality preserving projection (LPP) is an effective feature transformation method, which retains meaningful information and eliminates redundant information [13]. However, the retained information is stored in the transformed features and is difficult to interpret. We expect to find the relationship between the retained information and the transformed features. Thus a feature selection approach called BPFS, which approximates LPP, is used to find the optimal feature subset [14]. The subset includes features from the original feature space and contains the meaningful information. BPFS has one severe drawback: the optimal number of features in the subset is not clearly defined, and different data sets may each have their own optimal subset size. In this paper, we conduct complete tests on all subsets with different feature numbers, and calculate multiple evaluation parameters to compare their prediction performance, on which basis we determine the optimal feature number for each kind of original feature.
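Before turning to evaluation, a minimal sketch of the orthogonal encoding mentioned above: each residue becomes a 20-bit indicator, giving 160 features per octapeptide. The residue ordering is an arbitrary choice here.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 natural residues

def orthogonal_encode(octapeptide):
    """One-hot (orthogonal) encoding: each of the 8 residues becomes
    a 20-bit indicator vector, giving 160 features per octapeptide."""
    assert len(octapeptide) == 8
    vec = np.zeros(8 * 20)
    for i, aa in enumerate(octapeptide.upper()):
        vec[i * 20 + AMINO_ACIDS.index(aa)] = 1.0
    return vec

print(orthogonal_encode("SQNYPIVQ").sum())  # 8.0: one bit per residue
```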
Performance evaluation is very important for a machine learning task, and different evaluation parameters can be used. Loris Nanni and his colleague use euc (1−auc) to evaluate their method, which is equivalent to auc [9, 15]. Auc measures the overall performance of a classifier, based on setting different classification thresholds and calculating the corresponding sensitivities and specificities. However, for our HIV-1 protease cleavage site prediction task, the best threshold needs to be determined in order to provide the best prediction capability. Matthews correlation coefficient (mcc) can evaluate the prediction performance of our work at the best classification threshold [16]: it takes sensitivity and specificity into consideration at the same time. We also calculate accuracy, sensitivity, specificity and auc to better evaluate our work; all of them have their own characteristics and advantages, but mcc is the most important evaluation parameter.

The rest of this paper is organized as follows. Section 2 introduces the data set and the feature selection method. Section 3 shows the results of the experiments and presents a detailed analysis. Finally, Section 4 provides the conclusion.

Data Set

There are 20^8 kinds of octapeptides, which is a very large number. To effectively investigate inhibitor prediction, the data set should contain as many samples as possible to ensure its completeness: the bigger the data set, the more helpful the prediction result. In previous papers, some classic data sets have been collected and analyzed. The most famous one is the 362 data set, collected by Cai and Chou [17]. Another, relatively bigger, one is the 746 data set, collected by You, Garwicz and Rognvaldsson [18]. To enlarge the data set, 392 new octapeptides were added to the 362 data set by Hyeoncheol Kim, Tae-Sun Yoon and their colleagues, generating a 754-sample data set [19]. The largest data set mentioned in the published investigations is the 1625 data set, collected by Kontijevskis and his colleagues [20]. To get a larger data set, we fuse all the data sets above, obtaining 3618 samples. After removing contradictory and redundant samples, 1922 octapeptides remain, including 596 positive samples and 1326 negative samples. This data set is called the 1922 data set.

Feature Selection

A filter method named BPFS is used here to eliminate redundant features. BPFS is a newly proposed feature selection method, which transforms the original high-dimensional features into a lower-dimensional space by a binary projection matrix (all of whose elements are 0 or 1), thus accomplishing feature selection. Correntropy is used as the evaluation function. The approach of BPFS is to maximize the correntropy between the subset and the labels of the samples. Assume there are two data sets X = {x1, ..., xN} and Y = {y1, ..., yN}, each containing N samples. Then the correntropy of X and Y can be calculated according to Eq. 1, which in its standard form reads V(X, Y) = (1/N) Σi κσ(xi − yi), where κσ is a Gaussian kernel of width σ.
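A sketch of the correntropy estimator, assuming the standard Gaussian-kernel form given above for Eq. 1; the bandwidth σ is a free parameter.

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Sample correntropy of two equal-length data sets, assuming the
    standard Gaussian-kernel form of Eq. 1; sigma is the kernel width."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    k = np.exp(-(x - y) ** 2 / (2.0 * sigma ** 2))
    k /= np.sqrt(2.0 * np.pi) * sigma
    return k.mean()
```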
Assume the data set contains n samples, the original feature dimensionality is d, and the feature dimensionality after conducting LPP is p. The feature selection model is as follows: a data set X ∈ R^{d×n} contains n samples, each represented by a d-element vector x_i; learn a mapping matrix W that maximizes the objective function J(W), where W is a 0-1 matrix. Assume the n samples in the data set belong to N_c different classes. Let Y be the data set after feature selection; then Y = WX, and J(W) can be represented by the correntropy between Y and C, as shown in Eq. 2. A series of mathematical operations shows that the task of finding the best projection matrix can be converted into a binary programming problem, which we solve with the Hungarian algorithm. A drawback of BPFS is that the inherent dimensionality of the data is not determined, so the optimal number of features in the subset is not fixed. In the following part, we determine the best subset size for each kind of features.

Optimization of the Subset Feature Number

BPFS is an effective feature selection method, but the subset size must be set before using it. Thus, before conducting BPFS on the three kinds of features, the optimal p value for each of them should be determined. Here p is determined by exhaustively testing all subsets with different p values. Take OE for example: each amino acid residue is represented by a 20-bit vector, so an octapeptide sequence is represented by a 160-feature vector, meaning the dimensionality of the original OE data is 160. At the beginning, p is set to 1 and BPFS is conducted, producing a subset containing one feature. We carry out 10-fold cross validation on this subset, compute four evaluation parameters (accuracy, sensitivity, specificity and mcc) and save them. Then p is set to 2 and the same work is done. Each time p is incremented by 1 and the procedure repeated, until p reaches 160. Once all the work is done, the evaluation parameters for each value of p have been saved, and the optimal p is determined from them.

The principle we follow is to make sure each parameter reaches a relatively high value and that, from that point onward, all subsequent values remain relatively high. We comprehensively consider the values of all the parameters across all the different subsets and finally determine the optimal p value. For example, the original OE feature number for an octapeptide is 160. Figure 1 shows all the parameter values of the different subsets; the abscissa of each subgraph denotes the subset size, and the ordinate denotes the value of each evaluation parameter for the different subsets. When the subset includes 120 features, the four parameters reach relatively high values and the subsequent values stay high as well, so p is set to 120 for OE. For PCA-based features, each amino acid residue is represented by a 19-element feature vector, so an octapeptide sequence can be represented by a 152-feature vector; for NLF-based features, each residue is represented by an 18-element vector, giving a 144-feature vector per octapeptide. Repeating the same procedure for the PCA- and NLF-based features gives optimal p values of 124 and 106, respectively. In the following part, the prediction capability of the three optimal subsets is examined.
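The exhaustive sweep over subset sizes can be sketched as follows. Here `bpfs_select` is a hypothetical placeholder for the BPFS selector (not a public library call, and assumed to return the chosen column indices), and the SVM settings are illustrative.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, recall_score, matthews_corrcoef

def sweep_subset_sizes(X, y, bpfs_select, d):
    """Sweep the BPFS subset size p from 1 to d, scoring each subset with
    10-fold cross validation and recording the four evaluation parameters."""
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = []
    for p in range(1, d + 1):
        cols = bpfs_select(X, y, p)                      # hypothetical BPFS call
        pred = cross_val_predict(SVC(kernel="rbf"), X[:, cols], y, cv=cv)
        scores.append((p,
                       accuracy_score(y, pred),
                       recall_score(y, pred),            # sensitivity
                       recall_score(y, pred, pos_label=0),  # specificity
                       matthews_corrcoef(y, pred)))
    return scores  # inspect these curves to pick the optimal p
```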
EXPERIMENTS AND DISCUSSIONS

To analyze and compare the experimental results comprehensively, multiple evaluation parameters are used in this paper: accuracy, sensitivity, specificity, mcc and auc. Unlike Loris Nanni's work, in which only euc is used, our work can assess the experimental results more fully and provide guidance for HIV-1 protease inhibitor design.

To obtain excellent prediction capability, parameter optimization is conducted for the SVM in this paper. The radial basis function (RBF) is chosen as the kernel function. Accuracy, mcc and auc are used separately to determine the optimal C and g values by 10-fold cross validation; the three parameters are unbiased and can therefore evaluate classification performance effectively. The range of C is set between 2^0 and 2^5, and the range of g between 2^−5 and 2^0; each time the base-2 exponent increases by 0.5 until it reaches the ceiling value. The results of parameter optimization are shown in Table 1, where the optimal C and g are determined according to accuracy, mcc and auc, respectively.

First we use accuracy to determine the optimal C and g, then test the prediction performance by 10-fold cross validation and calculate the five evaluation parameters. Table 2 shows the detailed results for each kind of features and their fusion combinations. Comparing the five evaluation parameters of the original OE, PCA and NLF based features, we find that the PCA and NLF based features achieve better prediction performance than OE, with the PCA-based features performing slightly better than the NLF-based ones. The ensemble of the three original features significantly improves prediction capability and performs better than any single original feature set, which means the fusion of the three kinds of original features makes effective use of the different information they contain. Examining the results of the three subsets, we find their performance is quite close to that of their corresponding original features, meaning feature selection successfully eliminates redundant features while preserving the informative ones, thereby keeping good prediction capability. The ensemble of the three subsets obtains the best result in this table: redundant features are eliminated, useful features are preserved, and the different kinds of information are used effectively. The results prove that fusing the subsets obtained by feature selection can significantly improve prediction performance.

Mcc is also used to optimize the SVM parameters. The prediction results of 10-fold cross validation are shown in Table 3. From this table we find that the prediction capability of the original OE, PCA and NLF based features differs: the PCA based features obtain the best results, the NLF based features slightly inferior results, and OE results that are not as good. This is consistent with the conclusion of the previous part: PCA and NLF based features have better prediction capability than OE. The ensemble of the three kinds of original features again significantly improves prediction performance. The results of the three subsets show prediction capability very close to that of their original features. The ensemble of the three subsets also achieves very good results, equivalent to the ensemble of the three kinds of original features, meaning the fusion of subsets keeps the prediction capability as good as that of the original features even though the dimensionality of the feature space is reduced.

Finally, auc is used to choose the optimal parameters for the SVM, and the 10-fold cross validation results are shown in Table 4.
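The grid search described above can be reproduced, under assumptions about defaults, with scikit-learn as sketched below. The scorer names mirror the three criteria (accuracy, mcc, auc) used for Tables 2-4.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

# Grid matching the paper: C in [2^0, 2^5], g in [2^-5, 2^0], exponent step 0.5.
param_grid = {"C": 2.0 ** np.arange(0, 5.5, 0.5),
              "gamma": 2.0 ** np.arange(-5, 0.5, 0.5)}

def tune_svm(X, y, scoring="accuracy"):
    """Pick (C, g) for an RBF-kernel SVM by 10-fold cross validation; scoring
    can be 'accuracy', 'matthews_corrcoef' or 'roc_auc' to mirror Tables 2-4."""
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, scoring=scoring, cv=cv)
    search.fit(X, y)
    return search.best_params_, search.best_score_
```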
From Table 4, we find that the original OE and NLF based features have equivalent prediction capability, and the PCA based features are better than both. The results of the three subsets are again close to those of their original features. This time, the ensemble of the three kinds of original features yields slightly inferior results to the original PCA based features, possibly because the SVM parameters are not sufficiently well tuned. Nevertheless, the ensemble of the three subsets still attains the best results, which means that fusing the three kinds of features after feature selection is useful and effective for HIV-1 protease cleavage site prediction.

Comparing all the results in the three tables, the best results come from the fusion of the three subsets using the SVM parameters optimized for classification accuracy: its mcc and auc values are the largest among all the experimental results, and the other three evaluation parameters also reach very high values. In Loris Nanni's work, only one evaluation parameter is used, euc, which equals 1 − auc. Our work provides five parameters to evaluate prediction performance, because a single parameter is not enough to measure the results effectively. Although the best euc in Loris Nanni's work is 0.007 and the best euc in our work is 0.008 (1 − 0.992), our work obtains a very high mcc value. Euc measures the overall performance of a classifier across different classification thresholds, but the central point of the HIV-1 protease cleavage site prediction task is to train a good classifier with optimal parameters and build a good prediction model. Finding the single best threshold ensures the classifier has its best prediction capability, and mcc can properly evaluate the prediction performance at the optimal parameters and classification threshold. The best results of our work are very encouraging: the best mcc is 0.914, which is quite a high value. It is reasonable to believe that our results improve on the state-of-the-art results, including Loris Nanni's. Our work can provide useful help for researchers and clinicians seeking to discover or design HIV-1 protease inhibitors in the future.
CONCLUSION

Feature selection is a new approach for HIV-1 protease cleavage site prediction. Unlike traditional methods, our work eliminates redundant features, simplifies the feature structure and improves prediction performance. The physicochemical properties of amino acid residues provide a lot of useful information, and we try to make good use of them for the prediction task. Thus, two newly proposed kinds of features extracted from the AAindex database by conducting PCA and NLF are used in this paper. Traditional OE features are also used, and the experimental results show that the two new kinds of features perform better than OE. To make effective use of both the physicochemical and sequence information contained in an octapeptide, we fuse the three kinds of features to represent it. Parameter optimization of the SVM is also conducted to improve the prediction capability of the classifier. To make a complete comparison between our method and previous work, five evaluation parameters are calculated for each kind of experiment. The results show that our method achieves better prediction performance than the state-of-the-art work. In the future, we expect to find new feature extraction methods that generate more informative representations of amino acid residues, and more effective feature selection methods for picking out the useful and informative features. Moreover, a more successful ensemble of features or classifiers could be applied to the prediction task. We hope that future investigation of HIV-1 protease cleavage sites will provide further help for HIV-1 protease inhibitor development.

Figure 1. The test results of all possible subsets for OE features.

Table 1. The optimal C and g determined according to different evaluation parameters. Here OE denotes the original OE features and OE_FS the subset of OE features after feature selection; the PCA and NLF based features are denoted in the same way. The ensemble of the three kinds of original features is denoted All_fusion, and the ensemble of the three subsets FS_Fusion. The two values in each column are the C and g values for the SVM, respectively.

Table 2. Prediction performance with accuracy-based optimization parameters.

Table 3. Prediction performance with mcc-based optimization parameters.

Table 4. Prediction performance with auc-based optimization parameters.
User Clustering and Resource Allocation in Hybrid NOMA-OMA Systems Under Nakagami-m Fading

In this paper, we tackle the problem of optimizing user clustering, power, and resource (time slot or bandwidth) allocation in the downlink of a hybrid non-orthogonal multiple access (NOMA)-orthogonal multiple access (OMA) system. In such a system, users are organized into several clusters under one of the following scenarios: (1) fixed cluster size, (2) fixed number of clusters, and (3) variable number of clusters and variable cluster size. A power-domain NOMA (PD-NOMA) scheme is used in each cluster, while OMA is employed for allocating resources to different clusters. The goal is to maximize the minimum success probability (which is equivalent to minimizing the maximum outage probability) among all users to guarantee fairness. We prove that at the optimal solution, all users have the same success probability, which is called the common success probability (CSP). Then, we propose an efficient algorithm for finding the optimal CSP and cluster resource allocation factors simultaneously. The optimal power allocation factors and the optimal decoding order of users in each cluster are then derived in closed-form expressions based on the obtained optimal CSP. Simulation results show considerable performance gains by the proposed scheme, compared to existing schemes, in terms of fairness, the minimum success probability of users, and the sum throughput.

I. INTRODUCTION

Substantial growth in the number of users and emerging high data-rate applications with strict quality-of-service (QoS) requirements pose new challenges for the design and planning of future generations of cellular networks. It is widely acknowledged that more efficient multiple access schemes with improved performance are imperative to cope with such demands. Over the last few years, non-orthogonal multiple access (NOMA) has received a lot of attention and is regarded as a promising multiple access scheme due to its ability to serve multiple users in the same time/frequency resource block. In particular, power-domain NOMA (PD-NOMA) is considered in various standardization activities since it can improve spectral efficiency, fairness and the throughput of cell-edge users [1], [2]. In PD-NOMA, the base station (BS) combines the users' signals by superposition coding at the transmission side, whereas each user detects its own signal by successive interference cancellation (SIC). However, as the complexity and latency of SIC increase with the number of users [3], it is impractical when there is a large number of users in the network. To overcome this issue, it is possible to organize the users into several clusters and deploy orthogonal multiple access (OMA) techniques alongside NOMA. In fact, the hybrid NOMA-OMA approach has been investigated in several works considering different design goals and under different assumptions [4]-[16]. In general, those existing works can be categorized based on aspects such as performance metrics, optimization techniques, clustering methods and fading channel models. For example, some authors focus on maximizing the sum rate [4], [5], maximizing the energy efficiency (EE) [6], or minimizing the total power consumption [7].
Other authors consider establishing fairness among the users in terms of diversity order [8], data rate [9], outage [10], and throughput [11]. In addition, user clustering algorithms in hybrid NOMA-OMA systems are considered in several works. For example, heuristic user clustering methods based on the channel gains are proposed in [12], [17], while machine learning methods are studied in [13]. However, none of these methods is based on closed-form expressions that quantify the resource demand of a cluster and can hence facilitate the clustering algorithm. In contrast, the user clustering algorithms developed in this paper are based on closed-form expressions of the resource demand. Another observation regarding the existing user clustering methods for hybrid NOMA-OMA systems is that many of them use static algorithms, which require the total number of users to be fixed before running the algorithm [14]. Other algorithms consider dynamic scenarios in which some users can enter or exit the network while the clustering algorithm runs [13]. The cluster size N is another important parameter in the clustering procedure; it is fixed at N = 2 in some papers [14], [15], and at N ≥ 2 in [16]. Moreover, a recent work considers the more general case of a variable number of users in each cluster [5], whereas the work in [13] allows users to dynamically leave their current cluster and join a better cluster based on certain criteria. A differentiating feature among research works on the hybrid NOMA-OMA scheme is the assumption on channel state information (CSI). Most works, such as [4]-[7], assume perfect instantaneous CSI, which is either impractical or imposes heavy signaling overhead. In contrast, assuming and requiring only statistical CSI (as also considered in this paper) mitigates the overhead issue, since the channels can be monitored over longer periods of time and less feedback needs to be sent to the transmitter. Furthermore, most works on hybrid NOMA-OMA systems adopt the Rayleigh fading channel model [10], whereas a more general fading model, such as Nakagami-m fading, has not been considered in the literature. For clarity, Table 1 summarizes the key points of the above discussion and highlights the differences among existing works on user clustering in hybrid NOMA-OMA systems with respect to research objectives and assumptions.

Considering the above background, in this paper we investigate the problem of user clustering, resource allocation and decoding order selection in a hybrid NOMA-OMA system. In order to guarantee fairness among all the users, we maximize the minimum success probability among them, which is equivalent to minimizing the maximum outage probability. The channel model is Nakagami-m fading and only statistical CSI is available at the transmitter. This channel model adds considerable complexity to the resource allocation problem under consideration and affects all aspects of the solution, including the optimal decoding order of the users and the resource allocation factors.
For user clustering, we consider three different scenarios: (a) a fixed number of users in each cluster, (b) a fixed number of clusters, and (c) a variable number of clusters and a variable number of users in each cluster. In order to solve the problem of maximizing the minimum success probability among all the users in a hybrid NOMA-OMA system, we first prove that at the optimal solution all users have the same success probability, called the common success probability (CSP). Then, we propose an efficient algorithm to find the optimal CSP and optimal resource allocation factors simultaneously. Next, we derive the optimal inter-cluster power allocation factor for each cluster in closed form, which is the sum of the optimal power allocation factors of the individual users in that cluster. We also derive closed-form expressions for the optimal decoding order and intra-cluster power allocation factors of individual users based on the optimal CSP and the resource allocation factor of each cluster. In summary, the contributions of this paper are as follows:

• Proposing a novel scheme for user clustering, resource allocation and decoding order selection in a hybrid NOMA-OMA system to guarantee fairness among all users in terms of success probability (or, equivalently, its complement, the outage probability).
• Proposing an efficient algorithm for finding both the optimal CSP of the users and the optimal resource (time slot or bandwidth) allocation factors of the clusters in the system.
• Deriving closed-form expressions for the optimal decoding order, the individual user power allocation factors and the cluster power allocation factors.
• Proposing three efficient user clustering algorithms under constraints such as fixed cluster sizes or a fixed number of clusters.
• Showing that establishing fairness among all users in a hybrid NOMA-OMA system in terms of the success probability of users can also improve the sum throughput of the system.

The rest of the paper is organized as follows. Section II describes the system model. Section III studies the optimal intra-cluster power allocation and decoding order selection for one cluster. Section IV examines the problem of optimal inter-cluster power and resource allocation. Section V proposes user clustering algorithms. Section VI describes the complete proposed scheme. Section VII evaluates the performance of the proposed scheme. Section VIII concludes the paper.

II. SYSTEM MODEL

We consider a hybrid NOMA-OMA downlink system with a single-antenna base station (BS) sending mutually independent information to K single-antenna mobile users. With the hybrid NOMA-OMA, the BS arranges users into L clusters. An orthogonal multiple access scheme such as time division multiple access (TDMA) or orthogonal frequency-division multiple access (OFDMA) is used across different clusters, whereas power-domain NOMA (PD-NOMA) is used within a cluster. The choice of the inter-cluster orthogonal multiple access scheme is irrelevant to the analysis in this paper: the resource allocation factor obtained for each cluster can be interpreted as a proportion of allocated time in TDMA or as a proportion of allocated bandwidth in OFDMA. Therefore, in the rest of the paper we refer to the time/bandwidth allocation factor simply as the resource allocation factor. Denote the index set of users by K = {1, 2, . . . , K}, the kth user by U_k, the index set of clusters by C = {1, 2, . . . , L}, and the global index set of users in the ℓth cluster by C_ℓ = {ν_{ℓ,1}, ν_{ℓ,2}, . . . , ν_{ℓ,|C_ℓ|}},
with |C_ℓ| denoting the number of users in C_ℓ. In fact, C_ℓ is a subset of K, C_ℓ ⊂ K, and contains the global (inter-cluster) indices of the users. Clustering should be done such that each user is a member of exactly one cluster, so the clusters partition K. We also define I_ℓ = {1, 2, . . . , |C_ℓ|}, the intra-cluster index set of the users. Let the total power of the transmitter be P_T and the total channel resource be W_T, which can be time or bandwidth. The power and resource allocation factors of cluster ℓ are denoted by δ_ℓ and ω_ℓ, respectively (0 < δ_ℓ < 1, 0 < ω_ℓ < 1). Thus, the power and resource allocated to cluster ℓ are δ_ℓ P_T and ω_ℓ W_T, respectively. For cluster C_ℓ, the BS combines the |C_ℓ| independent signals of its users by superposition coding and sends the combined signal to them; each user then performs SIC to obtain its own signal. The transmitted signal for cluster C_ℓ, denoted by x_ℓ, is given as

x_ℓ = Σ_{i=1}^{|C_ℓ|} √(α_{ν_{ℓ,i}} δ_ℓ P_T) x_{ν_{ℓ,i}},     (2)

where x_{ν_{ℓ,i}} is the transmitted signal of the ith user in the ℓth cluster, satisfying E(|x_{ν_{ℓ,i}}|²) = 1, and 0 ≤ α_{ν_{ℓ,i}} ≤ 1 is the intra-cluster power allocation factor for U_{ν_{ℓ,i}}. Hence, α_{ν_{ℓ,i}} δ_ℓ specifies the proportion of the total power P_T that is allocated to user U_{ν_{ℓ,i}}. Denote the Nakagami-m fading channel coefficient between the BS and U_{ν_{ℓ,i}} by h_{ν_{ℓ,i}}, and the additive white Gaussian noise (AWGN) with zero mean and variance N_0 at U_{ν_{ℓ,i}} by z_{ν_{ℓ,i}}. Then the signal received by U_{ν_{ℓ,i}} is y_{ν_{ℓ,i}} = h_{ν_{ℓ,i}} x_ℓ + z_{ν_{ℓ,i}}. It follows that the normalized instantaneous SNR of the received signal at U_{ν_{ℓ,i}} in the ℓth cluster is

ψ_{ν_{ℓ,i}} = (δ_ℓ/ω_ℓ) γ_{ν_{ℓ,i}},     (4)

where γ_{ν_{ℓ,i}} is the normalized instantaneous SNR of U_{ν_{ℓ,i}} when all the available power P_T and resource W_T are allocated to cluster C_ℓ (i.e., δ_ℓ = 1, ω_ℓ = 1). Thus, under the assumption of Nakagami-m fading, ψ_{ν_{ℓ,i}} has a Gamma distribution [18] with shape factor m_{ν_{ℓ,i}} ≥ 1/2, which is obtained from the mean ψ̄_{ν_{ℓ,i}} and variance σ²_{ψ_{ν_{ℓ,i}}} of the instantaneous SNR ψ_{ν_{ℓ,i}} as

m_{ν_{ℓ,i}} = ψ̄²_{ν_{ℓ,i}} / σ²_{ψ_{ν_{ℓ,i}}},     (6)

and Γ(·) is the Gamma function, defined as Γ(a) = ∫₀^∞ t^{a−1} e^{−t} dt.
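For illustration, the Gamma SNR model can be simulated as in the following minimal sketch, where the shape factor is estimated from the reported mean and variance as in (6); the scale parameter var/mean follows from the moments of the Gamma distribution.

```python
import numpy as np

def gamma_snr_samples(mean_snr, var_snr, n=100_000, rng=None):
    """Draw instantaneous SNR samples under Nakagami-m fading, where the SNR
    is Gamma-distributed with shape m = mean^2 / var and scale var / mean."""
    rng = rng or np.random.default_rng(0)
    m = mean_snr ** 2 / var_snr          # shape factor, as in Eq. (6)
    theta = var_snr / mean_snr           # scale, so that mean = m * theta
    return rng.gamma(shape=m, scale=theta, size=n)

snr = gamma_snr_samples(mean_snr=10.0, var_snr=25.0)
print(snr.mean(), snr.var())             # approximately 10 and 25
```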
In this paper, to maximize the minimum success probability among all users in a hybrid NOMA-OMA system, we adopt a bottom-up problem-solving approach. We first investigate the intra-cluster power allocation and decoding order optimization for one cluster. Then, based on the obtained results, we solve the inter-cluster power and resource allocation problem. Finally, we propose clustering algorithms and combine all the results into a unified scheme. For implementation, the BS follows these steps in the reverse order: first it organizes the users into clusters; then it determines the inter-cluster power and resource allocation factors; finally it calculates the optimal decoding order and intra-cluster power allocation factor of each user. For each cluster, an optimization problem should be solved to maximize the minimum success probability of the cluster's users by optimizing the power allocation factor of each user and selecting the optimal decoding order within the cluster. In our previous work [19], we solved such a problem for a single NOMA cluster with K users. Specifically, we proved in [19] that at the optimal solution all users have an equal success probability, which we called the common success probability (CSP) of the users. The optimal decoding order and optimal power allocation factors of the users were then derived in closed form based on their CSP, and an efficient algorithm was proposed for finding the optimal CSP.

The results in [19] thus lay a foundation for the analysis and optimization of the hybrid NOMA-OMA system operating over Nakagami-m fading channels, wherein users are assigned to several clusters. As such, in the next section we briefly review the results of [19], and in Section VI we extend them to the more general case of hybrid NOMA-OMA. Given the large number of parameters and notations used throughout the paper, Table 2 summarizes the main system parameters to facilitate reading.

III. INTRA-CLUSTER POWER ALLOCATION AND DECODING ORDER SELECTION

Since this section focuses on power allocation and decoding order for the users of one cluster, without loss of generality we assume that all the power P_T and resource W_T are allocated to cluster C_ℓ (δ_ℓ = 1 and ω_ℓ = 1). Thus, according to (4), the instantaneous SNR of the ith user in cluster ℓ is simply ψ_{ν_{ℓ,i}} = γ_{ν_{ℓ,i}}. Our objective is to maximize the minimum success probability over all users in the cluster by optimizing the intra-cluster power allocation factors and the decoding order. With SIC decoding, each user decodes the ''prior'' user signals one by one and cancels their effects from the received signal until its own signal is obtained. In general, the decoding order is a permutation of the users' indices, denoted by π_ℓ = {π_{ℓ,1}, π_{ℓ,2}, . . . , π_{ℓ,|C_ℓ|}}; if π_{ℓ,i} = k, then x_k is the ith signal to be decoded in cluster ℓ. The SINR at U_{π_{ℓ,k}} relevant to decoding x_{π_{ℓ,i}} can be calculated as

γ^{π_{ℓ,k}}_{π_{ℓ,i}} = γ_{π_{ℓ,k}} α_{π_{ℓ,i}} / (γ_{π_{ℓ,k}} α^I_{π_{ℓ,i}} + 1),     (9)

where α^I_{π_{ℓ,i}} = Σ_{j=i+1}^{|C_ℓ|} α_{π_{ℓ,j}} is simply the sum of the intra-cluster power allocation factors of the users whose signals are decoded after x_{π_{ℓ,i}} (those signals are treated as noise). Therefore, based on Shannon's theorem, user U_{π_{ℓ,k}} cannot decode x_{π_{ℓ,i}} correctly if

γ^{π_{ℓ,k}}_{π_{ℓ,i}} < 2^{r_{π_{ℓ,i}}} − 1,     (10)

or if one of the prior signals was not decoded successfully before decoding x_{π_{ℓ,i}}. In (10), r_{π_{ℓ,i}} is the data rate of user U_{π_{ℓ,i}}, normalized by the total resource W_T, and γ_{π_{ℓ,i}} is the normalized SNR of the user (assuming that the total power P_T and resource W_T are allocated to the single cluster C_ℓ). Thus, the outage event for user U_{π_{ℓ,k}} in decoding signal x_{π_{ℓ,i}}, denoted O^{π_{ℓ,k}}_{π_{ℓ,i}}, is the event that either (10) holds or a prior signal was not decoded. In this notation, the superscript specifies the user performing the SIC, whereas the subscript specifies the signal being decoded. Obviously, the outage event for user U_{π_{ℓ,k}} with respect to decoding its own signal is O^{π_{ℓ,k}}_{π_{ℓ,k}}, the event that U_{π_{ℓ,k}} cannot decode x_{π_{ℓ,k}} correctly, and the success probability of user U_{π_{ℓ,k}} is the probability that none of these outage events occurs. In [19], we show that for each user U_{π_{ℓ,k}} a minimum SNR threshold for successful decoding can be found as

γ^{π_{ℓ,k}}_th = (2^{r_{π_{ℓ,k}}} − 1) / (α_{π_{ℓ,k}} − (2^{r_{π_{ℓ,k}}} − 1) α^I_{π_{ℓ,k}}),   k ∈ I_ℓ.     (13)

Using this threshold simplifies the outage expression to the event {γ_{π_{ℓ,k}} < γ^{π_{ℓ,k}}_th}, and consequently it is shown in [19] that the success probability of user U_{π_{ℓ,k}} can be calculated as

P_{π_{ℓ,k}} = Q(m_{π_{ℓ,k}}, m_{π_{ℓ,k}} γ^{π_{ℓ,k}}_th / γ̄_{π_{ℓ,k}}),     (15)

where Q(·, ·) is the regularized upper incomplete gamma function, defined as [20] Q(a, x) = (1/Γ(a)) ∫_x^∞ t^{a−1} e^{−t} dt. We then show that, to maximize the minimum success probability among the users, all the users must have an equal success probability, called the common success probability (CSP) (Theorem 2 in [19]).
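A minimal numerical sketch of (13) and (15) is given below using SciPy's regularized upper incomplete gamma function. The mapping of the arguments of Q in (15) follows from the tail probability of the Gamma SNR model and is our reading of [19], not a verbatim transcription.

```python
from scipy.special import gammaincc  # regularized upper incomplete gamma Q(a, x)

def snr_threshold(r, alpha, alpha_tail):
    """Minimum SNR for successful decoding, as in Eq. (13):
    gamma_th = (2^r - 1) / (alpha - (2^r - 1) * alpha_tail),
    where alpha_tail sums the power factors of later-decoded signals."""
    zeta = 2.0 ** r - 1.0
    denom = alpha - zeta * alpha_tail
    return zeta / denom if denom > 0 else float("inf")  # infeasible if <= 0

def success_probability(m, mean_snr, gamma_th):
    """P(SNR > gamma_th) for a Gamma(m, mean_snr/m)-distributed SNR, which
    equals Q(m, m * gamma_th / mean_snr) -- our assumed reading of Eq. (15)."""
    return gammaincc(m, m * gamma_th / mean_snr)

# Example: a two-user cluster, decoding the first signal (alpha_tail = 0.2).
gth = snr_threshold(r=1.0, alpha=0.8, alpha_tail=0.2)
print(success_probability(m=2.0, mean_snr=10.0, gamma_th=gth))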
Subsequently, assuming that the optimal CSP of the users in cluster ℓ is p_ℓ, we show that the optimal decoding order is given by the ascending order of a parameter β_{π_{ℓ,k}}, defined in (17) through the inverse function Q^{−1}(·, ·) of Q(a, x) with respect to the second argument x (see Lemma 3 in [19]); Q^{−1}(·, ·) can be evaluated numerically. In other words, the optimal decoding order π_ℓ should be such that β_{π_{ℓ,1}} ≤ β_{π_{ℓ,2}} ≤ · · · ≤ β_{π_{ℓ,|C_ℓ|}}. The parameter β represents the quality of each user's channel: a user with a lower β should be given a higher power allocation factor and a higher priority in the decoding order. The optimal intra-cluster power allocation factors for the users in each cluster can then be calculated in closed form, as given by (19) (for more details, see Theorem 3 in [19]).

In [22], necessary conditions are derived for the power allocation factors of users in a NOMA system to prevent the signal constellations from overlapping under superposition coding. It is assumed that each of the |C_ℓ| users of the NOMA cluster employs a square quadrature amplitude modulation (QAM) constellation. The modulation order M_{π_{ℓ,i}} and bit rate r_{π_{ℓ,i}} of user U_{π_{ℓ,i}} are related as r_{π_{ℓ,i}} = R_ℓ log₂ M_{π_{ℓ,i}}, where R_ℓ is the symbol rate of the transmitter for the ℓth cluster. Thus, we can restate the conditions derived in [22] for the power allocation factors using the notation of this paper as inequality (21), where ζ_{π_{ℓ,i}} = 2^{r_{π_{ℓ,i}}} − 1 and, without loss of generality, we set R_ℓ = 1 (for more details, the reader is referred to Proposition 1 and Inequality (19a) in [22]). In the next theorem, we prove that our proposed power allocation scheme always satisfies those necessary conditions.

Theorem 1: The power allocation factors (19), for any number of users |C_ℓ| in the NOMA cluster and arbitrary modulation orders M_{π_{ℓ,i}} employed by the users, satisfy the conditions given by (21).

Proof: See Appendix A.

Furthermore, the sum of all the power allocation factors derived in (19) can be calculated in closed form as a function S(p_ℓ, π_ℓ), given by (22), which is independent of the individual intra-cluster power allocation factors. The sum of the intra-cluster power allocation factors S(p_ℓ, π_ℓ) should be exactly one: a value less than one means some of the allocated power remains unused, and a value greater than one means the cluster is using more power than has been allocated to it. Thus, in [19] we incorporated, and proved the necessity of, the constraint S(p_ℓ, π_ℓ) = 1 in order to find the optimal CSP efficiently by a binary search on the parameter p_ℓ. For completeness, the algorithm for finding the optimal CSP is included in Appendix B. In the next section, we extend that algorithm to simultaneously find both the optimal CSP and the optimal inter-cluster resource allocation factors when users are grouped into several clusters in a hybrid NOMA-OMA system. We also generalize the obtained intra-cluster power allocation factors (19) to the hybrid NOMA-OMA case in Section VI.
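Since the exact expression (17) is given in [19], the following sketch uses one plausible form of β consistent with the text: the p-quantile of the Gamma-distributed SNR divided by the user's SNR demand 2^r − 1. This form is an assumption for illustration, not the verbatim formula of [19].

```python
import numpy as np
from scipy.special import gammainccinv  # inverse of Q(a, x) in its second argument

def beta_metric(r, m, mean_snr, p):
    """Assumed form of the channel-quality parameter of Eq. (17): the level
    exceeded by the SNR with probability p, (mean_snr / m) * Q^{-1}(m, p),
    divided by the SNR demand 2^r - 1. A lower beta means a weaker channel
    relative to the rate demand, so that user is decoded earlier and gets
    more power."""
    return (mean_snr / m) * gammainccinv(m, p) / (2.0 ** r - 1.0)

def decoding_order(rates, shapes, mean_snrs, p=0.95):
    """Optimal decoding order: ascending beta (weakest users first)."""
    betas = [beta_metric(r, m, g, p) for r, m, g in zip(rates, shapes, mean_snrs)]
    return np.argsort(betas)

print(decoding_order(rates=[1.0, 1.0, 2.0], shapes=[2.0, 2.0, 2.0],
                     mean_snrs=[5.0, 20.0, 20.0]))
```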
IV. INTER-CLUSTER POWER AND RESOURCE ALLOCATION

As explained in the previous section, within a cluster the minimum success probability of all users can be maximized by the following steps: 1) find the optimal CSP p_ℓ by running Algorithm 7 (see Appendix B); 2) select the optimal decoding order of the users in the cluster according to (18); 3) calculate the optimal power allocation factors for the users by (19).

Since all users in a cluster have the same success probability p_ℓ, the problem of maximizing the minimum success probability of users across all clusters can be formulated as problem (24): maximize min_ℓ p_ℓ over the power allocation factors δ_ℓ and resource allocation factors ω_ℓ, subject to the feasibility constraints (24b)-(24d) on the allocation factors, where (24c) requires Σ_{ℓ∈C} δ_ℓ ≤ 1. Similar to the intra-cluster optimization problem, we can also prove that at the optimal solution of the inter-cluster optimization problem (24) the success probabilities of all users are equal. This result is summarized in the following lemma.

Lemma 1: At the optimal solution of problem (24), the success probabilities of all users across all the clusters are equal, p_1 = p_2 = · · · = p_L = p.

Proof: See Appendix C.

Furthermore, we have the following results regarding the constraints of problem (24).

Lemma 2: At the optimal solution of problem (24), constraints (24b) and (24d) are satisfied with strict inequality, and constraint (24c) is satisfied with equality.

Proof: This lemma can be proved by contradiction. Suppose that for one of the clusters either constraint (24b) or (24d) is satisfied with equality; then the success probability of that cluster would be zero, contradicting the objective of maximizing the minimum success probability of all users. On the other hand, if constraint (24c) were satisfied with strict inequality, then all the cluster power allocation factors δ_ℓ, ℓ ∈ C, could be multiplied by 1/Σ_{ℓ∈C} δ_ℓ. Because the success probability is a strictly increasing function of the power allocation factors, this scaling would increase the success probabilities of the users in all clusters, again a contradiction. Thus, the lemma is proved.

Recall that the results of the previous section were obtained when the total power P_T and resource W_T are allocated to a single cluster C_ℓ and the resulting data rates and SNRs of the users in the cluster are normalized accordingly. In this section, the power and resource allocated to cluster C_ℓ are δ_ℓ P_T and ω_ℓ W_T, respectively. Thus, instead of the parameters r and γ̄, we need to use r/ω_ℓ and δ_ℓ γ̄/ω_ℓ, respectively, in the function S(p_ℓ, π_ℓ) defined in (22). On the other hand, from Lemma 1 we know that at the optimal solution of problem (24) the success probabilities of all users across all the clusters are the same. Thus, assuming the CSP is p, we can rewrite (23) for each cluster as in (26), where π_{ℓ,i} is the index of the ith user in the optimal decoding order of cluster C_ℓ. From (26) we can derive the power allocation factor δ_ℓ of each cluster in closed form, as given in (27), based on its resource allocation factor, the CSP and the statistical CSI. In Lemma 2 we proved that at the optimal solution of problem (24) the sum of all inter-cluster power allocation factors δ_ℓ equals one. Thus, denoting the vector of all inter-cluster resource allocation factors by ω = [ω_1, ω_2, . . . , ω_L] and defining h(p, ω) in (28) as the sum of the closed-form cluster power allocation factors in (27), it follows that we must have h(p, ω) = 1. Therefore, we can reformulate problem (24) as problem (30), in which p is maximized over ω subject to h(p, ω) = 1 and the feasibility constraints on ω and p. Under the normal and expected condition that 0.5 ≤ p ≤ 1, we can prove that problem (30) is convex. As such, we propose an efficient algorithm for solving it by utilizing the Karush-Kuhn-Tucker (KKT) conditions [23].

Lemma 3: Problem (30), with its last constraint modified to 0.5 ≤ p ≤ 1, is convex.

Proof: Refer to Appendix D.

According to Lemma 3, after modifying the last constraint of problem (30) to 0.5 ≤ p ≤ 1, the problem is convex, and the KKT conditions of the resulting convex optimization problem comprise the stationarity, primal feasibility, dual feasibility and complementary slackness conditions given in (31). Based on (31h), (31i) and (31j), it is straightforward to verify that all the µ_ℓ multipliers should be equal to zero; otherwise, special cases arise that are neither practically feasible nor important.
For instance, any of the µ_ℓ, ℓ ∈ {1, 2, . . . , L}, being nonzero would mean that the resource allocation factor, and consequently the success probability, of that cluster is zero. Thus, the KKT conditions (31) can be rewritten in the reduced form (32). To simplify the relations, we define λ = −λ_2/λ_1 in (32a). Algorithm 1 is then proposed to find the inter-cluster resource allocation factors ω_ℓ, ℓ ∈ C, and the CSP p of all users simultaneously. In this algorithm, the parameter ε specifies the precision of the output parameters and can be chosen arbitrarily as an input; as a default value we set ε = 10^−3 in the simulations. In lines 4 and 7 of this algorithm it is necessary to find λ and ω such that (32a) and (32d) are satisfied; these parameters can be found using Algorithms 2 and 3, respectively, as discussed further below. The core of Algorithm 1 is a binary search on p between p_L and p_H:

  7: Find λ, ω according to Algorithms 2 and 3, respectively.
  8: if h(p, ω) < 1 then
  9:   p_L = p
 10: else if h(p, ω) > 1 then
 11:   p_H = p
 12: else
 13:   return p, ω
 14: end if
 15: end while
 16: return p, ω.

In Algorithm 1, the value of p is not restricted to the interval (0.5, 1) as stated by constraint (32f). For the case p ∈ (0, 0.5), the algorithm still converges to a solution that guarantees fairness among the users; however, we cannot prove the optimality of such a solution via the KKT conditions. As pointed out before, cases where the success probabilities are less than 0.5 are not practically important.

The parameter λ can be found using (32a). In Appendix D (proof of Lemma 3) we obtain ∂h(p, ω)/∂ω_ℓ in (62), from which λ can be derived as in (33), with f(x) = x(ln x − 1). It is clear from (33) that λ < 0 always holds. On the other hand, from (65) in Appendix D, λ is a strictly increasing function of ω_ℓ, and conversely ω_ℓ is a strictly increasing function of λ. Thus, using the fact that the ω_ℓ values must satisfy (32d), we can derive boundaries for the acceptable range of λ. We know that ω_ℓ ∈ (0, 1), and based on (33), choosing ω_ℓ in the neighborhood of zero results in λ → −∞. This means that if λ is less than a threshold value, all ω_ℓ values will be near zero and their sum cannot add up to one to satisfy (32d). Therefore, the acceptable range for λ is λ ∈ (λ_min, λ_max), as stated in (37). Recall that ω_ℓ is a strictly increasing function of λ: if λ > λ_max then Σ_{ℓ∈C} ω_ℓ > 1, and if λ < λ_min then Σ_{ℓ∈C} ω_ℓ < 1. Now that λ is bounded, we can adopt a binary search to find its value, as outlined in Algorithm 2. In line 6 of that algorithm it is necessary to calculate the ω_ℓ values, which are bounded to the interval (0, 1) and must satisfy (33) for the given λ. To this end, we consider the function e(ω_ℓ), defined as the difference between the two sides of (33); the root of e(ω_ℓ) is the optimal value of ω_ℓ. Based on the fact that λ is a strictly increasing function of ω_ℓ, a binary search can again be used to find the optimal ω_ℓ, as proposed in Algorithm 3.

By using Algorithms 1, 2 and 3 we can obtain the optimal CSP p and the optimal inter-cluster resource allocation factors ω_ℓ, ℓ ∈ C. The inter-cluster power allocation factors δ_ℓ, ℓ ∈ C, can then be readily found from the closed-form expression (27). It should be pointed out that Algorithms 1, 2 and 3 operate jointly to find the optimal CSP and resource allocation factors. Specifically, Algorithm 1 performs a binary search on the CSP p of the clusters and finds the optimal value in log₂(1/ε) iterations. In each iteration it calls Algorithm 2, which also performs a binary search to find the proper value of λ in log₂(1/ε) iterations. Likewise, Algorithm 2 in each iteration calls Algorithm 3 to find the ω_ℓ values by yet another binary search. These three nested binary searches find the optimal ω_ℓ values and the CSP p of all L clusters in L (log₂(1/ε))³ iterations, which grows linearly with the number of clusters L. In contrast, an exhaustive search would need to evaluate (1/ε)^{2L+1} states to find the optimal ω_ℓ values, optimal δ_ℓ values and optimal CSP of the users, which grows exponentially with the number of clusters L. Thus, the computational complexity of our proposed method is much lower than that of exhaustive search.
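The outer loop of Algorithm 1 can be sketched as the following binary search; here `h` and `solve_omegas` stand in for the closed form (28) and for Algorithms 2-3, respectively, and are assumed to be supplied by the caller.

```python
def optimize_csp(h, solve_omegas, eps=1e-3):
    """Outer binary search of Algorithm 1: find the CSP p at which the sum of
    cluster power factors h(p, omega) equals 1. 'solve_omegas(p)' stands in
    for Algorithms 2-3 (the nested searches over lambda and omega) and must
    return resource allocation factors summing to one."""
    p_lo, p_hi = 0.5, 1.0
    while p_hi - p_lo > eps:
        p = 0.5 * (p_lo + p_hi)
        omega = solve_omegas(p)
        if h(p, omega) < 1.0:      # power left over: the CSP can be raised
            p_lo = p
        else:                      # power demand exceeds budget: lower the CSP
            p_hi = p
    return p_lo, solve_omegas(p_lo)
```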
V. PROPOSED USER CLUSTERING ALGORITHMS

Building on the results of the previous section, in this section we propose user clustering algorithms for the following three cases: 1) the number of users in each cluster, |C_ℓ|, is fixed; 2) the total number of clusters L is fixed, but the number of users in each cluster may vary; 3) both |C_ℓ| and L are variable. All three algorithms are developed on the same principle of minimizing the power consumption of all clusters according to the closed-form expression (27) for the power allocation factor of each cluster. We consider constant values for the resource allocation factor ω_ℓ and the target success probability p. In the clustering step, the goal is to find users who can cooperate best in a NOMA setting, in the sense that they need the least power to achieve a given target success probability. After the clustering structure is found, the optimal power allocation factors, resource allocation factors and optimal CSP of the users are determined based on the total available power and resource at the transmitter, according to the results of the previous section. We also investigate by simulation the impact of the initial CSP value on the performance of user clustering, and show that even without iterating over multiple initial CSP values our proposed algorithms outperform existing algorithms (see Section VII). Therefore, in developing the clustering algorithms we assume that the resource is allocated equally to all clusters and take p = 0.95 as the target success probability (any other value could be chosen). To derive the cost metric δ_ℓ for any cluster C_ℓ, it is necessary to select the optimal decoding order π_ℓ according to (18).

A. CASE 1: EQUAL NUMBER OF USERS IN ALL CLUSTERS

Let K be the total number of users and N the number of users in each cluster. Then the number of clusters is L = ⌈K/N⌉ (the number of users in the last cluster may be less than N if N does not divide K). To initialize the clustering algorithm, we assume that the total available resource is divided equally among the clusters, i.e., ω_ℓ = 1/L, ℓ ∈ C, and we consider an arbitrarily given target success probability, for example p = 0.95. We first sort the users in ascending order of the parameter β_{π_k} defined in (17). The first user in the list is simply selected as the first user of the first cluster. To choose the second user of the first cluster, we examine every remaining user in the list together with the first user, forming a candidate two-user cluster, and calculate δ_ℓ for each candidate according to (27) with ω_ℓ = 1/L, p = 0.95 and the optimal decoding order in (18). The user yielding the lowest δ_ℓ is then chosen as the second user of the first cluster.
The same procedure is then repeated to choose the 3rd, 4th, . . . , Nth users of the first cluster. After selecting the Nth user of the first cluster, we continue with the same procedure to create the next clusters until all users are clustered. Algorithm 4 provides pseudo-code for this clustering scheme. Ignoring the complexity of selecting the first user in each cluster, for selecting the second user of the first cluster δ_ℓ must be calculated K − 1 times, for selecting the third user K − 2 times, and so on. Thus, the computational complexity of Algorithm 4 is at most Σ_{j=1}^{K−1} j = K(K − 1)/2 evaluations of δ_ℓ, which increases polynomially with the total number of users K. It should also be pointed out that Algorithm 4 is a static algorithm, since all the users must be available before running it.

B. CASE 2: FIXED NUMBER OF CLUSTERS L

Recall that Algorithm 4 assumes the number of users in each cluster is fixed, which also means the number of clusters is fixed. For the case considered in this subsection, we relax that constraint and require only that the total number of clusters is fixed, with no constraint on the number of users in each cluster. To put K users into L clusters, we first sort the list of users in ascending order of the parameter β_{π_k} in (17). Then we take the first L users of the sorted list (those with the weakest channels) and put one into each of the L clusters, so that after this step each cluster has one user. For clustering the rest of the users, following the sorted list, we calculate δ_ℓ, ℓ ∈ {1, 2, . . . , L}, for each user under the assumption that it joins cluster C_ℓ, and select the cluster that results in the minimum value of δ_ℓ after adding that user. Algorithm 5 gives pseudo-code for this clustering scheme. This algorithm can also be deployed in a dynamic scenario, since any newly arrived user can join one of the existing clusters based on the criterion of minimizing δ_ℓ without changing the whole clustering structure. Sorting the users by β_{π_k} in advance has the benefit of simplifying the calculation of δ_ℓ, as explained next. In calculating δ_ℓ for a cluster, it is necessary to select the optimal decoding order for that cluster according to (18); but if the users are sorted first, each user who joins a cluster will be the last user in the optimal decoding order of that cluster. For the newly arrived users in a dynamic scenario, however, the optimal decoding order must be recalculated. Ignoring the complexity of clustering the first L users, for clustering each remaining user δ_ℓ must be calculated L times. Thus, the computational complexity of Algorithm 5 is proportional to (K − L)L, which increases polynomially with the number of users K and the number of clusters L.
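The greedy principle shared by Algorithms 4 and 5 can be sketched as follows for the fixed-cluster-size case. Here `delta_cost` is a placeholder for the closed-form power demand (27), and each user object is assumed to expose its β value as `u.beta`; both are illustrative assumptions.

```python
def cluster_fixed_size(users, N, delta_cost):
    """Greedy clustering in the spirit of Algorithm 4: users are sorted by
    ascending beta; each cluster is seeded with the weakest unclustered user
    and grown by repeatedly adding the candidate that minimizes the cluster
    power demand delta_cost(cluster)."""
    remaining = sorted(users, key=lambda u: u.beta)
    clusters = []
    while remaining:
        cluster = [remaining.pop(0)]             # weakest user seeds the cluster
        while len(cluster) < N and remaining:
            best = min(remaining, key=lambda u: delta_cost(cluster + [u]))
            remaining.remove(best)
            cluster.append(best)
        clusters.append(cluster)
    return clusters
```

For the fixed-number-of-clusters case (Algorithm 5), the same cost `delta_cost` is evaluated once per cluster for each arriving user, and the user joins the cluster with the smallest resulting power demand.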
C. CASE 3: VARIABLE CLUSTER SIZE |C_ℓ| AND VARIABLE CLUSTER COUNT L

In this case, we examine the most general scenario, in which the number of users in each cluster as well as the total number of clusters are variable. Considering the latency and computational complexity of SIC, it is reasonable to set limits L_min and L_max on the minimum and maximum numbers of clusters, respectively. In general, when the number of clusters decreases, more resource can be allocated to each cluster; on the other hand, as the number of users in each cluster increases, each cluster needs more power to achieve a target success probability, and the computational complexity and latency of SIC also increase for a larger cluster. In this case, we employ Algorithm 5 to search over all numbers of clusters L in the range {L_min, L_min + 1, . . . , L_max}. For each value of L, we cluster the users according to Algorithm 5 and, assuming a target common success probability (such as p = 0.95) and equal resource allocation (ω = [1/L, 1/L, . . . , 1/L]), derive the sum of the power allocation factors of the clusters according to the closed-form expression h(p, ω) given in (28). We then choose the clustering that results in the minimum sum of power allocation factors over all clusters. Algorithm 6 provides pseudo-code for this clustering scheme. Since this algorithm runs Algorithm 5 in each iteration, its computational complexity is proportional to Σ_{L=L_min}^{L_max} (K − L)L ≈ (L_max − L_min + 1)(K − L_avg)L_avg, where L_avg = (L_min + L_max)/2. Thus, the computational complexity of this algorithm still increases polynomially with the number of users K and the number of clusters L.

VI. THE COMPLETE USER CLUSTERING, POWER AND RESOURCE ALLOCATION SCHEME

In the previous sections we developed and presented the user clustering algorithms, the inter-cluster power and resource allocation scheme, and the intra-cluster power allocation and decoding order selection separately. In this section, we combine them into a unified procedure that can be implemented at the BS to organize users into clusters and allocate power and resource so as to guarantee fairness among the users. Recall that we require the statistical CSI, containing the mean and variance of each user's SNR, to be reported to the BS via feedback channels once in every coherence time interval. The user clustering algorithm and the resource allocation can have separate update intervals: for instance, if the resource allocation update interval is T, then clustering can use an update interval of kT to reduce the computational complexity. In all calculations we assume that the rates and SNRs of the users are normalized by the total available power P_T and total resource W_T. Thus, if a user reports ψ̄_{ν_{ℓ,i}} and σ²_{ψ_{ν_{ℓ,i}}}, the mean and variance of its SNR normalized by the δ_ℓ P_T and ω_ℓ W_T of its cluster, then the BS should replace them with γ̄_{ν_{ℓ,i}} and σ²_{γ_{ν_{ℓ,i}}}, respectively, which according to (4) are

γ̄_{ν_{ℓ,i}} = (ω_ℓ/δ_ℓ) ψ̄_{ν_{ℓ,i}},   σ²_{γ_{ν_{ℓ,i}}} = (ω_ℓ/δ_ℓ)² σ²_{ψ_{ν_{ℓ,i}}}.

Likewise, the BS has to normalize the downlink rates of the users by the total available resource W_T. In Section III, we derived the optimal intra-cluster decoding order and power allocation factors of the users in (18) and (19), assuming that the total power P_T and resource W_T of the transmitter are allocated to cluster C_ℓ (δ_ℓ = 1 and ω_ℓ = 1). To extend those results to the general case in which δ_ℓ and ω_ℓ are not necessarily equal to one, we replace the rate r with r/ω_ℓ and the mean SNR γ̄_{π_{ℓ,k}} with (δ_ℓ/ω_ℓ) γ̄_{π_{ℓ,k}} in the definition of the parameter β_{π_{ℓ,k}} in (17) and in the intra-cluster power allocation factors (19). Thus, the optimal decoding order is based on the ascending order of the generalized parameter β_{π_{ℓ,k}} defined in (43). However, since δ_ℓ and ω_ℓ do not change for the users inside a cluster, deriving the optimal decoding order based on (17) or (43) gives the same result; since (17) is more compact, we always use it for selecting the optimal decoding order. Performing the same variable replacements in (19) yields the generalized intra-cluster power allocation factors (44).

FIGURE 1. Flowchart of the complete proposed scheme for user clustering, power and resource allocation at the base station.
Finally, the complete procedure for user clustering, power and resource allocation is summarized in the flowchart of Figure 1 and elaborated further below.

1) Obtain the means and variances of the SNRs of all users from the feedback channels.
2) Calculate the shape factor m of the Nakagami-m fading channel for each user according to (6).
3) Initialize (or reinitialize) a target common success probability (CSP) for the user clustering algorithm.
4) Based on the predefined assumption about the cluster size and total number of clusters (i.e., fixed or variable), run one of Algorithms 4, 5 or 6 to cluster the users.
5) Run Algorithm 1 to obtain the optimal CSP p and the optimal inter-cluster resource allocation factors ω_ℓ of all clusters (Algorithm 1 calls Algorithms 2 and 3 internally).
6) If the optimal CSP obtained in Step 5 is close enough to the initial CSP value (e.g., the absolute difference is less than 0.05), continue to Step 7; otherwise, go to Step 3 and reinitialize the CSP with the value obtained in Step 5.
7) Derive the optimal decoding order for the users of each cluster based on the ascending order of the parameter β_{π_{ℓ,k}} defined in (17).
8) Use equation (27) to compute the optimal inter-cluster power allocation factor δ_ℓ of each cluster and obtain the optimal intra-cluster power allocation factor α_{π_{ℓ,k}} of each user according to (44). The value δ_ℓ α_{π_{ℓ,k}} is then the proportion of the total power P_T allocated to the kth user in the optimal decoding order of the ℓth cluster.
9) Form the signal to be transmitted to each cluster by superposition coding according to (2) and send it to the users of that cluster; each user then performs SIC to obtain its own signal.

If the BS follows the above procedure, fairness among the users is guaranteed in terms of the outage (or success) probability, i.e., the minimum success probability among them is maximized. It is pointed out that, according to (2), the BS does not need the optimal inter-cluster power allocation factor δ_ℓ and the optimal intra-cluster power allocation factor α_{π_{ℓ,k}} separately to form the superimposed signal for each cluster; it only needs their product δ_ℓ α_{π_{ℓ,k}}, which specifies the proportion of the total power P_T that should be allocated to user U_{π_{ℓ,k}} and can be derived directly from (44) by moving δ_ℓ to the other side of the equation. However, we obtain them separately to keep the logical flow, improve the modularity and readability of the paper, and emphasize the fact that the closed-form expression for the inter-cluster power allocation factor δ_ℓ can be used as a cost metric for the user clustering algorithms.

A. COMPUTATIONAL COMPLEXITY ANALYSIS

To complete Section VI, we analyze the computational complexity of our proposed scheme for user clustering, power and resource allocation. It is noteworthy that the main loop of the proposed scheme, which iterates over multiple initial target common success probabilities (CSPs), only affects the performance of the clustering algorithms, since the power and resource allocation algorithms establish fairness among the users for any given clustering. Besides, in Section VII-D we show that even without iterating over this loop, using only a fixed initial target CSP such as p = 0.95, our proposed scheme outperforms existing works.
However, if the computing power at the BS and the latency constraints of the system allow, performing a few iterations (fewer than 5) of the main loop decreases the gap between the initial target CSP and the optimal CSP. Consequently, this improves the performance of the user clustering algorithm and increases the value of the optimal CSP (see Section VII-B for more details). We therefore analyze the computational complexity of one iteration of the complete proposed scheme, as depicted in the flowchart of Figure 1. The first step of the proposed scheme acquires the statistical CSI of the users and should be performed periodically, once per coherence time interval of the channels. If a user fails to send CSI feedback to the BS within the coherence time interval, it can be omitted from the set of users or served with its previously reported (possibly outdated) CSI; incorporating these details is outside the scope of this paper. We assume that K users have reported their statistical CSI to the BS, and we derive efficient algorithms to cluster these users and allocate power and channel resources such that the minimum success probability among them is maximized. Requiring only the statistical CSI is also the most practical assumption, as it incurs the minimum signaling overhead compared to other assumptions, especially the assumption of perfect instantaneous CSI at the BS made in many other papers (see Table 1). Thus, we skip the computational complexity of collecting the statistical CSI of the users, which can be performed periodically over the feedback channels. Since the derived expressions for the Gamma distribution of the users' SNR are in closed form, and initializing the target CSP is a constant parameter selection, these can be ignored in the computational complexity of the proposed scheme. For the next major step, clustering the users, one of Algorithms 4, 5 and 6 is used. We showed that the computational complexity of these algorithms increases polynomially with the numbers of users and clusters: for K users, none of them requires more than O(K²) iterations to perform the clustering, so we take O(K²) as the computational complexity of the clustering step. The next step finds the optimal CSP and resource allocation factors according to Algorithm 1; as discussed in the last paragraph of Section IV, with an acceptable error ε in all parameters, the computational complexity of Algorithm 1 is L (log₂(1/ε))³, which increases linearly with the number of clusters L. All the remaining steps of the scheme calculate parameters such as the decoding order and power allocation factors from closed-form expressions, so their computational complexity is negligible. Therefore, the overall computational complexity of our proposed scheme is proportional to K² + L (log₂(1/ε))³ operations. In contrast, using an exhaustive search to find the K optimal user power allocation factors and L cluster resource allocation factors with precision ε requires investigating (1/ε)^{K+L} states, which increases exponentially with the number of users K and the number of clusters L.
Moreover, considering all the possible clusterings and decoding orders of the users with a fixed N = K/L users in each cluster, the number of states in the exhaustive search increases exponentially with the numbers of users and clusters. Hence, our proposed scheme significantly decreases the computational complexity of solving the problem. We also evaluate the run time of the complete proposed scheme by simulations in Section VII-D.

VII. SIMULATION RESULTS

In this section, the performance of the proposed algorithms is evaluated by simulations and compared to that of existing algorithms. All simulations were executed on a laptop with an Intel(R) Core(TM) i5-5200U CPU @ 2.20 GHz and 8 GB of RAM.

A. PERFORMANCE OF THE POWER AND RESOURCE ALLOCATION SCHEME

In this subsection we investigate the performance of our proposed power and resource allocation scheme and compare it to the following power and resource allocation schemes:

1) Equal allocation: power and resource are allocated equally to all clusters.
2) Proportional allocation: power and resource are allocated to each cluster in proportion to the ratio of the number of users in that cluster to the total number of users.
3) Method of [24]: power is allocated to the users according to the distributed power control method proposed in [24] (for more details, see Equations (25), (26) and (28) in [24]). Since no resource allocation scheme is proposed in that paper, we use proportional resource allocation in this case.

For the first two inter-cluster power and resource allocation schemes, we employ our proposed intra-cluster power allocation to maximize the minimum success probability of the users inside each cluster separately; for the third scheme we use the power allocation method proposed in [24]. The main goal of our proposed scheme is to establish fairness among all the users. Thus, we first compare the performance of these schemes using Jain's index [25] in terms of the success probability of the users; this metric has been adopted in many works (e.g., [11], [26]) to evaluate fairness among users. Jain's index for the success probabilities p_1, . . . , p_K of K users is defined as

J = (Σ_{k=1}^{K} p_k)² / (K Σ_{k=1}^{K} p_k²).

If the success probabilities of all the users are equal, Jain's index attains its maximum value of one. In the worst case, where all the success probabilities are zero except for one user, the index attains its minimum value of 1/K. To obtain different values of the sum rate, all the users' rates are multiplied by an appropriate constant factor. It is clear from Figure 2 that as the sum rate increases, the performance of our proposed scheme stays the same and fairness is maintained among all users, whereas for the other schemes Jain's index decreases quickly. Recall that the goal of our proposed scheme is to maximize the minimum success probability of the users. Thus, in the same simulations performed for Jain's index, we also compare the minimum success probability of the users across these schemes; the results are plotted in Figure 3. The results show that, by establishing fairness among all users, our proposed algorithm significantly improves the minimum success probability of the users compared to the other power and resource allocation schemes.
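Jain's index is straightforward to compute, as the following minimal sketch shows.

```python
import numpy as np

def jains_index(success_probs):
    """Jain's fairness index: J = (sum p_k)^2 / (K * sum p_k^2).
    J = 1 when all users have equal success probability; J = 1/K in the
    most unfair case of a single nonzero probability."""
    p = np.asarray(success_probs, float)
    return p.sum() ** 2 / (len(p) * (p ** 2).sum())

print(jains_index([0.9, 0.9, 0.9, 0.9]))  # 1.0: perfectly fair
print(jains_index([0.9, 0.0, 0.0, 0.0]))  # 0.25 = 1/K: most unfair
```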
B. IMPACT OF ITERATION OVER MULTIPLE INITIAL CSP VALUES
In this section, we investigate the impact of the initial CSP value on the performance of our proposed scheme. To this end, 30 users are generated with random parameters as explained before, and Algorithms 4, 5 and 6 are run separately. We perform 8 iterations over the loop of the proposed scheme and reinitialize the CSP value of the clustering with the optimal CSP obtained in the last iteration, as described in the flowchart of Figure 1. Figure 4 plots the averages of the optimal CSP values over 100 simulation runs versus the number of iterations for different clustering algorithms. We set the first 'initial CSP' value to be 0.8 (i.e., in iteration 0). It is clear that as the initial CSP value gets closer to the optimal CSP value, the clustering algorithm performs better and the optimal CSP of users increases. This is because changing the initial CSP value affects both the optimal decoding order and the δ values for each cluster. Thus, by selecting an initial value of the CSP closer to the optimal CSP, the clustering algorithm determines the power demand and optimal decoding order of each cluster more accurately. In addition, this simulation shows that the proposed scheme converges very quickly, after only a few iterations, to the optimal CSP of users. Thus, to keep the computational complexity of our proposed scheme as low as possible, in the next two subsections we only consider a predefined CSP value of 0.95 and show that even without iterating over multiple CSPs, our proposed scheme still outperforms existing schemes.

C. PERFORMANCE OF USER CLUSTERING ALGORITHMS
In this section we evaluate the performance of Algorithms 4, 5 and 6 in terms of the minimum success probability of users. In addition to our proposed algorithms, we also consider two other algorithms for comparison. The first one is random user clustering, which does not utilize the statistical CSI of users for clustering and represents a lower bound on the performance of the other user clustering algorithms. In the simulation results this algorithm is labeled as 'Random clustering'. The second algorithm is the method proposed in [14], which is designed to cluster users into two-user clusters. In that method, users are sorted based on their average SNRs. Then, the first and last users are paired together, the second user and the one before the last user are paired, and in general, the kth user is paired with the (K − k + 1)th user, where the total number of users K is assumed to be even (refer to Theorem 3 in [14] for more details). In the simulation results this method is labeled as 'Method of [14]'. In order to focus on the impact of the user clustering algorithms on performance, we implement our proposed power and resource allocation scheme for all the aforementioned clustering algorithms. As before, we consider 30 random users and scale their sum rate up or down by multiplying all the rates by a constant scale factor. Figure 5 plots the minimum success probability of the users against their sum rate. It is clear that our proposed algorithms outperform the two other reference algorithms. Comparing our proposed clustering algorithms, it can be seen that as we relax the constraints on the number of users in each cluster and the total number of clusters, the performance of the clustering algorithm improves. This is expected, as a higher degree of freedom should help to form a better clustering structure.
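For reference, the pairing rule of [14] described above can be sketched in a few lines; this is an illustrative rendering of that rule, not code from either paper.

```python
def pair_users_by_snr(avg_snrs):
    """Sort users by average SNR and pair the k-th with the (K-k+1)-th,
    i.e. strongest with weakest; the method assumes an even number of users."""
    order = sorted(range(len(avg_snrs)), key=lambda i: avg_snrs[i])
    K = len(order)
    assert K % 2 == 0, "the method of [14] assumes K is even"
    return [(order[k], order[K - 1 - k]) for k in range(K // 2)]

print(pair_users_by_snr([3.1, 0.4, 7.9, 1.2]))  # [(1, 2), (3, 0)]
```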
D. PERFORMANCE OF THE COMPLETE PROPOSED SCHEME
In this section, we evaluate the performance of our complete proposed scheme for user clustering and resource allocation and compare it with the following reference methods: 1) clustering method of [14] + equal resource allocation + power allocation of [24]; 2) random clustering + equal resource allocation + power allocation of [24]; 3) an OMA technique such as TDMA + our proposed power and resource allocation. It is noteworthy that our proposed scheme is capable of allocating resources to singleton clusters, which consist of only one user. By considering all the clusters to be singletons, our proposed hybrid NOMA-OMA scheme reduces to pure OMA, since all the users then use orthogonal resources. Thus, in the third reference method we consider singleton clusters to clarify the superiority of the hybrid NOMA-OMA scheme in comparison to pure OMA. In the simulations, we compare the Jain's index, minimum success probability and sum throughput of users for the proposed and reference algorithms. The simulations are repeated 100 times and the averages of the obtained results are plotted in Figures 6, 7 and 8. In each repetition, K = 30 users with random CSI and rate parameters are simulated. Then, for each value of r, the rates of all users are scaled up or down by an appropriate constant. It is clear that as r increases, our proposed schemes outperform the other reference algorithms in all the considered performance metrics. It is notable that the curves of our proposed clustering algorithms are very close to one another and appear overlapped. Figure 9 depicts the runtime of our proposed scheme versus the number of clusters. From this figure, it is seen that the computational complexity of our proposed scheme increases almost linearly with the number of clusters, which is consistent with the complexity analysis given in Section VI-A. In this simulation, for different numbers of random users, the proposed scheme is repeated 500 times and the average runtime of the whole scheme is calculated. The precision of calculating parameters such as resource allocation factors is set to ε = 10⁻³. It is noteworthy that we implemented the scheme in single-threaded mode; however, parallelism and multi-threading are possible for the implementation of the binary searches of Algorithms 1, 2 and 3, which should reduce the runtime of the proposed scheme.

VIII. CONCLUSION
In this paper, we have tackled the problem of optimizing user clustering, power allocation to users, resource (time slot or bandwidth) allocation to clusters, and the decoding order in each cluster for the downlink of a hybrid NOMA-OMA system operating over Nakagami-m fading channels. In a hybrid NOMA-OMA system, users are organized into several clusters, where the clusters use an orthogonal multiple access scheme to share channel resources while the users in each cluster employ power-domain NOMA. The goal was to maximize the minimum success probability (or, equivalently, minimize the maximum outage probability) among all users. We first proved that at the optimal solution of the problem, all the users have a common success probability (CSP). We then proposed an efficient algorithm for finding the optimal CSP and resource allocation factors of clusters simultaneously.
We also derived the inter-cluster power allocation factor for each cluster, the intra-cluster power allocation factor for each user, and the optimal decoding order of users inside each cluster in closed-form expressions based on the CSP, the statistical CSI of users and the resource allocation factor of each cluster. We proposed efficient algorithms for user clustering under three different scenarios in which the number of users in each cluster and/or the total number of clusters are fixed or variable. All three algorithms were developed based on the same principle of minimizing the power consumption of each cluster while achieving a given target success probability. Simulation results show that our proposed schemes for user clustering, power and resource allocation outperform existing schemes not only in terms of fairness and the minimum success probability of users, but also in terms of the sum throughput. An interesting topic for future work is to develop efficient user clustering and resource allocation methods for the uplink of a NOMA system operating over Nakagami-m fading channels in order to guarantee fairness among users.

APPENDIX A
The closed-form expressions for the power allocation factors in (19) are derived based on these recursive equations. By using (56), the inequality (55) reduces to 1/β_{π,2} < 1/β_{π,1}, which is always true according to the optimal decoding order condition (18). Proving the conditions (21) for the other values of i = 2, ..., N is straightforward by following the same method. Therefore, the proof of the theorem is complete.

APPENDIX B
ALGORITHM FOR FINDING THE OPTIMAL CSP OF USERS FOR ONE CLUSTER
[Algorithm 7 - Finding the optimal CSP p of users in cluster C. Input: m_{π_i}, γ̄_{π_i}, r_{π_i} for all i ∈ I, and the precision ε.]
In Algorithm 7 we recall the algorithm proposed in [19] as a reference, to facilitate comparison with its extended version developed in this paper, namely Algorithm 1. Algorithm 7 is designed to find the optimal CSP of K users when all of them are grouped into one cluster. In this algorithm, ε is the precision of calculating the common success probability (CSP). Algorithm 1 finds both the optimal CSP and the optimal inter-cluster resource allocation factors across clusters when users are grouped into several clusters.

APPENDIX C
PROOF OF LEMMA 1
In [10] a similar lemma is proved for the case of Rayleigh fading. For the case of Nakagami-m fading, the success probability is derived in (15) as p_{π,k} = Q(m_{π,k}, m_{π,k} γ^{th}_{π,k} / γ̄_{π,k}). The function Q(·, ·) is a strictly decreasing function of its second parameter, and according to (13) γ^{th}_{π,k} is a strictly decreasing function of α_{π,k}. Thus, the success probability p_{π,k} is a strictly increasing function of the power allocation factor α_{π,k}. The lemma can be proved by contradiction. Suppose that at the optimal solution, the success probabilities of all clusters are not the same. Some clusters then have the minimum success probability; denote this set of clusters by C̃ = arg min_{C∈C} p_C and the rest of the clusters by C̄, so that C̃ ∪ C̄ = C and C̃ ∩ C̄ = ∅. Since the success probability of each cluster is a strictly increasing function of the power allocation factor of that cluster, we can find an appropriately small positive value ε such that by subtracting ε from all the power allocation factors of clusters in C̄, and adding ε|C̄|/|C̃| to the power allocation factors of clusters in C̃, the minimum success probability of the clusters is increased, which contradicts the optimality of the solution. This proves the lemma.
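Numerically, the success probability used in the proof above is simply a regularized upper incomplete gamma function, since the instantaneous SNR under Nakagami-m fading is Gamma-distributed; the following sketch, with placeholder parameter values, illustrates this using SciPy.

```python
from scipy.special import gammaincc  # regularized upper incomplete gamma Q(a, x)

def success_probability(m, gamma_bar, gamma_th):
    """P(SNR > gamma_th) = Q(m, m * gamma_th / gamma_bar) under Nakagami-m
    fading, where gamma_bar is the average SNR of the link."""
    return gammaincc(m, m * gamma_th / gamma_bar)

# Illustrative values: shape m = 2, average SNR 10 dB, threshold 5 dB.
print(success_probability(2.0, 10.0, 10 ** 0.5))  # ~0.87
```

Because Q(·, ·) decreases in its second argument, this confirms numerically that the success probability grows with the average SNR, exactly the monotonicity the proof relies on.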
APPENDIX D
PROOF OF LEMMA 3
To prove Lemma 3, note that the objective function and all the constraints of problem (30) are linear except for (30b). Thus, to prove convexity, it suffices to prove that h(p, ω) in constraint (30b) is a convex function. To this end, we investigate the positivity of the second-order derivatives of h(·, ·) with respect to its parameters. To derive h_ωω(p, ω) = ∂²h(p, ω)/∂ω², we use the parameter defined in (62). Subsequently, by using f(x) = ln x and some straightforward algebraic manipulations, the second-order derivative can be expressed as a sum of nonnegative terms involving ln²(2) · (Σ_{j=1}^{i} r_{π,j}/ω)² · 2^{Σ_{j=1}^{i} r_{π,j}/ω}/ω and (1/β_{π,|C|}) ln(2) · Q(a, b), so that h_ωω(p, ω) ≥ 0. Turning to the derivative with respect to p, by utilizing (80) and considering y = 2^a, we have Q(a + 1, a + 2) = 1 − P(a + 1, a + 2). Thus, the inequality Q(m_{π,i}, m_{π,i} + 1) ≤ 0.5 is always true, and in the interval 0.5 ≤ p ≤ 1 the inequalities (79) and (78) are also always true. This proves that h_pp(p, ω) ≥ 0 and completes the proof of Lemma 3.
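The key inequality Q(m, m + 1) ≤ 0.5 invoked above can also be sanity-checked numerically; the snippet below is our own check using SciPy's regularized upper incomplete gamma function.

```python
import numpy as np
from scipy.special import gammaincc  # Q(a, x)

# Q(m, m + 1) <= 0.5 across a wide range of Nakagami shape parameters m:
# the median of a Gamma(m, 1) variable lies below m + 1, so the upper
# tail beyond m + 1 holds less than half the probability mass.
m = np.linspace(0.5, 200.0, 4000)
assert np.all(gammaincc(m, m + 1.0) <= 0.5)
print("Q(m, m+1) <= 0.5 verified on the sampled range")
```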
Functional recovery of a resilient hospital type
Four adaptation options for 'Nightingale'-type hospital ward buildings, devised with practising clinicians, are presented and evaluated. The adaptations recover functionality in an archaic ward configuration by delivering care to current UK National Health Service (NHS) models whilst preserving resilience to summer overheating. The investigation builds on recent work that demonstrates the significant resilience to heatwaves enjoyed by such traditionally constructed communal dormitories, the dominant UK hospital type between the late 1850s and 1939. Nightingale wards are potentially well ventilated naturally, with good dilution of airborne pathogens. Although condemned as outdated by health ministers in recent years, many remain in use. As financial retrenchment suggests economical, creative refurbishment of hospitals will be required rather than new-build and replacement, the authors argue for health estates' strategies that place value on resilience in a changing climate. Proposed adaptation options are investigated to assess the resulting internal airflows and patient exposure to airborne pathogens. Options are costed and payback periods calculated to the standard public sector methodology. The proposed adaptations save time and cost over new-build equivalents. Selection of the most appropriate option is dependent on the characteristics of the patient cohort and the care required.

Introduction
In 2001 the UK government demanded that the National Health Service (NHS) abandon the traditional healthcare model of a communal hospital ward and adopt the principle of single patient rooms, hitherto reserved for the very unwell and the privately insured. Although presented as a patient-facing 'consumerist' policy, part of a comprehensive NHS Modernization programme, more complex performance drivers preoccupying the UK Department of Health (DH) at the time determined the policy shift: increased space standards to facilitate inclusiveness in the implementation of the consumerist agenda, in effect destabilizing existing ward geometries (Department of Health, 2008a); poor infection control statistics at a significant number of acute hospitals, damaging public confidence and adding significantly to healthcare cost (Plowman et al., 2001); patient pressure to achieve universal single-sex accommodation across the NHS Estate (NHS, 2013a); achieving patient privacy and 'Dignity on the ward' (Department of Health, 2008a); and the policy for wholesale replacement of the retained NHS Estate through public-private partnerships (PPPs) (Pollock, Shaoul, & Vickers, 2002). What came to be known as the Darzi Report, published in 2008 (Department of Health, 2008b), further stressed the empowerment of patients and the principle of dignity and respect towards patients, although it was curiously mute on policy directions for the physical environment for healthcare. Nonetheless, the 2008 Health Building Note 04-01 (Department of Health, 2008a) specifically recommended that 50% of beds should be in single rooms, reminding readers that this merely repeated the advice given in the original Health Building Note 4 published in 1997 (Department of Health, 2013). The 2001 Labour Party manifesto, 'Renewing Public Services: NHS Reform', promised that a re-elected Labour government would 'create a new type of hospital - specially built surgical units, managed by the NHS or the private sector ...' (Labour Party, 2001).
Pre-1949 inpatient wards built to Florence Nightingale's original mid-19th-century specifications as open-plan dormitories for 24-30 patients (Nightingale, 1859) were specifically condemned. The DH put forward £120 million for their replacement, stating that:

Older hospitals may care for older people on Nightingale wards - wards where staff can find it difficult to provide an appropriate environment for older people. £120 million will be spent over three years, making many of these wards into wards particularly for the use of older people. This will bring in more four-bedded bays, with more privacy and peace; rooms available for private conversations; and single rooms for those who are most vulnerable.

the potential disadvantages of this form of accommodation (single-bed wards) include: reduced social interaction with other patients leading to isolation, less surveillance by staff, an increased likelihood of rescue failure, and higher rates of slips, trips and falls. (Maben, 2009)

Maben referred to Mooney (2008) and Snow (2008) for concerns about the impacts on nursing staff, who argued that single rooms require greater nursing input. She also cited Roger Ulrich's highly influential arguments for single-room hospital wards, delivered as key findings of his group's research into 'evidence-based design' (EBD) in healthcare, of which he was the originator. Ulrich et al., reporting on a collective literature review carried out in 2004, advised:

The findings further support the importance of improving outcomes for a range of design characteristics or interventions, including single-bed rooms rather than multi-bed rooms, effective ventilation systems, a good acoustic environment, nature distractions and daylight, appropriate lighting, better ergonomic design, acuity-adaptable rooms, and improved floor layouts and work settings. (Ulrich et al., 2008)

Sadler et al., citing Ulrich, were explicit: 'create 100% single-patient rooms'. In their summary of the cost of their recommended improvements they concluded: 'Single-patient rooms are now the standard for new hospital construction and, therefore, no additional costs are assumed' (Sadler, DuBose, Malone, & Zimring, 2008). Ulrich was invited to Britain to advise government; the single-room policy remains fundamentally unaltered (Department of Health, 2010). However, despite the very positive reception of this work across the NHS (NHS Scotland, 2011), the deteriorating financial situation implies that an assumed 'nil cost' is not applicable in the UK. The intense resourcing challenge faced by all NHS Trusts militates against new buildings. Furthermore, Trusts also face the probability of an increasing frequency of extreme climate events, not least summer heatwaves, whilst being obligated to deliver energy and carbon reductions to ambitious NHS Carbon Reduction Strategy targets (NHS Sustainable Development Unit, 2010). Overheating in hospitals due to heatwaves is documented as impacting on the health and well-being of patients, staff and visitors (Carmichael et al., 2012). National Statistics report a clear correlation between mortality in hospitals and heatwaves in England and Wales (Kovats, Johnson, & Griffiths, 2006, p. 8), concluding:
After accounting for the usual pattern of mortality by place of death, a larger than expected proportion of the excess deaths in the elderly occurred in hospitals and nursing homes; and in the non-elderly population, there was a large excess of mortality observed in nursing and residential homes, although the absolute numbers of deaths were small.

The report attributes more than 2,000 deaths to the August 2003 heatwave in England and Wales. Members of the nursing profession have consistently expressed reservations about this policy. For example, nursing staff reported in interviews with the authors (at Bradford Royal Infirmary (BRI) in 2011) and in published articles their belief that the communal Nightingale ward arrangement is actually beneficial in certain care settings: for older patients, for example, a rapidly increasing in-patient constituency. The Nursing Times reported in 2011: 'We should not dismiss the value of Nightingale wards. [...] I would like the profession to take the positive aspects of the Nightingale wards and see them tailored to today's needs' (David, 2011). The article gained many positive comments from nurses, the majority anonymised, who deduced that the single-room agenda was driven by non-clinicians, policy-makers and the public specifically.

Characteristics of the 'Nightingale' ward
The configuration of a Nightingale ward belongs to the genus of 'pavilion plan' hospitals. The 'pavilion plan' separates hospital wards into discrete, cross-ventilated buildings, connected by a circulation route across one end only. Although the nurse Florence Nightingale is the figure popularly associated with the introduction of this idea into British hospital design at the end of the 1850s, the so-called 'Nightingale' ward resulted from advocacy on the part not only of Nightingale but also of the Scottish surgeon John Roberton and the editor of The Builder, George Godwin (King, 1966). In a paper of 1856 given in Manchester, Roberton criticized the poor ventilation of many British hospitals, suggesting that a better approach was found in the 'pavilion plans' of many continental European examples. He was especially positive about the hospital at Bordeaux, France. Roberton's paper, which was subsequently published with illustrations, informed a critical account by Godwin in The Builder of the recently completed Netley Military Hospital. Godwin continued to deploy and promote Roberton's ideas, with several further articles appearing in The Builder on the subject of hospital ventilation; some were written by Roberton himself (King, 1966; Godwin, 1858) (Figure 1). During this period, Nightingale was preparing answers for the Royal Commission on Barracks and Hospitals, which began taking evidence in May 1857. Her ideas were published in her book Notes on Hospitals (Nightingale, 1859), which also included three papers from The Builder likely to have been written by Roberton (King, 1966). Nightingale's interest in this subject stemmed from her experience of very high mortality rates in the two military hospitals at Scutari during the Crimean War of 1853-56. She deduced that patient recovery was linked to the opportunities to vent out 'bad air'. 'Good' ventilation, i.e. cross-ventilation in her model, would suppress the incidence of 'cross-infection', not understood as bacterial but derived from 'miasmas' (Thompson & Goldin, 1975).
These 'miasmas' developed in part from the exhaled products of the human body, especially when sick, and were 'always highly morbid and dangerous' (Nightingale, 1859). William Farr (1807-83) and fellow sanitary reformers promoted belief in miasma, attributing to it the 1849 cholera epidemic (Halliday, 2001). Nightingale insisted that 'natural ventilation, or that by open windows and open fireplaces, is the only means for procuring the life-spring of the sick - fresh air', adding, 'no artificial ventilation will do this'. This comment may refer to recent attempts to ventilate hospitals mechanically, notably at the fully sealed York County Hospital of 1849, a bellows-driven forced ventilation experiment by Queen Victoria's 'Physician Extraordinary' in which air was driven into the wards and from which it was removed by an aspirator (Burdett, 1893). The failure of the system led to the installation of opening windows in 1859. Nightingale, like Roberton, argued that each hospital ward should be located in a free-standing pavilion, connected to the rest of the hospital by a circulation route crossing one end only of the block. Her ideal was a single-storey pavilion containing just one ward, though she admitted that two-storey wards were acceptable. The pavilions were to be separated by a distance equal to twice the height of the buildings. Beds for 20-32 patients were to be located along each side of the ward perpendicular to the walls, with one window per bed and the windows located opposite each other. A ward for 20 patients would be 80 ft long, 25 ft wide and 16 ft high (Nightingale, 1859), the preferred axis being north-south, with a window for every two beds, the windows consuming at least one-third of the wall surface area, located 2-3 ft off the floor and within 1 ft of the ceiling (Figure 2). Nightingale suggested that the windows might be made of plate or double glass, but heating was seen as a trivial exercise relative to the challenge of ventilation. She derived the required supply of fresh air by calculating the 'miasmatic' emanations from a typical sick soldier: 370 ft³ per day from 16 soldiers and 123 ft³ a night, generating 16 pints of water, the fatal scenario being 'the consequent re-introduction of excrementious matter into the blood through the function of respiration ...' (Nightingale, 1859, p. 11). In fact, assuming the relative humidity to be 50%, replacing this quantity of vitiated air yields a very low minimum air supply rate, some 0.167 litres/second/person, approximately 1/60th of the contemporary standard. Her ideas were replicated nationally. The first completed 'pavilion plan' hospital was the Herbert Military Hospital in Woolwich (begun in 1863), though Blackburn Infirmary had been planned on these lines in early 1858; its completion was delayed and Woolwich was finished first (King, 1966). In fact, Nightingale's version of the continental pavilion hospital plan dominated hospital design in the UK into the 1930s (Thompson & Goldin, 1975) and beyond, sometimes coupled to advanced ventilation strategies (Fair, 2014), though there was a parallel interest in circular ward planning during the 1880s (Taylor, 1988).

Infection control and ward design
Whilst Nightingale did not fully understand the mechanisms of infection, she advocated good ventilation and adequate bed-spacing to reduce disease risk, a principle that still applies today.
Although theories of 'miasma' had been disproved by the 1880s (Ayliffe & English, 2003), ventilation strategies intended to dilute and disperse 'miasma' are now known to be effective also in venting out airborne microorganisms emanating from patients, visitors and staff. Indeed, early advocates of germ theory effectively appropriated the language of the proponents of miasmas in advocating good ventilation, and this move may have aided their success (Tomes, 1998). Evaluating the relationships between design and infection risk requires consideration of transmission routes. The transmission of infection through direct contact and poor hand hygiene is predominantly a behavioural rather than a design issue, though it is likely that good practice is promoted by physical segregation of patients and the proximity of hand basins, as well as by maintaining manageable bed occupancy rates and demands on healthcare staff (Kibbler, Quick, & O'Neill, 1998; Beggs et al., 2006). Indeed, poor management and overcrowding in large open wards has been associated with high rates of healthcare-associated infections (Commission for Healthcare Audit and Inspection, 2007). Understanding of airborne transmission of infection stemmed from the pioneering work of Wells (1935). True airborne transmission occurs when pathogen-carrying particles, typically < 5 μm in diameter, are released through actions such as coughing and sneezing, travel with the air in a space and are then inhaled by susceptible occupants. Tuberculosis, measles and chicken pox are all well-known airborne infections, and there is evidence that influenza (Tellier, 2009; Milton, Fabian, Cowling, Grantham, & McDevitt, 2013) and severe acute respiratory syndrome (SARS) (Yu et al., 2004; Qian, Li, Nielsen, & Huang, 2009) may be transmitted in this way too. Droplet transmission also involves airborne dispersion, but is often regarded as a form of indirect contact transmission, as the mechanism involves deposition of particles onto surfaces leading to environmental contamination. Common infections such as influenza and rhinovirus are thought to be predominantly droplet-borne; however, there is evidence that many hospital pathogens such as C. difficile (Roberts et al., 2008), methicillin-resistant Staphylococcus aureus (MRSA) (Kumari et al., 1998) and norovirus (Marks et al., 2003) may all be spread in this way. Regardless of the exact transmission mechanism, both airborne and droplet-borne transmission involve the release of particles into the air, and risk therefore depends on the fate of the particles. Quantitative evidence directly relating ventilation to airborne infection risk originated in operating theatre studies (Lidwell et al., 1982) and early work on tuberculosis transmission (Riley et al., 1957). Together with a raft of experimental and modelling-based studies, this has led to a consensus that infection does indeed transmit via airborne routes and that ventilation is an appropriate control measure (Li et al., 2007). This body of research has led to the guidance on healthcare ventilation used today (Department of Health, 2007; WHO, 2009; ASHRAE, 2003).
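The classic quantitative link between ventilation rate and airborne infection risk, descending from the Riley et al. work cited above, is the Wells-Riley model; the sketch below is an illustration with placeholder parameter values, not a calculation from this study.

```python
import math

def wells_riley_risk(infectors, quanta_rate, breathing_rate, hours, vent_rate):
    """Wells-Riley probability of infection for a susceptible occupant.
    infectors: number of infectious occupants
    quanta_rate: quanta generated per infector (quanta/h)
    breathing_rate: pulmonary ventilation of a susceptible (m^3/h)
    hours: exposure duration (h)
    vent_rate: outdoor air supply to the space (m^3/h)"""
    return 1.0 - math.exp(-infectors * quanta_rate * breathing_rate * hours / vent_rate)

# Placeholder example: one infector, 10 quanta/h, 0.5 m^3/h breathing rate,
# 8 h exposure, a 300 m^3 ward ventilated at 6 ACH (1800 m^3/h).
print(round(wells_riley_risk(1, 10.0, 0.5, 8.0, 1800.0), 3))  # ~0.022
```

The exponent scales inversely with the ventilation rate, which is why air change rates such as the six ACH figure discussed below are used as a proxy for airborne infection control.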
Resilience of the Nightingale ward building type
The basic resilience of the Nightingale ward pavilions at BRI was established as part of the 'DeDeRHECC' ('Design and Delivery of Robust Hospital Environments in a Changing Climate') project (Lomas, Giridharan, Short, & Fair, 2012). Temperature data were collected in two Nightingale wards at the hospital between 2009 and 2011. Figure 3 depicts the current ward building. Aluminium thermal-break windows, with a central top-hung light limited to a 100 mm maximum opening providing less than 0.09 m² of free area, were installed in the late 1990s to replace the original quadruple-banked steel hopper windows, alternately top- and bottom-hung, which offered approximately 50% free area, some 1.35 m². The data collected indicate that the temperatures in all spaces monitored fell comfortably within the 18-28°C recommendation for hospital wards in HTM 03-01, although the peak external temperature was an undemanding 24.1°C in this period. The DeDeRHECC team report that although night-time temperatures regularly exceeded 24°C, potentially affecting sleep, these occurred largely during the heating season, suggesting that a reduced set-point may reduce the incidence of higher night temperatures. A temperature of 26°C was exceeded for only 3 h in Ward 8 and for 1 h in Ward 9, with an absolute maximum of 27.4°C, despite the much-reduced opening window area. There was no evidence of overheating due to higher summer ambient temperatures or solar gain. The DeDeRHECC team adopted the adaptive thermal comfort standard BS EN 15251 for free-running naturally ventilated buildings as a more reliable indicator of comfort (British Standards Institution) than the current DH guidance. For health buildings it offers bands of tolerance related to vulnerability, Category 1 being the most vulnerable. The recorded data summarize the researchers' position: fewer than 2% of the recorded temperatures exceed the Category I upper threshold, but the data yielded insufficient evidence of resilience to high ambient temperatures. A dynamic thermal model of the ward was therefore developed using Integrated Environmental Solutions modelling software (IES, 2011) and calibrated against observed data to investigate the wards' thermal performance and energy demands. By comparison with other standard NHS building types, a recent paper studying the demand of a 1960s tower building with hybrid ventilation predicted an energy demand for space conditioning alone of 101 GJ/100 m³ (Short, Lomas, Renganathan, & Fair, 2012). Meanwhile, CO₂ emissions for the BRI Nightingale wards are predicted to be about 30 kgCO₂/m² for environmental control purposes which, using the crude adjustment noted above, would uplift to about 53 kgCO₂/m², very significantly less than the Chartered Institution of Building Services Engineers (CIBSE) TM46 benchmark for 'Hospitals; clinical and research' of 129.3 kgCO₂/m² (CIBSE, 2008). Here, then, is a hospital ward type that could deliver the NHS carbon reduction target, but which must be realigned with contemporary service models in order to become an operational solution to the NHS conundrum.

Adaptive options for the Nightingale building envelope
Though good, the performance of the Nightingales can be improved. Earlier DeDeRHECC work described three simple and incremental refurbishment options, summarized in Figure 4. The first option adds 100 mm of insulation to the walls and 300 mm to the roof, opens up the triple-light windows (ensuring safety with an external steel grillage), and provides a sunshade at each opening. Trickle vents are recovered behind a new perimeter heating element for winter ventilation.
The second option adds to this strategy ceiling fans operated by patients, whilst the third option introduces 100 mm diameter high-level air inlets above each bed space, between each window, with a damper and a simple convective heating device fixed to the internal face to enable supply air to be pre-heated and/or recirculated within the space. Primary heating and cooling are delivered through the installation of radiant panels. The addition of radiant cooling eliminates entirely the risk of overheating. The present paper assumes the second option as the base treatment of the envelope, but without operating fans except during summer heatwaves. Prediction of the dispersal of pathogens considers wind effects but not the action of multiple ceiling fans. Annual energy demands and CO₂ emissions of the refurbished Nightingale ward were predicted using the dynamic thermal model and the Bradford 2010 weather file for the summer period, 1 May to 30 September, as recorded in Table 1. The performance of the Nightingale wards in a future climate has been predicted for current and future typical and extreme temperature years: the 2005 test reference year (TRY), containing monthly data typifying Bradford, chaining the most typical January to the most typical February, and so on; and the 2004 design summer year (DSY), depicting the third hottest year in the 22-year string based on the mean temperature recorded between April and September, the 90th percentile. Future weather years were created from the UKCP09 future climate projections by the University of Exeter, assuming an A1B global emissions development scenario, producing TRYs and DSYs for the 30-year periods centred on 2030, 2050 and 2080 for the 5 km grid square covering Bradford. The method used has been fully described by Eames, Kershaw, and Coley (2011). Higher temperatures increase gradually in the TRYs but quite rapidly in the DSYs; the difference between the temperatures in typical and extreme years becomes more pronounced, so that risk-based decisions on the incorporation of mechanical cooling become more complex. The dynamic thermal model was used to predict temperatures in the Nightingale ward as it currently exists and in the refurbished ward as option 2, as appropriate. The internal heat gains, window-opening strategy and control strategies (e.g. for the cooling option) were maintained as for the 2010 analyses described above. Neither the existing nor the refurbished building will overheat in typical years, as judged by the HTM 03 and BS EN 15251 criteria, but in the 2050s warmer night-time temperatures may be experienced (although these might be ameliorated easily with a refined window-opening regimen if the windows are openable to a useful degree in sufficient numbers).
[Figure 4 key: (1) remove stone, insulate (70 mm) and replace stone; (2) high-level 100 mm air inlet ducts through the solid wall; (3) radiant panel for hot and cold water; (4) opening lights in the existing windows, with guards as needed externally; (5) introduce slow wide-span fans above the beds, an option not incorporated into the computational fluid dynamics (CFD) analysis; (6) shading and lightshelves of perforated white powder-coated aluminium to suppress glare and achieve a more even daylight distribution; and (7) seal vents and remove radiators/convectors as the changing climate requires some measure of radiant cooling about 2050. Source: freeze-frame image, same as Figure 3.]
[Table note: highlighted results show that the exceedance is deemed important in that it could not be easily corrected by refining the control strategy.]
[Table note: it is assumed that during the winter (October-April) the space will not overheat due to elevated ambient temperatures and solar gains, so the exceedance limit for BS EN 15251 is > 438 h above the upper category threshold.]
However, although HTM 03 shows overheating occurring in the existing building and with refurbishment option 1 in the DSYs as early as the 2030s, the BS EN 15251 approach indicates that the refurbishment options that do not incorporate cooling will remain comfortable in both typical and extreme years right up to the 2080s. The existing building is predicted to overheat, based on the Category I thresholds, during the 2050s. The addition of mechanical cooling (i.e. the radiant ceiling innovation in option 3) would ensure there is no overheating in either typical or extreme temperature years right up to the 2080s. The analysis summarized here indicates that the inherent resilience of the Nightingale wards, together with the northerly location of Bradford (and thus modest summertime temperatures even in the 2080s), would enable a passive retrofit to succeed in producing a building that is comfortable until towards the end of this century. However, it would be essential also to offer a reconfiguration of the open wards to align the spaces more closely with the NHS Modernization policy objectives. The rest of the paper presents and tests options for such reconfiguration.

Infection control analysis of Nightingale wards
Studies assessing airflows and infection risk in naturally ventilated wards are few, not least because of the challenges involved in measuring airflows. Escombe et al. (2007) showed that in some cases ventilation rates in naturally ventilated spaces could be much higher than those achievable by mechanical ventilation, while Qian et al. (2010) conducted a detailed study in a naturally ventilated ward in Hong Kong and showed the dependency of the ventilation rate on the external wind conditions. Of relevance to the current paper is a recent study conducted in a Nightingale ward at St Luke's Hospital, also in Bradford (Gilkeson, Camargo-Valero, Pickin, & Noakes, 2013). This is an older ward than those considered at BRI, but the construction, orientation and ventilation strategies are similar. Using CO₂ as a tracer gas, this study showed that with external wind speeds of 1.0-4.0 m/s, ventilation rates of 3.4-6.5 air changes per hour (ACH) were achieved in the ward with only 60% of the windows open. Moreover, the study explored the influence of partitions between beds on the distribution of a tracer released at a representative patient location on the leeward or windward side of the building. Results considered the total exposure to the tracer over 25 min at different patient locations. In an open ward this was seen to be relatively uniform, suggesting a reasonable degree of mixing. With partitions in place the results showed a redistribution of the tracer, with higher exposure close to the source and in the bed immediately opposite, but lower exposure in neighbouring beds. However, the overall exposure in the ward was comparable in both cases.
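Tracer-gas studies of this kind typically infer the air change rate from the exponential decay of CO₂ concentration after release; the sketch below illustrates the standard concentration-decay calculation with made-up numbers, not data from the St Luke's study.

```python
import math

def ach_from_co2_decay(c_start, c_end, hours, background=400.0):
    """Air changes per hour from tracer decay in a well-mixed space.
    Concentrations in ppm; 'background' is the assumed outdoor CO2 level."""
    return math.log((c_start - background) / (c_end - background)) / hours

# Made-up example: CO2 falls from 2000 ppm to 900 ppm in 15 minutes.
print(round(ach_from_co2_decay(2000.0, 900.0, 0.25), 1))  # ~4.7 ACH
```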
If exposure can be considered analogous to infection risk, then these results suggest that the cross-ventilated Nightingale ward, with the right wind conditions, is capable of achieving ventilation rates comparable with the six ACH recommended by the Department of Health (2007), and that partitioning the ward may be feasible without substantial impact on infection risk.

Adaptive reconfiguration options
These basic options were initially outlined in schematic form and then developed in consultation with staff at BRI, examining patient safety, supervision, the cycle of inspection, the dispensing of medicines and treatments, the distribution and collection of food, and the night-time care model with reduced staff. It emerged that a key factor in considering the future of the Nightingale wards as functioning clinical spaces was the perceived risk of airborne cross-infection, both in the open ward and in variants offering some subdivision. The relative safety of the propositions was evaluated using a computational approach to predict dwell times for air in the region of patients and their neighbours, giving some insight into the infection control implications.

Airflow simulation approach
While ventilation and infection risk are clearly related, assessing the ventilation performance of a hospital ward, particularly a naturally ventilated one, is not straightforward. Tracer gas experiments offer a possible approach to assessment; however, they cannot be carried out in occupied spaces and the methodology is not straightforward (Escombe et al., 2007; Gilkeson, Camargo-Valero, Pickin, & Noakes, 2013). Moreover, it is not generally feasible to use such an approach to explore design changes. Computational fluid dynamics (CFD) is a numerical simulation approach that can be used quickly to explore building airflows and ventilation effectiveness under a range of different circumstances. By discretizing the mass, momentum and energy equations that govern fluid flow across a mesh of elements representing the room geometry, it is possible to evaluate the spatial distribution of parameters such as velocity, pressure and temperature within a building. While computational resources often limit simulations to idealized steady-state scenarios, the approach is widely used in building airflow assessment, including hospital-based studies (Tang et al., 2011). Here a series of CFD simulations were carried out to investigate the impact of the various ward internal redesigns on both ventilation characteristics and thermal comfort.

Model geometry and mesh
For consistency, each ward configuration is based on the same plan area of 10 × 8 m and the height remains constant at 3.7 m, leading to a ward air volume of approximately 300 m³. Figure 6 illustrates the simplified ward layouts considered in the study, derived from those proposed in Figure 5. The traditional Nightingale ward model, Figure 6(a), is a section of the full ward consisting of six beds (three per side), each measuring 2.0 × 1.0 × 0.7 m. Figure 6(b) shows the second configuration with full-height partitions between beds; previous studies have underlined the potential of these as an infection-control measure (Noakes, Sleigh, Escombe, & Beggs, 2006; Gilkeson et al., 2013). In the Pullman-style layout, Figure 6(c), the beds are arranged into three pairs and the doors to each compartment are assumed to be open, allowing for cross-ventilation via the corridor.
Figure 6(d) shows the zigzag scheme comprising an island of skewed beds together with 2.1 m-high partitions segregating the immediate vicinity of each bed. Although not shown in Figure 6(d), a slight variation of the skewed island design is also considered, with both the partitions and beds raised above the ground by 0.15 m. The final configuration, Figure 6(e), is similar in layout to the traditional ward, subdivided to achieve gender separation, but with the beds positioned on the end walls (as opposed to the side walls) and with the addition of an external access corridor. A notable feature of this layout is the asymmetric distribution of windows; only two are present on the windward side of the ward (the central one is missing to cater for a toilet/shower room), whereas a total of six are present on the leeward side, i.e. three on the corridor and three supplementary windows positioned above it. Computer-aided design (CAD) models were developed using Ansys Workbench, version 13.0.0 SP2 (Ansys Inc.). Figure 7 shows the grid structure for the 'Pullman' ward layout. The global cell count ranged from 0.8 to 1.5 million cells depending on the ward layout.

Boundary conditions
Cross-ventilation is modelled by treating the open windows as rectangular inlets, each measuring 0.15 × 1.00 m. On the windward side an inlet velocity, U_IN, was imposed with a magnitude of 0.5 or 1.5 m/s. For the traditional ward these inlet velocities correspond to ventilation rates of two and six ACH, respectively. Air was assumed to enter at an angle of 45° to the horizontal axis. This methodology has been shown to replicate experimentally determined flow patterns through casement windows in a Nightingale hospital ward (Gilkeson et al., 2011). All ward configurations were assumed to contain similar small radiators. With the focus of this study being on the relative performance of ventilation and thermal comfort between wards, small geometrical details (e.g. lighting and equipment) are neglected. For the same reason patient geometry is omitted; however, the typical thermal output per patient is accounted for using an appropriate temperature applied to the upper surface of each bed. Thermal boundary conditions for the summer and winter cases are given in Table 2.

Solution process
Steady-state airflow simulations were run in parallel on a high-performance computing cluster with 16 processors (2 × dual quad-core 2.26 GHz Nehalem processors and 24 GB of RAM) using Fluent, version 13.0 (Ansys Inc., 2013, http://ansys.com/products/fluid-dynamics). Solutions for the governing fluid flow equations (continuity, momentum and turbulence) were computed using second-order discretization and the standard k-ε turbulence model in conjunction with standard wall functions and the SIMPLE algorithm. Simulations ran for 10 000 iterations, during which the absolute residual levels dropped by between four and six orders of magnitude, ensuring good convergence. Although the simulations were steady-state, it is possible to exploit the resulting velocity field and determine the residence time in each region of the air volume being studied. In order to achieve this, a passive scalar equation was solved in conjunction with domain-wide source terms, which enabled the distribution of the local ventilation rate, V_L, to be found.
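As a quick consistency check of these boundary conditions, the stated inlet velocities can be converted into air change rates from the opening geometry; the arithmetic below is our own, and the number of windward inlets (three) is an assumption not stated explicitly above.

```python
import math

# Assumed: three windward openings of 0.15 m x 1.00 m, inflow at 45 degrees,
# ward air volume ~300 m^3 (all but the inlet count are stated in the text).
n_inlets, area_each, volume = 3, 0.15 * 1.00, 300.0

for u_in in (0.5, 1.5):  # m/s, the two imposed inlet velocities
    q = n_inlets * area_each * u_in * math.cos(math.radians(45))  # m^3/s
    print(f"U_IN = {u_in} m/s -> {q * 3600 / volume:.1f} ACH")
# Prints ~1.9 and ~5.7 ACH, consistent with the quoted two and six ACH.
```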
Airflow simulation results
Altogether eight different ward-ventilation configurations are considered, summarized as follows: (a) traditional, windward ventilation; (b) partitioned, windward ventilation; (c1) Pullman style, windward ventilation; (c2) Pullman style, leeward ventilation; (d1) skewed island, windward ventilation; (d2) raised skewed island, windward ventilation; (e1) external corridor, windward ventilation; and (e2) external corridor, leeward ventilation. The rationale for simulating leeward ventilation exclusively for the Pullman (c) and external corridor (e) wards is that these are the only asymmetric ward configurations, and it is therefore important to consider both potential wind directions. In the analysis of ward airflows three ventilation parameters are considered, namely: (1) V_G, the global ventilation rate through each ward; (2) V_B, the ventilation rate local to each bed, calculated in a spherical volume of 1.0 m diameter in the breathing zone of a lying patient; and (3) V_L, the local ventilation rate measured in each individual cell of the grid.

Airflow patterns
Figure 8 shows the observed airflow patterns in the form of path lines released from the inlet windows, coloured by relative ventilation rate. These reveal the complexity of indoor airflow even for relatively simple cross-ventilation. The traditional (a), partitioned (b) and skewed island (d1, d2) flow fields are all very similar. They are dominated by the inlet flows, which are guided cross-ward by the ceiling. A small proportion of this fresh air directly bypasses the ward and exits through the outlet windows, whereas the remainder mixes in the patient zone before exiting. The asymmetry present in the Pullman (c1, c2) and external corridor ward (e1, e2) geometries leads to differing flow fields depending on the flow direction. For both of these wards, leeward ventilation leads to substantial mixing regions in each respective corridor, with significantly reduced air movement in the vicinity of the hospital beds. The asymmetry and greater geometric complexity present in these particular designs therefore do not lend themselves to stable and uniform ventilation characteristics. In contrast, the simpler designs appear to be very effective solutions and are more suited to natural cross-ventilation.

Bed-level ventilation rates
Figure 9 shows the distribution of bed-specific ventilation rates for all ward configurations under the influence of low wind speeds (i.e. 0.5 and 1.5 m/s, respectively). For proposals (a) to (d) the two imposed wind speeds lead to global ventilation rates of two and six ACH, respectively; the inclusion of partitions does slightly reduce the air volume in some cases, but the overall effect on V_G is minimal. The addition of the corridor to the extended ward (cases (e1) and (e2)) increases the overall volume from 300 to 346 m³, which reduces V_G accordingly (constant wind speeds are assumed in all cases). Considering the results for the traditional ward (Figure 9(a)), beds A, B and C experience uniform ventilation rates, which correspond to the ventilation supplied by the adjacent inlet windows. On the opposite side of the ward the central bed (E) experiences an increase in ventilation for V_G = 2 ACH, notably higher than that experienced by beds D and F at either side. This is attributable to the airflow at either end of the ward being slowed down by a boundary layer attaching itself to the end walls.
In contrast, the higher wind speed (V_G = 6 ACH) leads to proportionately higher ventilation rates for beds D and F, which suggests that the distribution of ventilation rates at this side of the ward is sensitive to the wind speed. In the partitioned ward case (Figure 9(b)) there is a distinction in the observed ventilation rates between the two sides of the ward, but little variation along the inlet and outlet walls. The presence of the partitions serves to stagnate the airflow as it progresses cross-ward, leading to noticeable reductions in V_B on the leeward side. For the Pullman-style hospital ward (Figure 9(c)) there is a clear variation in ventilation rate throughout the ward. Here, the maximum local ventilation rate observed for the high wind speed is almost nine ACH, which is 50% higher than the supply rate. This occurs above bed D, which is directly opposite the central inlet window and adjacent to the door opening. It follows that as the airflow passes this central bay, it rushes through the door opening, thereby increasing the ventilation rate local to bed D. In contrast, the leeward ventilation regime shows that when the air enters from the opposing side of the building, the route of entry to each bay is poorly defined, and this leads to reduced air movement and thus lower ventilation (Figure 9(d)). Results from both skewed island cases (Figure 9(e, f)) show that the raised partitions have very little impact on the observed ventilation rates. The differences in V_B from bed to bed are minimal, which underlines the uniformity of the airflow in the centre of the ward. This characteristic can be expected because the beds are clustered centrally, away from the regions experiencing high flow gradients, i.e. near inlets, outlets and perimeter walls. Overall, the ventilation rates are slightly lower than those observed in the traditional ward; however, the uniformity of flow suggests improved stability in the ventilation conditions (assuming straight cross-ward ventilation). The ventilation rates seen in the external corridor ward are fairly constant under the influence of windward ventilation (Figure 9(g)). This is explained by the fact that the beds line both end walls of the ward, each of which is supplied with fresh air from the same window. A point of note is that the absolute ventilation rates are significantly lower for this case because, as described above, the central window on the windward side of the building is omitted due to the presence of a toilet/shower room. There are six windows on the opposing side of the building, which explains the dramatically higher ventilation rates seen for the leeward case (Figure 9(h)). The peak ventilation rate is approximately 14 ACH, occurring by bed C, which is just behind the doorway connecting the corridor to the ward. As with the Pullman layout, leeward ventilation for the external corridor case leads to a significant variation in ventilation from bed to bed; however, in this case the variability stems from the interaction between the three high-mounted inlets and the inflow through the corridor opening.

Local ventilation rates
Whilst the bed-specific ventilation rates give a clear indication of the air exchange rates that can be expected by patients, analysing the distribution of local ventilation rates throughout each ward is a convenient way of comparing the designs. This was achieved using a horizontal plane spanning each ward, with 7700 analysis points situated at a height of 1.2 m above the ground.
Figures 10 and 11 present histograms of the distribution of V_L in this plane for the low and high wind speeds, respectively. For the low wind speed cases, the most striking feature is the range of V_L observed, which is particularly broad for the Pullman ward (Figure 10(c) and (d)) and the extended ward with leeward ventilation (Figure 10(h)). These wider ranges are indicative of poorer mixing, which leads to a greater spectrum of data. In contrast, the markedly narrower spectra observed for the traditional ward (Figure 10(a)), the skewed-island designs (Figure 10(e) and (f)) and the external corridor ward with windward ventilation (Figure 10(g)) indicate greater mixing with a more even distribution. It is also noticeable that in the skewed island design the peak of the distribution is below two ACH, the supply ventilation rate for the ward, compared with slightly above it in the traditional open layout. The same overall trends are seen in the histograms for the high wind speed of 1.5 m/s in Figure 11, which shows that the distribution of ventilation rates is insensitive to wind speed. Although the histograms provide a wealth of absolute ventilation rate data, Figure 12 shows contour plots of the relative ventilation rate, V_L/V_G, which is a measure of how well the inlet flows reach the various parts of the wards. For V_L/V_G = 1.0 the ventilation rate at any given point is equal to the global value, V_G; when V_L/V_G < 1.0 the local ventilation rate is below V_G; and for V_L/V_G > 1.0 the ventilation is greater than V_G. As would be expected, the contour plots clearly show a high relative ventilation rate near the inlets, and in most cases this extends along the roof where the inlet flows are entrained. The range of relative ventilation rates is relatively small for the traditional, partitioned and skewed island designs (Figures 12(a), (b), (c) and (d)), whereas great variability is present in the remaining cases. In particular, the Pullman ward (leeward ventilation) exhibits very high air exchange rates in the corridor adjacent to the inlets, while the bay walls restrict airflow, thereby slowing the flow rates through the actual bays.
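A histogram of this kind is straightforward to reproduce from sampled point data; the sketch below uses synthetic placeholder values standing in for the CFD samples.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for the 7700 local ventilation rates sampled on the
# 1.2 m analysis plane; real values would come from the CFD passive scalar.
rng = np.random.default_rng(0)
v_local = rng.gamma(shape=4.0, scale=0.5, size=7700)  # ACH, placeholder
v_global = 2.0                                        # ACH supply rate

plt.hist(v_local, bins=40)
plt.axvline(v_global, linestyle="--")  # mark V_G for comparison
plt.xlabel("Local ventilation rate V_L (ACH)")
plt.ylabel("Number of analysis points")
plt.show()
# A narrow peak near V_G indicates good mixing; a broad or skewed
# distribution indicates short-circuiting or stagnant zones.
```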
Thermal characteristics
In analysing the temperatures in the ward, two parameters were of interest, namely: (1) the mean of the local bed-level temperatures (i.e. the average over all six beds), T_B-AVE (°C); and (2) the difference between the minimum and maximum bed-level temperatures, ΔT (°C). Tables 3 and 4 show the data for summer and winter conditions, respectively. Overall, the difference in average temperatures between wards during summer is relatively small, whereas the differences are more pronounced during winter. The greater variation in winter occurs by virtue of the greater temperature ranges present; the incoming air has a temperature of 12°C and the radiator temperature is 45°C, whereas the summer conditions assume the inlet air matches the wall temperatures (22°C) and the radiators are turned off. As the ventilation rate increases, the temperatures generally drop as the faster airflow removes more heat. For summer conditions the warmest ward is the Pullman layout (windward ventilation), and the greatest variation in bed-to-bed temperatures also occurs for this layout but with a leeward ventilation regime; the latter observation reinforces the view that this ward induces the greatest variability in ventilation flows (and thus in the temperature distribution). For winter conditions similar trends are observed for the traditional ward; in reality, variation in surface temperatures and the proximity of patients to radiators and external walls will influence thermal comfort as well as the air temperatures.

Functional evaluation of the adaptation options
Figure 13 records the original plan of the BRI Nightingale wards. The adaptation options were interrogated by clinical staff at BRI responsible for the care of elderly patients within Ward 3, a Nightingale ward in its original configuration but broadly divided by gender into three sections. Those consulted were the divisional general manager, a nursing sister, the head of nursing, a consultant in elderly care and various staff nurses. They all reported positive operational benefits in caring for the elderly from the open arrangement, echoing Maben: for the exceptionally frail and fearful, the other patients are a resource, and intervisibility between patients is a positive benefit. Staff were keenly aware of coroners' reports into ward deaths criticizing the use of side rooms, the verdicts recording that patients were not viewed sufficiently often and were generally isolated. Divisional General Manager Neary believed that '"security" was equated directly with "dignity"', using 'security' in the broadest sense to include infection risk, and that this was a misconception. Patrolling of wards had been phased out but was likely to be reintroduced. Relatives needed to be able to locate staff quickly. Ease of circulation within the ward was critical: nurses' stations were more effective as mobile trolleys, particularly for the patient admission process; 'crash' trolleys for use in treating cardiac arrest needed to be 'on hand', not parked in corridors; relatives should not be parked in corridors; and trolleys bringing food and drink, dispensing pharmaceuticals, and bearing X-ray and ultrasound equipment were all required to pass one another. Medical students progressed in groups of six at BRI. The central aisle of the Nightingale enables staff to assess patients' walking capability in judging their potential for home discharge. Discussion of the adaptation options recorded the following observations by the staff and allowed the options to be developed further in light of their likely operational use.

- Subdivision of the ward to create single-bedded bays. This approach moves the ward closer to the DH ideal (and, ironically, is the pattern of the mediaeval hospital, e.g. the Heiligen Geist in Lubeck). Cubicles are formed in full-height partitioning, with the doors likely to be maintained open. Twelve beds can be accommodated with a nurses' station, rather than the 20 currently accommodated. Dimensionally, the subdivision of open Nightingale wards into single rooms with dedicated bathrooms is inherently inefficient, not least because of the minimum bed spacing dimension and therefore the window spacing dictated by Nightingale. The drawing depicts the arrangement within a Nightingale pavilion at Bradford somewhat narrower than the norm, so that circulation is restricted, but in a full-width ward the additional 2-3 ft would enable greater functionality. Access to bathrooms remains limited; providing external bathroom pods to each stack of rooms would remove almost all glazing, so four are shown, one at each corner of the ward (Figure 14). The BRI clinical staff opposed this configuration for elderly care but were more positive about its potential for short-stay, day-case elective surgery.
Nonetheless, many of the positive benefits associated with the open-ward arrangement are negated.
• Subdivision of the ward to create multi-bedded bays on one side of the ward and a corridor on the other This approach is reminiscent of a traditional first-class 'Pullman' railway carriage, forming six compartments of two beds. Each two-bed ward is provided with a bathroom pod, a significant step towards the NHS modernization aspiration. The additional width of a standard ward would ease the space available for the ward rooms (Figure 15). BRI clinical staff were enthusiastic. Rooms could be organized by gender but on a flexible basis, and beds could be offset to break immediate intervisibility.
• Centralization of the beds This option places a wardrobe-height spine bulkhead, carrying water supply and other services required at the bedhead, along the centre of the wards, with 14 beds facing the windows. The Cabinet Office showed some enthusiasm for this configuration, which the authors do not claim to have invented.3 It is vaguely reminiscent of an airline business-class cabin arrangement. Patients no longer face each other but enjoy a view protected from direct solar gain and glare, retain some contact with patients to either side, and can receive visitors in relative privacy. Each patient has an individual washbasin, wardrobe and drawer unit. Tracked curtains could provide complete visual, if not acoustic, privacy for treatment. Discussion with hospital staff suggested that earlier versions with a straight central spine could develop a 'zigzag' geometry in plan, with beds offset for greater privacy, enabling beds to be pushed in and out of the ward to operating theatres, imaging suites and treatment rooms (Figure 16). BRI staff were unconvinced that this arrangement offered benefits for elderly care because there is no direct intervisibility between beds. 'Wayfinding' may be difficult for the confused, delirious and alcoholic, who would have difficulty identifying their allocated bed. However, the arrangement provides an interesting compromise for care of the less vulnerable, preserving the beneficial airflows observed within the original type.
• Externalized circulation An external corridor is added to the eastern perimeter of each floor, steel-framed with timber stud construction clad in lightweight materials, insulated to contemporary UK Building Regulation standards, and accessed through doorways formed in window openings cut down to floor level. The design attempts to recover opening window area on the corridor side, but the asymmetry affects the dispersion of pathogens, as reported above. The removal of circulation from the main body of the building releases spaces to take three to five beds, with each ward served by a new bathroom tower (Figures 17-19).
Here the additional width gained within the ward rooms is highly beneficial; the full width available affords adequate space between beds. It emerges, therefore, that different ward arrangements are appropriate for different care purposes, that more open arrangements may have higher functionality, if only in patient safety terms, and that this safety is less likely to be imperilled by an enhanced risk of airborne cross-infection than hitherto.
[Key to the externalized circulation detail (Figures 17-19): (1) windowsill dropped to the floor and the opening widened to form a door into the ward; (2) upper section of the window opening retained to vent/light directly into the ward; (3) floor to new external circulation, 200 mm insulation; (4) steel hangers carry additional corridors and suspend steel frames of added circulation; (5) steel-framed roof construction, 300 mm insulation; (6) elevation of the added circulation element; and (7) horizontal centre-pivot windows retain the former opening window areas in the Nightingale ward behind.]
Costs
Table 5 records elemental costings of the four fundamental refurbishment options per floor. These costs were calculated by DeDeRHECC project partners Davis Langdon AECOM. Services renewals account for 40-50% of overall construction cost, weighting the Skewed island option in particular. Table 6 reveals, however, that the Skewed island and Pullman options are the less cost-intensive in costs per m², at 60-70% of the equivalent new-build cost. The benefit of reduced contract time is only indicated indirectly by the allowances for main contractor preliminaries. Six months to one year to recover lost bed spaces may be saved, with additional value-for-money (VfM) benefits. Table 7 records discounted lifecycle energy costs for the base case 'do nothing' option against option 1. Little lifecycle cost difference between the various refurbishment options emerged. The only major difference affecting energy performance between the 'do nothing' case and the four other options is the addition of insulation, resulting in a reduction in heating energy. The four options, then, by reconfiguring patient environments to offer more privacy and dignity, recover inherently resilient buildings. Savings in energy use are available but trivial in comparison with other operational costs. More significant are the potential savings from avoiding mechanical cooling installation later in the century. Current UK Treasury VfM models are unable to include such future savings, let alone intangibles such as business continuity or reduced mortality as a consequence of more effective observation. Note: Refurbishment costs include external insulation installed behind the existing outer stone skin, allowing for removal and reinstatement, at £150,000. Source: Derived from the Davis Langdon AECOM cost database. Notes: (1) Costs represent energy costs for one floor only to allow a comparison to be made with the cost model. Simplistically, these energy costs could be prorated by floor area for multiple floors. (2) Energy costs only include energy demands included in the Loughborough model. It is believed that these energy demands only include space heating, some small power and lighting. They therefore do not include various other energy demands commonly found in hospitals, such as medical equipment. (3) No maintenance or replacement costs are included, since these are thought to be similar between the base case and the various architectural/building services options.
(4) Retail energy costs are taken from Department of Energy and Climate Change projections (IAG guidance). (5) The reduction in costs is mainly a result of the addition of insulation, which should be seen in the light of the estimated £150,000 cost for insulation, rather than the full costs of refurbishment, which result in functional improvements (improved privacy, for example) but do not have a benefit that could be monetized easily.
Conclusions
This paper has amplified previous work on the resilience of Nightingale wards by analysing new internal configurations, devised in close consultation with clinicians. These arrangements all introduce higher levels of privacy and dignity (in line with current policy) into the resilient envelopes of Nightingale ward buildings without compromising that innate resilience. Insofar as the desire to achieve greater control of airborne pathogens drove recent disenchantment with Nightingale wards, the reconfiguration options are modelled to predict patterns of airflow and likely concentrations of pathogens in air with a higher dwell time. Model results indicate that good cross-ventilation is still possible with adjustment to the internal layout, provided care is taken to ensure there are appropriate ventilation openings to enable through-flow of air. This is particularly important in the two cases with a corridor at the side of the patient rooms (options c and e), where the presence of an internal wall can act to block effective airflows. The construction detail of the adaptation schemes is developed and costed in detail; the cost is significantly lower than new-build alternatives and the schemes are quicker to deliver. Net present value calculations using the UK government discount rate are undertaken to predict payback periods, the fundamental data required by HM Treasury in determining viable policy. The authors argue for a review of policy as the deep financial retrenchment required of the NHS redirects emphasis towards refurbishment. Current economic circumstances place a particular premium on light-touch refurbishment of NHS sites; similarly, a wholly reasonable concern to improve the patient experience also means that cosmetic changes can be favoured over more substantial interventions. In addition, Trusts are reluctant to lose capacity and have concerns about construction noise and dust transmission. The options presented here address these concerns whilst delivering fundamental improvements in privacy and dignity, demonstrating that, with sensible reconfiguration, Florence Nightingale's original approach might yet be viable for the 21st-century NHS. Indeed, this work implies that designers might productively re-evaluate techniques and strategies for good ventilation first deployed in the pre-modern era in order to answer the challenges of contemporary low-energy architecture.
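The discounted lifecycle comparison behind Tables 5-7 and the net present value calculations mentioned above can be illustrated with a minimal Python sketch. The discount rate used here (3.5%, the HM Treasury Green Book central rate) and all cost figures are assumptions for illustration only, not the paper's data.

def npv(annual_costs, rate=0.035):
    """Net present value of a stream of annual costs (year 1 onwards)."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(annual_costs, start=1))

base_case = [12_000] * 25   # 'do nothing': placeholder heating cost per floor, 25 years
refurb    = [9_500] * 25    # option 1: placeholder reduced heating after insulation

saving = npv(base_case) - npv(refurb)
payback_years = 150_000 / (12_000 - 9_500)   # insulation cost vs annual saving
print(f"Discounted saving: £{saving:,.0f}; simple payback: {payback_years:.0f} years")

Consistent with the observation above, the energy saving attributable to insulation alone implies a long simple payback relative to the £150,000 installation cost; the case for the options rests on the functional and resilience benefits rather than on energy savings.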
A Salient Object Detection Method Based on Boundary Enhancement
Visual saliency refers to the human ability to quickly focus on important parts of the visual field, which is a crucial aspect of image processing, particularly in fields like medical imaging and robotics. Understanding and simulating this mechanism is crucial for solving complex visual problems. In this paper, we propose a salient object detection method based on boundary enhancement, which is applicable to both 2D and 3D sensor data. To address the problem of large-scale variation of salient objects, our method introduces a multi-level feature aggregation module that enhances the expressive ability of fixed-resolution features by utilizing adjacent features to complement each other. Additionally, we propose a multi-scale information extraction module to capture local contextual information at different scales for back-propagated level-by-level features, which allows for better measurement of the composition of the feature map after back-fusion. To tackle the low-confidence issue of boundary pixels, we also introduce a boundary extraction module to extract the boundary information of salient regions. This information is then fused with salient target information to further refine the saliency prediction results. During the training process, our method uses a mixed loss function to constrain model training at two levels: pixels and images. The experimental results demonstrate that our boundary-enhanced salient target detection method performs well on targets of different scales, multiple targets, linear targets, and targets in complex scenes. We compare our method with the best existing methods on four conventional datasets and achieve an average improvement of 6.2% on the mean absolute error (MAE) indicator. Overall, our approach shows promise for improving the accuracy and efficiency of salient object detection in a variety of settings, including those involving 2D/3D semantic analysis and reconstruction/inpainting of image/video/point cloud data.
Background
In recent years, deep convolutional neural networks (CNNs) have demonstrated exceptional performance in various visual tasks and are highly valuable for applications. However, they often perform poorly when evaluated on datasets with distributions different from the training set and are susceptible to overfitting. To enhance model generalization, numerous data augmentation methods and regularization strategies have been proposed. Salient target detection involves detecting and segmenting the most noticeable object area in an image, making it a two-stage visual task. Although most methods perform both stages simultaneously, this paper takes a deep learning approach that captures different levels of features for salient target detection. By training on a dataset and utilizing the image's salient information to generate diverse data at the feature level, our proposed model avoids generating interference data while maintaining the key information of the image. This approach improves the model's performance on the test set. Salient object detection (SOD) is a fundamental problem in computer vision, which aims to automatically identify and localize the most visually conspicuous regions in an image or a scene. The demand for SOD has been steadily growing due to its wide-ranging applications in various fields. In the field of medical imaging, SOD plays a critical role in assisting medical professionals in diagnosing diseases and abnormalities.
By accurately detecting salient regions in medical images, such as MRI or CT scans, SOD can aid in identifying important structures, lesions, or anomalies, improving medical diagnosis and treatment planning. In robotics and autonomous systems, visual attention mechanisms inspired by human saliency are of paramount importance. For robots to efficiently navigate and interact with their environment, they need to focus on relevant objects or regions. SOD helps robots prioritize salient visual information, enhancing their decision-making and perception capabilities. Furthermore, SOD finds applications in video analysis and surveillance, where the detection of salient objects in a sequence of frames can aid in tracking moving objects or identifying potential threats. Additionally, in fields like human-computer interaction and augmented reality, SOD contributes to creating more immersive and intuitive user experiences by highlighting relevant elements of interest. The ability to efficiently detect salient objects is also valuable in image and video editing tasks, such as object removal, background inpainting, and image retouching. By accurately identifying salient regions, these editing tasks can be performed more effectively and with higher visual quality. Overall, the significance of SOD in image processing lies in its ability to mimic human visual attention, enabling efficient and effective analysis of visual information. By identifying salient objects and regions, SOD improves the performance and accuracy of various image-related applications, including medical diagnosis, robotics, human-computer interaction, and content-based retrieval. Its integration into these fields can lead to breakthroughs in technology, making systems more intuitive, responsive, and capable of handling complex real-world scenarios. Given the diverse applications of SOD, there is a growing demand for robust and efficient algorithms that can handle different types of images, scenes, and visual challenges. The proposed method based on boundary enhancement addresses some of the limitations of existing techniques and offers improved detection performance, making it a valuable contribution to the field.
Problem Description
Scale variation constitutes one of the key challenges of the SOD task, and it is hard for CNNs to handle owing to the constraint of the downsampling operation. Different layers of features only possess the ability to handle specific scales, and the level of target information contained in features with different resolutions varies. In the top-down path, the features of every layer are produced horizontally, and the approach of upsampling to a unified resolution and then merging utilizes only a single-resolution feature at each layer, which falls short of coping with problems at various scales. To tackle these issues, the Amulet network employs shrinking and expanding operations to change feature sizes and fuses features from all levels at each level of the transmission layer to make effective use of multi-scale information [1]. However, this fusion method tends to generate redundant information and noise interference. Additionally, atrous spatial pyramid pooling (ASPP) [2] and the pyramid pooling module (PPM) [3] are two common options for multi-scale information extraction, typically fixed at the last layer of network feature extraction. These methods also struggle to handle small-scale objects effectively, as deep features, and especially top-level features, contain less information about such objects.
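As an illustration of the ASPP idea referenced above [2], the following minimal PyTorch sketch shows how parallel dilated 3 × 3 convolutions sample the same feature map with different receptive fields; the channel widths and dilation rates here are illustrative assumptions, not the exact configuration of any of the cited methods.

import torch
import torch.nn as nn

class ASPPBlock(nn.Module):
    def __init__(self, in_ch=256, out_ch=256, rates=(2, 4, 8)):
        super().__init__()
        # padding == dilation keeps the spatial resolution unchanged
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates)
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        # each branch sees a different effective receptive field (1 + 2r pixels wide)
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        return self.project(multi_scale)

x = torch.randn(1, 256, 32, 32)
print(ASPPBlock()(x).shape)   # torch.Size([1, 256, 32, 32])

Because all branches preserve resolution, their outputs can be concatenated and projected back to the input width, which is what makes such blocks easy to drop into an existing network.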
The feature extraction process in pixel-level saliency often results in the loss of detailed information and unsatisfactory boundary areas for salient targets. To address this issue, some methods recursively refine high-level features using low-level local information [4,5]. The method proposed by Zhao et al. [6] provided direct supervision of salient object boundary detection, taking into account both salient boundary information and salient objects. Other methods [7,8] used superpixels for preprocessing to extract boundaries before saliency detection, or adopted conditional random fields (CRF) for postprocessing of saliency prediction maps to preserve object boundaries. However, these methods require additional processing and have relatively low efficiency. Cross-entropy loss is a commonly used loss function, but it treats each pixel equally without giving particular weight to boundary pixels, leading to blurred boundaries. The method proposed by Qin et al. [9] addressed this issue by modifying the loss function to emphasize the pixels in the border area. This study proposes a salient object detection method based on boundary enhancement to address these challenges. The method builds on the Feature Pyramid Network (FPN) [10], using a multi-level feature aggregation module to aggregate features from adjacent layers and enhance the expressive power of features at different resolutions. A multi-scale information extraction module is then applied to further improve the expressive ability of the features; the module is implemented in each layer of the decoder and can be easily adapted to other network models. In addition, the method explicitly models the boundary of salient objects and fuses salient boundary information with salient object information to supplement the latter. The method utilizes a set of mixed losses for supervised training of the saliency detection task, further enhancing the detection effect and highlighting the salient regions uniformly. The proposed method is evaluated on four public saliency datasets and outperforms many existing mainstream methods on multiple indicators. Certain techniques, such as multi-level feature aggregation and multi-scale information extraction [11][12][13], have been explored in prior research on SOD. While the proposed method shares common modules with these earlier publications, its main contributions lie in the innovative combination and adaptation of these techniques to address specific challenges in SOD. The primary contributions of our work can be summarized as follows. (1) Boundary Enhancement Technique: Our method introduces a novel boundary extraction module, which enhances the detection accuracy of salient objects by effectively capturing the boundary information of the regions of interest. This technique improves the precision and robustness of the saliency prediction results. (2) Multi-level Feature Aggregation: While multi-level feature aggregation has been used in previous works, our method introduces a specific multi-level feature aggregation module that addresses the challenge of large-scale variation of salient objects. It enhances the expressive ability of fixed-resolution features by effectively utilizing adjacent features to complement each other, leading to improved detection results.
(3) Multi-scale Information Extraction: Although multi-scale information extraction has been explored in the literature, our method proposes a novel approach to capture local contextual information at different scales for the back-propagated level-by-level features. This enables better measurement of the composition of the feature map after back-fusion, resulting in more accurate saliency predictions. (4) Boundary and Salient Target Fusion: The proposed method introduces a boundary and salient target fusion technique that refines the saliency prediction results by addressing the low-confidence issue of boundary pixels. This fusion enhances the overall accuracy of salient object detection.
Salient Object Detection Based on Boundary Enhancement
The use of CNNs can lead to the loss of necessary information during the acquisition of advanced semantic features through continuous subsampling, resulting in inaccurate and inconsistent segmentation of salient objects of different scales. To address these issues, this paper proposes a network model based on the FPN structure, which combines bottom-up and top-down approaches to effectively integrate features from different levels. The proposed model incorporates advanced semantic features for localizing salient objects and low-level features for refining deep-layer features. Additionally, the model explicitly models the boundaries of salient objects and fuses the extracted boundary information with the salient object information to improve the overall detection accuracy. A set of mixed losses is used for supervised training to further enhance the detection effect and uniformly highlight the salient regions. Figure 1 shows the overall framework of the proposed boundary-enhanced salient target detection model. The feature extraction process follows these steps: first, an input image is processed by the encoder to extract multi-level features; then, the decoder and horizontal features are progressively fused from top to bottom, and the resulting fused features are fed into the boundary extraction module to extract the boundary information; this information is then sent back to the decoder, where it is merged with the salient target information, resulting in a pixel-level salient target prediction map.
Feature Extraction Network
As shown in Figure 2, the feature extraction network consists of two stages. The first is the encoder feature extraction stage, which uses the feature extraction network to extract features and abstract information at different resolutions and aggregates adjacent features once using a multi-level feature aggregation module. The second is the decoder reverse fusion stage, which uses a multi-scale information extraction module in the reverse fusion process to extract higher-level semantic and multi-scale information, and fuses it with the multi-level aggregation features of the previous level after upsampling. In addition, in the decoder stage, the boundary extraction module extracts the boundaries of salient targets from features containing multi-scale information; the boundary information is used to supplement the salient target information, thereby obtaining a more accurate salient target prediction map. Figure 1 illustrates the overall framework of our method, showcasing the interactions between the encoder, decoder, and boundary extraction modules.
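The following schematic PyTorch sketch mirrors the forward pass just described. It is a simplified stand-in, not the authors' implementation: the encoder, multi-level feature aggregation (MFA), multi-scale information extraction (MSIE) and boundary modules are all reduced to single convolutions so that only the data flow of Figure 1 is shown, and all layer widths are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BoundaryEnhancedSOD(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        # Encoder stub: five stages, each halving resolution (stands in for VGG/ResNet)
        self.stages = nn.ModuleList(
            nn.Sequential(nn.Conv2d(3 if i == 0 else ch, ch, 3, stride=2, padding=1),
                          nn.ReLU()) for i in range(5))
        self.mfa  = nn.ModuleList(nn.Conv2d(ch, ch, 3, padding=1) for _ in range(5))  # MFA stand-in
        self.msie = nn.ModuleList(nn.Conv2d(ch, ch, 3, padding=1) for _ in range(4))  # MSIE stand-in
        self.edge = nn.Conv2d(ch, 1, 1)        # boundary-branch stand-in
        self.head = nn.Conv2d(ch + 1, 1, 1)    # fuses salient and boundary features

    def forward(self, x):
        feats, h = [], x
        for stage in self.stages:              # bottom-up pass: F1..F5
            h = stage(h)
            feats.append(h)
        mf = [m(f) for m, f in zip(self.mfa, feats)]   # horizontal features MF1..MF5
        h = mf[-1]
        for i in range(3, -1, -1):             # top-down reverse fusion
            h = F.interpolate(self.msie[i](h), scale_factor=2, mode="bilinear",
                              align_corners=False) + mf[i]
        e = self.edge(h)                       # boundary prediction (at h's resolution)
        s = self.head(torch.cat([h, e], dim=1))
        s = F.interpolate(s, size=x.shape[2:], mode="bilinear", align_corners=False)
        return torch.sigmoid(s), torch.sigmoid(e)

model = BoundaryEnhancedSOD()
sal, edge = model(torch.randn(1, 3, 224, 224))
print(sal.shape)   # torch.Size([1, 1, 224, 224])

The real modules replacing these stubs are detailed in the following subsections.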
Additionally, the edge section, as depicted in Figure 2, is connected to the boundary extraction module, contributing to the method's boundary enhancement capability. Moreover, the multi-scale information extraction module, a critical component of our approach, captures local contextual information at different scales for improved saliency predictions.
• Encoder During the common feature extraction process in the encoder stage, VGG-16 and ResNet-50 are used as backbones in our model, with their last pooling and fully connected layers removed. Using ResNet-50, for example, the input image is processed through the backbone network to extract five sets of convolutional layers, resulting in five feature maps F_1-F_5 with different resolutions and channel numbers: from F_1 to F_5, the feature maps vary in size from 1/2 to 1/32 of the original image, with 64, 256, 512, 1024, and 2048 channels, respectively. If VGG-16 is used as the backbone, the sizes of the five feature maps are 1/1, 1/2, 1/4, 1/8, and 1/16 of the original image, respectively. From F_1 to F_5, the level of detail decreases while the level of semantic information increases. As shown in the encoder stage on the left side of Figure 2, the input image is processed through the backbone network to extract five features with different resolutions and channel numbers. To enhance the extraction of effective information from fixed-resolution features, we modified the encoder by adding a new multi-level feature aggregation module to the transmission layer: every level of features is input into the feature aggregation module, along with its higher- and lower-level neighbours, to enhance the expressive ability of the various resolution features. This yields five multi-level aggregation features, denoted MF_1-MF_5, which are transmitted to the decoder as the corresponding horizontal output at each level.
• Decoder The second stage is the reverse fusion phase, which gradually fuses the features extracted by the encoder with varying resolutions and channel numbers. This approach maximizes the use of information from different scale features and integrates them while upsampling, eventually restoring the salient target prediction map to the original image size. Instead of directly upsampling features of different sizes to the size of the original image and then splicing and merging them, the reverse fusion proceeds step by step from top to bottom; each operation only merges two features with similar resolutions, thereby reducing the introduction of noise and producing more accurate results. The decoder stage in Figure 2 illustrates the reverse level-by-level fusion process using ResNet-50 as an example. The process starts from the highest-level multi-level feature MF_5 and proceeds sequentially downwards. The feature is first input into the multi-scale information extraction module to extract its multi-scale information, and then upsampled to twice its original size. The channel dimension is then reduced using a 1 × 1 convolution so that the size and channel dimension of the multi-scale feature match those of the next, higher-resolution horizontal features; this also reduces the parameters and computation. The feature is then added element-wise to the horizontal features of that level for fusion.
Finally, a 3 × 3 convolution is applied to further abstract the features and obtain the fused feature map, which is then input to the next level. After the reverse fusion stage, the resulting feature maps h_4, h_3, h_2, and h_1 contain rich multi-scale information. These feature maps are passed through the boundary extraction module to obtain the feature EF, which contains the boundary information of the salient object. The feature map h_1 is also passed through the multi-scale extraction module to extract multi-scale information, which is then concatenated with the boundary feature EF along the channel dimension. After reducing the channel dimension with a 1 × 1 convolution, the feature map is upsampled to the original image size and passed through a Sigmoid function to obtain the final saliency prediction image. The reverse step-by-step fusion process is shown in Equation (1): h_i = Conv_3×3(MF_i + Conv_1×1(Up(M(h_i+1), 2))), with h_5 = MF_5, (1) where h_i+1 represents the feature map generated at each level in the backward fusion process; MF_i represents the multi-level aggregated features at different scales in the horizontal connection; Up(·, 2) represents an upsampling operation, which doubles the size of the input feature map; Conv_3×3(·) represents the 3 × 3 convolution operation; Conv_1×1(·) represents the 1 × 1 convolution operation; and M(·) represents the multi-scale information extraction module. In the proposed SOD method, several potential pre-trained backbone models were considered for feature extraction, including the commonly used VGG-16, ResNet-50, EfficientNet, and Inception architectures. VGG-16 and ResNet-50 were selected as the backbone models for this study for the following reasons. (1) VGG-16 is a widely used deep convolutional neural network architecture known for its simplicity and effectiveness. It consists of 16 layers, primarily composed of small 3 × 3 convolutional filters, making it easy to implement and interpret. Despite its straightforward structure, VGG-16 has shown excellent performance in various computer vision tasks, making it a reliable backbone choice for salient object detection. (2) ResNet-50 is a deeper architecture that introduces residual connections to address the vanishing gradient problem in deep neural networks. With its skip connections, ResNet-50 enables the training of very deep networks, allowing for more complex feature representations. This architecture has achieved remarkable results in image recognition tasks and has become a standard choice for various computer vision applications. The impact of using another backbone model, such as EfficientNet or Inception, can be substantial and can influence the performance of the salient object detection method. The following are some potential impacts. (1) Computational Efficiency: EfficientNet is specifically designed to achieve better efficiency by scaling the model's depth, width, and resolution in a balanced manner. Using EfficientNet as a backbone could lead to faster inference times and lower memory requirements compared with VGG-16 and ResNet-50. (2) Feature Representation: Different backbone models capture different levels of feature representation. Inception, for example, incorporates various convolutional filter sizes to capture multiple scales of information. Using a different backbone could alter the feature extraction capabilities, potentially affecting detection accuracy and robustness.
(3) Generalization: The choice of backbone model may affect how well the salient object detection method generalizes to unseen data. Some backbones may generalize better to diverse datasets, while others may perform better on specific datasets, depending on the nature of the learned features. (4) Transfer Learning: The choice of backbone model can affect the ease of transfer learning. Pre-training on large-scale datasets and fine-tuning on specific SOD datasets may require different strategies depending on the backbone's architecture. Overall, the choice of backbone model is a trade-off between computational efficiency, feature representation, and the specific requirements of the application. While VGG-16 and ResNet-50 have shown good performance in our study, other backbones such as EfficientNet or Inception may lead to different trade-offs and outcomes, and it is essential to experiment with various backbone architectures to find the best fit for a particular salient object detection task or dataset.
Multi-Level Feature Aggregation Module
In the process of feature extraction, convolution and pooling operations abstract image features at multiple levels. However, different levels of features have varying degrees of abstraction, and integrating feature information from other levels can enhance their expressive ability. For instance, shallow features can integrate information from high-level features, which helps suppress noise and enhance detailed information; high-level features can integrate information from shallow features, which further enhances semantic information; and some features can integrate information from both shallow and high-level features. To improve the expressive ability of fixed-resolution features by aggregating multi-level features, and to further enhance the model's performance, this paper proposes a multi-level feature aggregation module. The proposed module, illustrated in Figure 3, consists of two stages: complementarity and aggregation. The resolution of the input features gradually decreases from F_i−1 to F_i+1, while the number of channels gradually increases. In the complementary stage (S_1), the three input features first undergo 1 × 1 convolution to make their channel numbers consistent, which reduces computational cost and facilitates the subsequent element-wise fusion. Next, feature F_i is upsampled to supplement the information of F_i−1 and pooled to supplement F_i+1; similarly, F_i−1 is pooled and F_i+1 is upsampled to complement F_i. These pooling and upsampling operations ensure that the complementary features have the same resolution. The preprocessing step is shown in Equation (2), and the feature complementation process is calculated using Equations (3)-(5). In these equations, F_j represents the jth-level feature after reducing the channel dimension; F_i(S_1) represents the ith-level feature in the complementary stage S_1 after supplementation; Conv(·) represents the convolution responsible for changing the channel dimension; ReLU represents the rectified linear unit non-linear activation function; Up(·) represents the upsampling operation; and AvgPool(·) represents the average pooling operation. It is worth noting that the top-level feature F_5 and the bottom-level feature F_1 have only one neighbour, so in the complementary stage they have only two channels, L_2 + L_3 and L_1 + L_2, respectively.
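A minimal PyTorch sketch of the complementary stage S_1 just described is given below, assuming the three neighbouring features are first mapped to a common channel width; the aggregation stage S_2, described next, then fuses the complemented features. The channel widths in the usage example are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ComplementaryStage(nn.Module):
    def __init__(self, ch_prev, ch_mid, ch_next, ch=64):
        super().__init__()
        # 1x1 convs unify the channel dimensions of the three inputs
        self.pre = nn.ModuleList(nn.Conv2d(c, ch, 1) for c in (ch_prev, ch_mid, ch_next))

    def forward(self, f_prev, f_i, f_next):
        # f_prev: higher-resolution F_{i-1}; f_next: lower-resolution F_{i+1}
        p, m, n = (F.relu(conv(f)) for conv, f in zip(self.pre, (f_prev, f_i, f_next)))
        size = m.shape[2:]
        # complement F_i with pooled F_{i-1} and upsampled F_{i+1}, element-wise
        return m + F.adaptive_avg_pool2d(p, size) \
                 + F.interpolate(n, size=size, mode="bilinear", align_corners=False)

f1, f2, f3 = (torch.randn(1, c, s, s) for c, s in ((64, 56), (128, 28), (256, 14)))
out = ComplementaryStage(64, 128, 256)(f1, f2, f3)
print(out.shape)   # torch.Size([1, 64, 28, 28])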
The feature aggregation stage, S_2, aggregates the complementary features from the different channels to obtain horizontal features that contain multi-level information and provides them to the decoder. The calculation process is illustrated in Equation (6). The features are fused element-wise in both the complementary and aggregation stages, and MF_1 and MF_5 aggregate the features of only one neighbour. Moreover, all element-wise fusion operations in these stages are followed by a set of 3 × 3 convolutions, combined with normalization and the non-linear ReLU activation, to further abstract the features.
Multi-Scale Information Extraction Module
Convolutional networks with frequent downsampling operations suffer from a loss of target information, which accumulates across feature layers. Additionally, the amount of information contained in each feature level varies for targets of different scales, and each convolutional layer can only capture information within a fixed receptive field. Small objects leave scant information in the top-level features, and the commonly used top-level multi-scale extraction methods prove insufficient for this situation. Meanwhile, the size of salient objects in visual scenes varies significantly, complicating the salient object detection task, so effectively extracting information from data with changing target scales becomes a crucial issue. Network design must therefore consider how to extract and aggregate multi-scale information more effectively to tackle the challenge of large target-scale changes in visual scenes. Based on these issues, this section proposes a set of multi-scale information extraction modules inspired by the ASPP idea [2]. These modules can extract multi-scale information from features at a given level and address the problem of target scale changes more effectively. As depicted in Figure 4, each module comprises four branches, three of which correspond to convolutional layers with distinct dilation rates. These layers extract features with different receptive fields, containing information at various scales, which are then fused to obtain the final output featuring multi-scale information. Specifically, given an input feature h, during the forward process the feature is first processed by dilated convolutions with dilation rates of 2, 4, and 8 to obtain features sh_1, sh_2, and sh_3, respectively, which contain information at different scales, as in Equation (7): sh_i = AtrousConv_3×3(h, dilate = 2^i), i = 1, 2, 3, (7) where AtrousConv_3×3(·) represents a 3 × 3 dilated convolution kernel and dilate represents the sampling rate of the dilated convolution; to maintain the same feature resolution, the padding is kept consistent with the dilation rate. The second step fuses the original features with the extracted multi-scale information using residual operations to preserve the original feature information. The resulting features are further aggregated by a convolution operation and an activation function to improve their non-linear capability, yielding a feature M that contains multi-scale information; Equation (8) expresses this calculation, in which Conv_3×3 indicates a 3 × 3 convolution operation and sh_i + h denotes the element-wise addition of feature layers. The multi-scale extraction module presented in this paper offers three significant advantages. First, it can capture contextual information of diverse scales in spatial features.
Second, it can expand the receptive field without increasing the number of parameters. Lastly, the module has a straightforward structure, its dilation rate parameters can be adjusted to suit various datasets, and it is easy to integrate with existing network architectures.
Boundary Extraction Module
In this paper, we propose a method for extracting the boundaries of salient objects by constructing an additional branch for boundary detection. This branch runs parallel to the top-down path of the network and takes as input the features containing multi-scale information generated in the reverse stepwise fusion process, from which it predicts the boundaries of the saliency targets. The detailed structure is illustrated in Figure 5. The four features h_1, h_2, h_3, and h_4 generated in the decoder are used as input. They first pass through a ResBlock module for information conversion and channel dimension reduction; the different levels of edge features eh_1, eh_2, eh_3, and eh_4 are then obtained by a 1 × 1 convolution that changes the number of channels to 1, and are upsampled to obtain the saliency boundary prediction maps e_1, e_2, e_3, and e_4. The boundary prediction of each level is used for supervision during network training. The ResBlock module is a combination of two residual blocks with the functions of converting information and reducing channels, as shown in the upper right corner of Figure 5. The calculation process of multi-level boundary extraction is shown in Equations (9) and (10): eh_i = Conv_1×1(ResBlock(h_i)), (9) e_i = Sigmoid(Up(eh_i, n)). (10) In these equations, h_i represents the feature map generated at each level during the reverse fusion process; eh_i represents the saliency boundary feature at each level; e_i represents the saliency boundary prediction at each level; Up(·, n) represents the upsampling operation, which upsamples by a factor of n; Conv_1×1(·) represents the 1 × 1 convolution operation; ResBlock represents the information transformation module; and Sigmoid(·) represents the activation function. To obtain more precise saliency boundary predictions, we fuse the saliency boundary predictions of the different levels, e_1, e_2, e_3, and e_4; the fused output is also supervised during training. The calculation is shown in Equation (11): e_fuse = Conv_1×1(Concat(e_1, e_2, e_3, e_4)), (11) where Concat(·) indicates splicing features along the channel dimension; Conv_1×1(·) represents the 1 × 1 convolutional operation, reducing the channel dimension to 1; and e_fuse represents the saliency boundary prediction map that fuses multiple levels. To transfer the extracted boundary information of salient targets to the salient target prediction branch and compensate for missing details, the extracted multi-level boundary features eh_1, eh_2, eh_3, and eh_4 are first upsampled and then concatenated along the channel dimension before being input into the EdgeInfo module for further fusion, yielding features containing rich boundary information. The structure of EdgeInfo is shown in the lower right corner of Figure 5; it includes four convolutional layers and has the function of further fusing features and changing the number of channels. The calculation process of multi-level boundary feature fusion is shown in Equation (12).
EF = EdgeInfo(Concat(Up(eh_1), Up(eh_2), Up(eh_3), Up(eh_4))), (12) where EdgeInfo(·) represents the boundary feature aggregation module and EF represents the salient boundary features that aggregate multi-level information and can be fused with the salient object features in the next step.
Loss Function
The proposed method uses two categories of loss functions during training to address its two tasks: salient boundary detection and salient target detection. Specifically, the salient target detection loss is a mixture of two loss functions. The total loss function used for training is shown in Equation (13): L = L_sod + λ_1 L_sbdl, (13) where λ_1 is a hyperparameter used to balance the losses of the two tasks; its value is set to 1 in the experiments. The salient target detection loss function is a combination of two types of loss functions with different focuses, binary cross-entropy loss for individual pixels and consistency-enhanced loss for foreground regions, as expressed in Equation (14): L_sod = L_bce + L_cel. (14) (1) Binary Cross-Entropy Loss (BCE) The detection of salient objects commonly utilizes BCE loss as its primary loss function. As a pixel-level loss function, BCE loss does not consider pixel connectivity and therefore cannot differentiate foreground and background pixels; it neither prioritizes the pixels in the boundary area nor accounts for the integrity of the target during training. Instead, convergence is driven equally by all pixels, as in Equation (15): L_bce = −Σ_{p∈P, g∈G} [g log(p) + (1 − g) log(1 − p)], (15) where P represents the predicted saliency map, p represents a pixel in P, G represents the ground truth map, g represents the corresponding pixel in G, and log(·) represents the pixel-level logarithmic operation. Consistency-Enhanced Loss (CEL): Referencing Intersection over Union (IoU) loss, this image-level consistency loss makes training focus more on the foreground and renders it less susceptible to scale changes. Its calculation, Equation (16), and the corresponding gradient, Equation (17), are expressed as follows: L_cel = Σ_{p∈P, g∈G} (p + g − 2pg) / Σ_{p∈P, g∈G} (p + g), (16) ∂L_cel/∂p = ((1 − 2g) Σ(p + g) − Σ(p + g − 2pg)) / (Σ(p + g))², (17) where P denotes the predicted salient target map, while p denotes a pixel in this map. Equation (17) demonstrates that the gradient of the consistency-enhanced loss depends on the pixel category, resulting in the same gradient for pixels of the same category and different gradients for pixels of different categories. This difference can increase the contrast between the foreground and the background, making the internal pixels of both foreground and background more uniform. (2) Saliency Boundary Detection Loss (SBDL) The imbalance between the number of boundary pixels and non-boundary pixels, caused by the high sparsity of boundary pixels, presents a challenge in supervising the salient boundary learning process. To address this issue, a balanced binary cross-entropy loss is employed, which ensures that the supervision is not biased towards either pixel type: L_sbdl = −Σ [β g log(p) + (1 − β)(1 − g) log(1 − p)], (18) where β represents the proportion of non-boundary pixels among all pixels.
Experiments and Analysis
Below is the pseudo-code for the proposed SOD method based on boundary enhancement and multi-scale information extraction: (1) preprocess(image): This function preprocesses the input image to prepare it for feature extraction.
Common preprocessing steps include normalization, resizing, and data formatting to match the input requirements of the pre-trained backbone model. (2) extract_features(preprocessed_image): This function extracts high-level features from the preprocessed image using a pre-trained backbone model such as VGG-16 or ResNet-50. The backbone model captures rich image representations that serve as the basis for subsequent processing. (3) boundary_enhancement(features): In this step, the features obtained from the backbone model are further enhanced by incorporating boundary information. The method aims to improve the completeness and distinctiveness of salient object boundaries, making use of edge-aware techniques to highlight sharp transitions and edges in the image. (4) multi_level_feature_aggregation(enhanced_features): This function aggregates features at multiple levels to handle large-scale variations of salient objects. By combining adjacent features and leveraging complementary information, the expressive ability of fixed-resolution features is enhanced. (5) multi_scale_information_extraction(aggregated_features): The multi-scale information extraction module captures local contextual information at different scales for the aggregated features. This helps measure the composition of the feature map after back-fusion, leading to more accurate saliency predictions. (6) boundary_extraction(extracted_info): The boundary extraction module processes the extracted information to identify and extract the boundary information of salient regions. This is crucial for refining the saliency prediction results, especially in regions with low confidence. (7) fusion(boundary_info, extracted_info): The fusion process combines the boundary information with the extracted salient target information. This fusion step refines the saliency prediction results by incorporating boundary cues, leading to improved accuracy. (8) predict_saliency(fused_info): Finally, the fused information is used for saliency prediction, resulting in a saliency map highlighting the most visually significant regions in the input image.
Datasets and Evaluation Metrics
This paper presents experiments conducted on four publicly available datasets. The first is DUTS [14], the largest available saliency dataset, comprising 10,553 training images and 5019 test images and known for its complex saliency target scenes; many salient object detection methods in recent years have employed the DUTS training set to train their models. The second is DUT-OMRON [15], which consists of 5168 images containing one or more salient objects, diverse image content, and complex backgrounds. Apart from pixel-level annotations for salient object detection tasks, this dataset includes bounding box annotations and gaze point data, which can be used for object localization and gaze point prediction. The third is HKU-IS [16], comprising 4447 images characterized by multiple disconnected salient objects and low contrast between objects and background. Lastly, ECSSD [17] is an extension of the CSSD dataset; it includes 1000 images with rich semantics but complex structures, sourced from the BSD dataset, the PASCAL VOC dataset, and images from the Internet. The experiments conducted in this paper employ five widely used evaluation metrics for salient object detection tasks.
The precision-recall (PR) curve, often used in early studies, measures precision and recall by comparing the binarized predicted output with the ground truth map of the image, as in Equation (19): Precision = TP/(TP + FP), Recall = TP/(TP + FN), (19) where TP represents the number of correctly predicted salient pixels, FP represents the number of background pixels incorrectly predicted as salient, and FN represents the number of salient pixels incorrectly predicted as background. The threshold selection range is between 0 and 255: the prediction is binarized at each threshold, and the binary image and the corresponding ground truth image are used to calculate a pair of P and R values. The PR curve is plotted with recall as the abscissa and precision as the ordinate, based on the average P and R values of all images at each threshold. F-measure: This metric is a weighted harmonic mean of precision and recall, calculated as in Equation (20): F_β = ((1 + β²) × Precision × Recall) / (β² × Precision + Recall), (20) where β² is the weight parameter balancing precision and recall, usually set to 0.3 in practice to emphasize precision. We calculate the maximum F_β over the PR curve, denoted F_max; the larger the F_max, the better the performance of the model. Mean Absolute Error (MAE): MAE takes into account all pixels and reflects the similarity between the predicted map and the ground truth map, as in Equation (21): MAE = (1/(H × W)) Σ_{r=1}^{H} Σ_{c=1}^{W} |S(r, c) − G(r, c)|, (21) where H and W represent the height and width of the image, and S(r, c) and G(r, c) represent the predicted and ground truth values of the pixel at (r, c), respectively. The smaller the MAE score, the better the performance. E-measure [18]: This combines local pixel values with the image-level mean to capture both image-level statistics and local pixel-matching information, as in Equation (22): E_m = (1/(W × H)) Σ_{r=1}^{H} Σ_{c=1}^{W} φ_S(r, c), (22) where φ_S(·, ·) represents the enhanced alignment matrix, which combines local and image-level information. S-measure: This combines structural similarity at the region-aware and object-aware levels, supporting the acquisition of foreground structure in salient regions and overcoming the inability of pixel-level criteria to capture structural information. It is calculated as in Equation (23): S = α S_o + (1 − α) S_r, (23) where S_o refers to object-aware structural similarity, S_r refers to region-aware structural similarity, and α is set to 0.5.
Experimental Environment
Experiments were implemented with the PyTorch framework, and all experiments were completed on an NVIDIA GeForce RTX 3090 graphics card with 24 GB of memory. The experiments in this section trained models on the DUTS training set and used random horizontal flips to augment the dataset during the training phase to avoid overfitting. The number of training epochs was 24 and the batch size was 1; gradients were accumulated over 10 samples before each parameter update. Images were used at their original size during both training and testing. A pre-trained VGG-16 or ResNet-50 model was used to initialize the parameters of the main feature extraction network, and the remaining parameters were initialized with a normal distribution.
When the main network was VGG-16, the network used the Adam optimizer for gradient descent with a learning rate of 1 × 10^−4 and a weight decay of 5 × 10^−4; the learning rate was reduced to one-tenth of its value at epochs 8 and 16. When the main network was ResNet-50, the learning rate was changed to 5 × 10^−5. The network was trained until the loss converged; the entire training took 15 h. The ablation experiments were conducted on the DUTS dataset with VGG-16 as the backbone.
Ablation Experiment
This section proposes a boundary-enhanced salient object detection method for salient object detection tasks. To address the problem of significant scale variations of the target, a multi-level feature aggregation module is applied to the extracted features of the encoder to aggregate features of different levels and enhance their representation ability, and a multi-scale information extraction module is inserted into each level of the decoder's fusion to further extract contextual information at different scales. To address the issue of blurred boundary pixels of salient objects, a boundary extraction module extracts salient boundary information on the boundary branch, which then further supplements the salient object information. Furthermore, a set of mixed loss functions, comprising BCE for individual pixels and CEL for foreground areas, supervises the model from different perspectives during training. To validate the effectiveness of these innovations, detailed ablation experiments and analysis were conducted, comprising structural ablation experiments for the different modules and loss function ablation experiments. The FPN network structure with a VGG-16 backbone served as the baseline, with binary cross-entropy as the loss function. The dataset used for the ablation experiments was the DUTS dataset, and the metrics included MAE, maximum F-measure (F_max), E-measure (E_m), and S-measure (S_m). The results of the two sets of experiments are shown in Tables 1 and 2, respectively. Structural ablation experiments: To evaluate the effectiveness of the proposed modules, a quantitative comparison of the results obtained with different modules was performed. The structural ablation experiment adds the multi-level feature aggregation module, the multi-scale information extraction module, and the boundary extraction module separately to the baseline, and then further evaluates their two-by-two combinations and the three-module combination. The results are shown in Table 1. All modules yield significant performance improvements, with the multi-scale information extraction module performing best in isolation: compared with the baseline, the MAE indicator improves by 20.6%, F_max by 5.7%, E_m by 4.2%, and S_m by 2.3%. Combinations of two modules improve further on single modules, and performance is best when all three modules are used together, with improvements of 30.2%, 7.1%, 5.3%, and 3.5% in MAE, F_max, E_m, and S_m, respectively. Loss function ablation experiment: With all modules equipped, the loss function ablation experiment evaluated each loss function individually and the combination of the two.
As can be seen from Table 2, CEL loss yields a very large improvement, and the effect is best when the two losses are mixed: the MAE, F_max, E_m, and S_m indicators improve by 6.8%, 0.8%, 1.4%, and 0.2%, respectively, achieving the best results in this section. To show the effect more intuitively, this section further visualizes the detection results of the baseline and the proposed method in Figure 6. The proposed method is significantly better than the baseline: it not only detects targets of different scales, but also produces clear boundaries and uniformly consistent internal regions.
Comparative Experiments with Existing Methods
To demonstrate the superior performance and fairness of the proposed method, this section conducted comparisons on four common salient object datasets (DUTS-TE, DUT-OMRON, HKU-IS, and ECSSD) against 11 mainstream methods of the past three years based on VGG-16 or ResNet-50, including AFNet [19], PAGE [20], MLMSNet [21], CPD [22], GateNet [23], ITSD [24], AMPNet [25], EGNet [6], BANet [26], BASNet [9], and DNA [27]. The evaluation metrics used in all comparison experiments included E_m, S_m, MAE, and the maximum F-measure. It is worth noting that all results of the comparative experiments are taken from the original literature or calculated from the saliency detection maps provided by the authors. The comparison results based on the VGG-16 and ResNet-50 backbones are shown in Tables 3-6. Table 3. Comparative experiments on DUTS-TE and DUT-OMRON, using VGG-16 as the backbone. Table 4. Comparative experiments on HKU-IS and ECSSD, using VGG-16 as the backbone. Table 5. Comparative experiments on DUTS-TE and DUT-OMRON, using ResNet-50 as the backbone. Table 6. Comparative experiments on HKU-IS and ECSSD, using ResNet-50 as the backbone. From Tables 3 and 4, it can be seen that the proposed method with VGG-16 as the backbone achieves strong results on the four evaluation indicators across the four datasets. Using MAE as the reference, the proposed method is better than all comparison methods, with an average improvement of 3.6% over the second-best method on each dataset. Using F_max as the reference, it is better than the other methods on three datasets and ranks second on DUT-OMRON, with an average improvement of 0.2% over the second-best method on the three datasets where it leads. Using E_m as the reference, it leads all comparison methods on DUTS-TE and HKU-IS, with an average improvement of 0.3%; it ranks second on the other two datasets, with an average gap to the best method of only 0.2%. Using S_m as the reference, it leads all comparison methods on the HKU-IS dataset, with an improvement of 0.2% over the best comparison method, and ranks second on the DUTS-TE and DUT-OMRON datasets, with an average gap to the best method of only 0.5%. Analysis of the results shows that the DUT-OMRON and ECSSD datasets contain many visually significant objects with complex structures and diverse contents, while the proposed method focuses more on solving the problems of scale variation and boundary blur and imposes constraints at the pixel and image levels. It therefore does not achieve the best performance on the E_m index, which focuses on local pixel information, or the S_m index, which focuses on local structural similarity.
In addition, when VGG-16 is used as the backbone, the learning ability of the network model is limited and cannot fully utilize the performance of the module, resulting in poor performance on some data, but it is still very competitive. HKU-IS ECSSD It can be seen from Tables 5 and 6 that the proposed method in this study based on ResNet-50 as the backbone achieves greatly advanced results on the four evaluation metrics of the four datasets. When using the MAE metric as a reference, the proposed method outperforms all comparison methods, with an average improvement of 6.2% compared to the second-best method in each dataset. Similarly, when using the Em metric as a reference, the proposed method also outperforms all comparison methods, with an average improvement of 0.7% compared to the second-best method in each dataset. When using the F max metric as a reference, the proposed method outperforms existing methods on three datasets and achieves an average improvement of 0.3% compared to the second-best method in the best three datasets. When using the Sm metric as a reference, the proposed method only outperforms all comparison methods on the DUTS-TE and HKU-IS datasets, with an average improvement of 0.2%. The analysis results show that using a more complex ResNet-50 as the backbone can further improve the performance of the module compared to using VGG-16 as the backbone, indicating that more complex backbone networks can further enhance the performance of the module. The experiment did not achieve the best results among the comparison methods in the F max metric on DUTS-OMRON, and the performance on the Sm metric on DUTS-OMRON and ECSSD was average. This is mainly because the proposed method focuses more on solving the problems of scale variation and boundary blur, and there is still room for further improvement in handling complex structured image data. In addition, this section also shows the PR curve and F-measure curve comparisons between the proposed method and other mainstream methods in Figure 7. It can be seen that the proposed method achieves advanced results on the DUTS-TE, HKU-IS, and ECSSD datasets, and it is also very competitive on the DUT-OMRON dataset. Tables 3-6 and Figure 7 demonstrate the superior performance of our method in terms of quantitative metrics such as F-measure and MAE. As shown in Figure 8, in order to more intuitively demonstrate the significant object detection performance of our method, this section presents visualizations of the detection results of our method and other comparison methods in some scenarios. Figure 8 displays various scenes, including large objects, small objects, multiple objects, linear objects, partially occluded objects, and complex scenes. It can be seen from the figure that our method not only highlights the complete salient region, but also detects clear boundaries, while ensuring that the predicted salient region has uniformly consistent pixels inside. Regarding the analysis of Laguerre-Gaussian beams in both monochromatic [28] and polychromatic cases [29,30], as well as beams passing through turbulent atmosphere [31,32], it is believed that the SOD method has the potential to be applied in these areas. SOD techniques can play a crucial role in identifying and analyzing important regions or features in complex optical wavefronts, including Laguerre-Gaussian beams. The ability to accurately detect and analyze salient regions in these optical structures can provide valuable insights into their characteristics and behavior. 
While the proposed method has demonstrated promising performance in diverse settings, it is acknowledged that there is a need for tailored adaptations and further investigations to address the specific challenges presented by Laguerre-Gaussian beams and beams affected by turbulent atmosphere. These scenarios may require modifications in the feature extraction process, boundary enhancement, and multi-scale information extraction to account for unique optical properties and disturbances. Exploring such adaptations and conducting thorough experimental validations in these specific research areas are avenues for future research. The limitations and challenges of the proposed approach lie in the following points. (1) Boundary Extraction Accuracy: The performance of the proposed method heavily relies on the accuracy of the boundary extraction module. If the boundary information is not accurately captured, it may lead to suboptimal saliency maps and affect the overall performance of the method. (2) Sensitivity to Noise: The method's performance might be sensitive to noise in the input data, which could result in false positives or false negatives in the saliency maps. Addressing noise robustness is essential for achieving consistent and reliable results. (3) Computational Complexity: The proposed approach involves multiple stages, including edge extraction, multi-level feature aggregation, and multi-scale information extraction. This might lead to increased computational complexity, making real-time applications challenging on resource-constrained devices. Future research directions for the SOD method based on boundary enhancement lie in the following points. (1) End-to-End Learning: Exploring end-to-end learning approaches could streamline the salient object detection pipeline, reducing computational complexity and potentially improving performance by jointly optimizing all modules. (2) Attention Mechanisms: Investigating the integration of attention mechanisms into the proposed approach could enhance the model's focus on salient regions, leading to improved saliency map quality. (3) Saliency Propagation: Researching techniques for propagating saliency information through hierarchical architectures could enable more precise localization of salient objects at different scales. (4) Saliency in Videos and 3D Data: Extending the proposed method to video data and 3D point cloud data would open up new applications, such as video object segmentation and 3D scene understanding. Conclusions This paper proposes a salient object detection method based on boundary enhancement to solve the problem of large object scale variation and blurred border area pixels in the salient object detection task. This method has four main innovations: First, this method proposes a multi-level feature aggregation module for horizontal connection, using feature information of different resolutions to complement each other, and further enhancing the expressive ability of single-resolution features. Second, this method proposes a multi-scale information extraction module, which is inserted into the fusion process of each stage of the decoder to extract information of different scales from fixed-resolution features to better solve the problem of large changes in the target scale. 
Third, this method explicitly models the saliency boundary, uses the boundary extraction module to extract the boundary information and further supplements the salient target feature information, which solves the problem of unclear boundary pixels to a certain extent. Fourth, the method uses a hybrid loss function to supervise model training at different levels to highlight salient regions more uniformly. The experimental results show that the method can achieve competitive performance on four common saliency datasets and also outperforms current mainstream methods on multiple indicators.
11,747.4
2023-08-01T00:00:00.000
[ "Computer Science", "Engineering" ]
A unified approach to goodness-of-fit testing for spherical and hyperspherical data We propose a general and relatively simple method for the construction of goodness-of-fit tests on the sphere and the hypersphere. The method is based on the characterization of probability distributions via their characteristic function, and it leads to test criteria that are convenient regarding applications and consistent against arbitrary deviations from the model under test. We emphasize goodness-of-fit tests for spherical distributions due to their importance in applications and the relative scarcity of available methods. Introduction Let ∥ • ∥ be the Euclidean norm in R d , d ≥ 2, and write S d−1 := {x ∈ R d : ∥x∥ = 1} for the surface of the unit sphere in R d .In this paper, we consider the problem of testing goodness-of-fit for distributions defined on the sphere S2 or on the hypersphere S d−1 , where d > 3.In this respect, there is a plethora of such tests for distributions defined on R d , even in the multivariate case d > 1. Besides, also goodness-of-fit tests on the circular domain S 1 is a relatively wellexplored area.For the latter case we refer, e.g., to [16] (Chapters 6 and 7), [20] and [21]. On the other hand, the same problem for data taking values on S d−1 , where d ≥ 3, has been mostly confined to testing for uniformity.Nevertheless, and while the notion of "non-preferred direction" and hence testing for uniformity is certainly central to (hyper)spherical data analysis, there are several more flexible distributions, which in fact often have the uniform as a special case. The reader is referred to the monographs of [23], Section 2.3, and [24], Section 9.3, for such non-uniform models for (hyper)spherical data.At the same time, it seems that goodness-of-fit tests specifically tailored to hyper(spherical) laws are scarce, certainly in the case of a composite null hypothesis, where distributional parameters need to be estimated from the data at hand, but also for a completely specified hypothesis with fixed (known) parameter values.For the latter case, the test based on nearest neighbors proposed in [13] seems to be one of the few tests available, while to the best of our knowledge, there is much need for research in the case of a composite hypothesis. In view of these lines, we suggest a procedure for testing goodness-of-fit for distributions defined on S d−1 , where d ≥ 3. 1 The suggested test is novel in that it is general-purpose suitable for arbitrary (hyper)spherical distributions, either with fixed or estimated parameters, and it is straightforwardly applicable provided that one can easily draw Monte Carlo samples from the distribution under test.Suppose X is a random (column) vector in R d taking values in S d−1 with a density f with respect to surface measure and characteristic function (CF) φ(t) = E(e it ⊤ X ), t ∈ R d , where ⊤ denotes transpose, and i = √ −1 stands for the imaginary unit.We start our exposition with the simple null hypothesis where f 0 is some given density on S d−1 , which should be tested against the general alternative H A that the distributions pertaining to f and f 0 are different. If X 0 has density f 0 and CF φ 0 (t) = E(e it ⊤ X0 ), t ∈ R d , say, the standard CFbased statistic for testing H 0 versus H A is given by Here, is the empirical CF of X 1 , . . ., X n , and X 1 , . . ., X n are independent and identically distributed (i.i.d.) 
copies of X.In (2), the domain of integration as well as the nonnegative weight function w(•) will be specified below in a way that D n,w is amenable to computation and that a test of H 0 that rejects H 0 for large values of D n,w is consistent against each alternative to H 0 .Notice that D n,w is an estimator of the population Fourier-type discrepancy measure between f and f 0 . The starting point of this paper is that the approach outlined above assumes that the functional form of the CF φ 0 is known.Such knowledge, however, is only available for distributions on the real line R 1 and for a few selected cases of multivariate distributions, such as the multivariate normal and the multivariate stable distribution; see [12] and [26]. In order to circumvent this obstacle, which is even more challenging for distributions taking values on S d−1 , we suggest the test statistic Here, is the empirical CF of Y problem, one sample being the data X 1 , . . ., X n at hand, while the other consists of artificial data generated under the null hypothesis H 0 .For more details on CF-based tests for the two-sample problem the reader is referred to [25] and [14]. This idea also applies to the problem of testing the composite null hypothesis against general alternatives.Here, {f 0 (•, ϑ) : ϑ ∈ Θ} is a given family of densities on S d−1 that is parameterized in terms of ϑ ∈ Θ, where Θ ⊂ R s for some s ≥ 1. In this setting, the test statistic in ( 5) is modified according to with φ n (t) defined in (3) and where Y j , j ∈ {1, ..., m}, are i.i.d.copies of a random vector having density In this connection, we note that the idea of a goodness-of-fit method that employs an artificial sample from the distribution under test seems to date back to [15], at least for independent data and simple hypotheses.Recently, [9] proposed a CF-based method using the notion of artificial samples for goodness-offit within the family of multivariate elliptical distributions, [10] employ artificial samples in order to specifically test multivariate normality in high dimensions using nearest neighbors, while [2] applies a test procedure for mixed data by means of artificial samples. The remainder of this work unfolds as follows.In Section 2 we obtain the limit null distribution of T n,m,w as well as the corresponding law under fixed deviations from H 0 .In Section 3, the validity of a bootstrap resampling scheme necessary for actually carrying out the test for simple hypotheses with fixed parameters is established, while in Section 4 a corresponding bootstrap resampling for the composite hypothesis test statistic T n,m,w is suggested.Section 5 contains an extensive Monte Carlo study of the finite-sample behavior of the new tests including comparisons, while Section 6 illustrates real-data applications. The final Section 7 provides some discussion. Asymptotics In this section, we provide the limit distribution of T n,m,w defined in (5).To be flexible with respect to both the region of integration and to the weight function w, let M be some nonempty Borel set in R d , and let µ be some finite measure on (the Borel subsets of) M .Thus, M could be R d itself, and µ could be absolutely continuous with respect to the Lebesgue measure in R d , or M could be S d−1 , and µ could be absolutely continuous with respect to spherical measure.Notably, M could also be some countable subset T of R d , with µ having a probability mass with respect to the counting measure on T . 2n this setting, let X, X 1 , X 2 , . . .and Y, Y 1 , Y 2 , . . 
.be independent M -valued random vectors that are defined on some common probability space (Ω, A, P). Moreover, let X, X 1 , X 2 , . . .be i.i.d. with density f with respect to µ and CF are the empirical CF's of X 1 , . . ., X n and Y 1 , . . ., Y m , respectively.This section tackles the limit distribution of as m, n → ∞, under each of the conditions φ = ψ and φ ̸ = ψ. Since S(−x) = −S(x) and S(0 d ) = 0, where 0 d is the origin in R d , the sum of the imaginary parts vanishes, and we obtain A further simplification is obtained if we assume that the set M -like R d , Now, writing B(M ) for the σ-field of Borel sets on M , let be the separable Hilbert space of (equivalence classes of) measurable functions and put , where U and V are centred random elements of H having covariance kernel K given in (12).From (11), we obtain where [5]).In view of (15) and the continuous mapping theorem, the assertion follows. The next result gives the almost sure limit of (m+n)T n,m /(mn) as m, n → ∞. Under the conditions on M and µ stated in Theorem 2.1, we have and, regarded as random elements of H, put Likewise, write a = ϱ(•) and b = b(•) for the degenerate random elements of H that are the expectations of A n and B m , respectively.By the strong law of large numbers in Banach spaces (see, e.g., [19]), we have It is readily seen that ∆ equals the almost sure limit figuring in Theorem 2.2. Thus, ∆ is the measure of deviation between the distributions of X and Y , expressed in form of a weighted L 2 -distance of the corresponding characteristic functions, and this measure of deviation is estimated by T n,m .As the next result shows, the statistic T n,m , when suitably normalized, has a normal limit To prove this result, we need the condition lim for some τ ∈ [0, 1].In contrast to Theorem 2.1 and Theorem 2.2, this condition is needed now to assess the asymptotic proportions of the X-sample and the In what follows, put and let Furthermore, let where ϱ(t) and b(t) are given in ( 16). Theorem 2.3 Suppose the standing assumptions on M and µ hold.If ∆ > 0, under the limiting regime (18), where Proof.The proof follows the lines of the proof of Theorem 3 of [3].In view of Theorem 2.2, condition (22) of [3] holds.Let A n and B m as in (17), and put Z n,m := A n − B m .Furthermore, write z = z(•) for the degenerate random element of H, where z(t) is given in (21), and define where a m,n and b m,n are given in (13).By the central limit theorem for H-valued as m → ∞, where A and B are independent centred Gaussian random elements of H with covariance kernels K 1 and K 2 , respectively, where K 1 and K 2 are given in (19).In view of ( 18), the continuous mapping theorem yields (20).Thus, also condition ( 23) of [3] holds, and the proof of Theorem 2.3 follows in view of Notice that the second summand on the right hand side is o P (1) in view of the tightness of (c m,n (Z n,m − z)), and the first summand converges in distribution to 2⟨Z, z⟩, which has the stated normal distribution N(0, σ 2 ). 
Remark 2.4 Compared to [9], who address the problem of composite hypotheses, the limit results of this section are obtained for simple hypotheses without estimated parameters.However, the results obtained herein hold for artificial sample size m ̸ = n, which is much more general and thus flexible than the case m = n treated by [9].Moreover, our setting is different from that of elliptical distributions on the classical Euclidean space R d .In the following section, we suggest a resampling version of the test, and we prove its asymptotic validity. Resampling under a simple hypothesis Since, under H 0 , both the finite-sample and the limit distribution of T n,m as n, m → ∞ depend on the unknown underlying distribution of X To prove that this bootstrap procedure yields a test of H 0 of asymptotic level α, we use a Hilbert space central limit theorem for triangular arrays (see [22]). This theorem reads as follows. Theorem 3.1 Let {e j : j ≥ 1} a complete orthonormal basis of the separable Hilbert space H with inner product ⟨•, •⟩ and norm ∥ • ∥ H .For each m ≥ 1, let X m1 , X m2 , . . ., X mm be independent H-valued random elements such that E(⟨X mj , e ℓ ⟩) = 0 and E∥X mj ∥ 2 H < ∞ for each j ∈ {1, . . ., m} and each ℓ ≥ 1.Put S m = m j=1 X mj , and let C m be the covariance operator of S m .Assume that the following conditions hold: ), and write where a m,n and b m,n are defined in (13). Theorem 3.2 In the setting given above suppose that, under the limiting regime Proof.The proof is similar to that of Theorem 2 of [4] and will thus only be sketched.Notice that are i.i.d.centred random elements of the Hilbert space H = L 2 (M, B(M ), µ) that, for a fixed complete orthonormal system of H, satisfy E(⟨X m,n,j , e ℓ ⟩) = 0 and E∥X m,n,j ∥ 2 H < ∞ for each j ∈ {1, . . ., n} and each ℓ ≥ 1.The covariance function of the process U m,n = U m,n (•) is given by and the covariance operator Since the function CS is bounded and continuous, X m,n By dominated convergence, we obtain ⟨C m,n g, h⟩ → ⟨C ∞ g, h⟩ (g, h ∈ H), which shows that condition (i) of Theorem 3.1 holds.The proof of condition (ii) of Theorem 3.1 follows the reasoning given on p. 603 of [4] by The latter limit has the same distribution as W ∞ . Notice that the test statistic T n,m figuring in (11), computed on the ran- , where W n,m is given in (22).From Theorem 3.2, we thus have the following corollary.Corollary 3.3 The limit distribution of the test statistic T n,m under the limiting regime (18) Let H , where W is given in Theorem 2.1.This shows the asymptotic validity of the bootstrap. Remark 3.4 The resampling bootstrap procedure applied herein may also be replaced by a permutation procedure.The validity of the exhaustive permutation (that includes all possible permutations) may be directly obtained by observing that, under the null hypothesis H 0 , the observations (x 1 , ..., x n , y 1 , ..., y m ) are exchangable.Another potential resampling scheme may be that of weighted bootstrap; see [1]. Resampling under a composite hypothesis Analogously to T n,m,w , the limit null distribution of T n,m,w = T n,w (x 1 , ..., x n , y 1 , ..., y m ) depends (in a very complicated way) on unknown quantities, and hence it cannot be used to compute critical values and actually carry out the test.To this end, we consider a parametric bootstrap procedure involving the test statistic in (8) computed on the basis of bootstrap observations from f 0 (•; ϑ), where the parameter ϑ is replaced by estimators. 
More precisely, let T n,m,w,obs denote the observed value of the test statistic. For given α ∈ (0, 1), write t * n,m,α for the upper α-percentile of the bootstrap distribution of T n,m,w .We then define the test function as In practice, the bootstrap distribution of T n,m,w,obs is approximated as follows: Then we approximate the upper α-percentile t * n,m,α in (24) of the null distribution of T n,m,w by the upper α-percentile of the empirical distribution of T * n,m,w,1 , . . ., T * n,m,w,b .Although we provide no asymptotic theory for the resampling under a composite hypothesis, our simulations show that the above method works well.Nevertheless, it remains an open problem to formally prove that this bootstrap is asymptotically valid. Simulations In this section we provide results of competitive Monte Carlo simulations for the case of both a simple and a composite hypothesis.We throughout restrict the simulation to the spherical setting for the dimension d = 3 and, for computational feasibility, to the sample size n = 50.All simulations are performed using the statistical programming language R, see [29].We implement the test statistic by fixing the measure µ(dt) to be the density of the zero-mean spherical stable distribution.Then the test may be computed as in eqn.(10) with C(x) = e −γ∥x∥ ξ , where (ξ, γ) ∈ (0, 2] × (0, ∞) denote tuning parameters which are at our disposal and provide a certain flexibility of the test with respect to power against different alternatives.Another option, although not yielding a proper measure, is to adopt the approach taken in [28], which again results in the test statistic given in (10) with C(x) = −∥x∥ ξ , ξ ∈ (0, 2). The spherical distributions were generated using the package Directional, see [30], and the uniformity tests by the package sphunif, see [17]. Testing the simple hypothesis of uniformity We test the hypothesis ) is the surface area of the (d − 1)-sphere.Hence we test whether f is the density of the uniform law U(S d−1 ), which is a classical testing problem in directional statistics.For an overview of existing procedures we refer to [16].As competing tests we consider the following procedures: • The modified Rayleigh test R n , see [24], Section 10.4.1 based on the mean of the directions, • the [13] test N N J a based on volumes of the J nearest neighbor balls with power a, • the Bingham test B n , see [6], based on the empirical scatter matrix of the sample, • the Sobolev test, see [18], G n based on Sobolev norms, and • the test of [11] CA n , which is based on random projections that characterize the uniform law. Empirical critical values for each testing procedure have been obtained by a Monte Carlo simulation study under H 0 with 100000 replications. We considered the following alternatives to the uniform distribution on S d−1 . These alternatives are chosen to simulate different uni-, bi-and trimodal models. For details on the hyperspherical von Mises-Fisher distribution, see Section 9.3 in [24]. • The density of the von Mises-Fisher distribution depends on the mean direction θ ∈ S d−1 and a concentration parameter κ ≥ 0, and it is given by Here, I d/2−1 is the modified Bessel function of the first kind and order d/2 − 1.This class is denoted with vMF(θ, κ). 
• In a similar manner as for the mixture of two vMF distributions, we simulate a mixture of three von Mises-Fisher distributions with different centers, by additionally simulating independently a third random vector Y 3 ∼ vMF(θ 3 , κ 3 ) and generating the member X by We denote this class with MMF((p, p, In each of the alternatives, we put µ 1 = (1, 0, . . ., 0), 1 = (1, . . ., 1)/ √ d, and Here and in the following, T {ξ} n,γ stands for the test in eqn.(10) with C(x) = e −γ∥x∥ ξ , where (ξ, γ) ∈ (0, 2] × (0, ∞) as well as T SR n,a for the test with C(x) = −∥x∥ a .The result of the simulation is displayed in Tables 1 and 2 for the choice ξ = 2 and the stated competitors.As can be seen, the suggested tests perform well in comparison, although they are never the best performing procedures.This behavior might be explained by the approximation of the true characteristic function under the null hypothesis.To investigate the impact of the sample size m of the simulated data set Y 1 , . . ., Y m , we simulated the empirical power of the test for four vMF distributions and for different values of m, see Figure 1.Clearly, the choice of m has an impact on the estimation, and larger values of m are desirable, but increasing m leads to longer computation time.Table 3 exhibits the impact of the weighting measure µ and hence of the choice of the function C(•).In terms of power for the uni-and bimodal alternatives considered, the choice of C(•) has nearly no influence on the empirical power, with the exception of the MMF((0.5, 0.5), (−µ 1 , µ 1 ), (2, 2)) alternative, where T Empirical rejection rates for testing uniformity for the test T Empirical rejection rates for testing uniformity for the test T {ξ} n,γ for ξ = 1, 1.5 as well as T SR n,a (n = 50, m = 500, α = 0.05, 10000 replications) Testing the fit to the von Mises-Fisher distribution For the case of a composite hypothesis, we consider the hypothesis that the underlying density belongs to the family of von Mises-Fisher distributions vMF(κ, θ), i.e., we test the hypothesis against general alternatives.The main difference to subsection 5.1 is that we consider a test to a family of distributions, where the parameters are unknown and hence have to be estimated.To test the hypothesis we chose T {2} n,γ in eqn.(10) with C(x) = e −γ∥x∥ 2 , for different values of the tuning parameter γ, and we implemented the parametric bootstrap procedure from Section 4. To approximate the unknown parameters we calculated the maximum likelihood estimates for κ and θ as proposed in Section 10.3.1 of [24].As far as we know, testing composite hypotheses for spherical or hyperspherical distributions with estimated parameters has not been considered before in the literature.As alternative models we chose the same distributions as described in Subsection 5.1. In view of the extensive computation time due to the parametric bootstrap procedure, we considered the simulation setting n = 50, m = 200, a sample size of 500 in the bootstrap algorithm, and 5000 Monte Carlo replications.Throughout the study, we fixed the significance level to 0.05.The results are reported in Table 4. Notably, the novel test maintains the nominal significance level very closely, and its power with respect to bimodal alternatives increases the more these two modes are pronounced. 
Testing the fit to the angular central Gaussian distribution In this subsection, we consider testing the fit to an angular central Gaussian model, i.e., we test the hypothesis Here, Σ is a symmetric positive definite (d × d)-parameter matrix, which is identifiable up to multiplication by a positive scalar.For information regarding this model, see [24], Section 9.4.4,and for a numerical procedure to approximate the maximum likelihood estimator of the unknown parameter matrix Σ, see [31]. To the best of our knowledge, testing the fit to the angular central Gaussian family has not been considered in the literature. The simulation parameters match the ones of Subsection 5.2.In complete analogy, we considered T Results are presented in Table 5.In this case the bootstrap testing procedure controls the type I error, while performing well for most of the alternatives considered. Real data We revisit the paleomagnetic data in [27], which is an example of spherical data. Paleomagnetic data consist of observations on the direction of magnetism in either rocks, sediment, or in archeological specimens.These data are measured at various geological points in time and spatial locations.The directions are usually measured as declination and inclination angles based on strike and dip coordinates, see [27] and the references therein for more information.The data considered are taken from the GEOMAGIA50.v3database, see [8].For simplicity, we analyse the data provided in the supplementary material of [27].The full data set consists of n = 1137 entries (variables are age, dec, inc, lat, and lon) collected at a single spatial location, which is the Eifel maars (EIF) lakes in Germany with relocated nearby data, for details see [27].The analysed di- 6.For all significance levels and each choice of the tuning parameter, the tests reject the null hypothesis, indicating a poor fit of the von Mises-Fisher family for the subset of the data. As a second parametric family of distributions, we consider the Kent distribution, defined by the density Here, κ > 0 is a concentration parameter and θ ∈ S d−1 is the mean direction. Moreover, A is a symmetric d × d-matrix with tr(A) = 0 and Aθ = 0 that depends on an 'ovality' parameter β, see [24].Hence we test the hypothesis d− 1 or the grid Z d -is symmetric with respect to the origin 0 d , i.e., we have −M = M , where −M := {−x : x ∈ M }.Furthermore, we suppose that the measure µ is invariant with respect to the reflection T (x) := −x, x ∈ R d , i.e., we have µ = µ T , where µ T is the image of µ under T .By transformation of integrals, we then obtain S(x) = M sin(t ⊤ x) µ(dt) = −S(x), x ∈ R d , and thus S(x) = 0, x ∈ R d .Putting CS(ξ) := cos ξ + sin ξ, ξ ∈ R, and using the addition theorem cos(α − β) = cos α cos β + sin α sin β, some algebra yields where G is a centred random element of H with covariance operator C characterized by ⟨Ch, e ℓ ⟩ = ∞ j=1 ⟨h, e j ⟩a jℓ for each h ∈ H and each ℓ ≥ 1.To apply Theorem 3.1 in our situation of the Hilbert space H := L 2 (M, B(M ), µ) let, in greater generality than considered so far, X m,n 1 , . . ., X m,n n , Y m,n 1 , . . ., Y m,n m be i.i.d.M -valued random vectors with common distribution, and put E with µ, and the region of integration with M .To prove condition (iii) of Theorem 3.1, notice that, with X m,n,j defined in(23), the fact that |CS(•)| ≤ 2 and Hölder's inequality give⟨X m,n,j , h⟩ ≤ 2 √ n (µ(M )) 1/2 ∥h∥ 2 H , h ∈ H. 
⟨X m,n,j , h⟩ 2 1 |⟨X m,n,j , h⟩| > ε ≤ 4µ(M )∥h∥ 2 H P(|⟨X m,n,1 , h⟩| > ε),and thus also condition (iii) of Theorem 3.1 holds.According to Theorem 3.1, we have U m,n D −→ W ∞ .In the same way, V m,n D −→ W ∞ , where, due to the independence of U m,n and V m,n , W ∞ is an independent copy of W ∞ .In view of (22) and the continuous mapping theorem, it follows that W n,m F n and G m denote the empirical distributions of X 1 , . . ., X n and Y 1 , . . ., Y m , respectively, and write H n,m := n m+n F n + m m+n G m for the empirical distribution of the pooled sample X 1 , . . ., X n , Y 1 , . . ., Y m .By the Glivenko-Cantelli theorem, H n,m converges weakly to H ∞ := (1−τ )F +τ F = F with probability one under the limiting regime (18), and thus the bootstrap distribution of T n,m converges almost surely to the distribution of ∥W ∞ ∥ 2 H .The latter distribution coincides with the distribution of ∥W ∥ 2 outperform the T SR n,a -procedures for some values of the tuning parameter γ. bootstrap p-values are reported in Table6.With the exception of the tuning parameter γ = 0.5, the p-values indicate that we are not able to reject the hypothesis of fit of an underlying Mises-Fisher distribution at any level.For their analysis, the authors in[27] consider rocks of age 1250 and hence determine a subset of the data of sample size n = 50, for a plot see Figure2(right).They propose to use a new spherical model, namely a distribution of Kent type, by applying a transformation to the von Mises-Fisher density.The results of our test of fit to the von Mises-Fisher law for the subset are displayed in the second row of Table 1 , . . ., Y m , where, independently of X 1 , . . ., X (5) the random vectors Y 1 , ..., Y m are i.i.d.copies of X 0 .Of course, realizations of Y 1 , ..., Y m are generated via Monte Carlo.Notice that ψ m is an estimator of the CF φ 0 .In this way, the functional form of φ 0 is not needed in the test statistic T n,m,w in(5), which is reminiscent of a CF-based test for the two-sample equipped with the inner product ⟨u, v⟩ = M uv dµ and the norm ∥u∥ H = ⟨u, u⟩ 1/2 , u ∈ H. Suppose that φ = ψ.If M is symmetric with respect to 0 d and µ Since U n and V m are independent for each pair (n, m), also U and V are independent, and we have (U n , V m ) 1, we use a bootstrap procedure in order to carry out a test that rejects H 0 for large values of T n,m .The bootstrap distribution of T n,m is the conditional distribution of T n,m given the pooled sample X 1 , . . ., X n , Y 1 , . . ., Y m , and a test of H 0 at nominal level α rejects H 0 if T n,m exceeds the (1 − α)-quantile of this bootstrap distribution.Since the bootstrap distribution is difficult to compute, it is estimated by a Monte Carlo procedure that repeatedly samples from the empirical distribution of the pooled sample.To be specific, one first computes the observed value t n,m of T n,m based on realizations x 1 , . . ., x n , y 1 , . . ., y m of X 1 , . . ., X n , Y 1 , . . ., Y m , respectively.In a second step, one generates b inde- pendent samples by Monte Carlo simulation.Here, for each j ∈ {1, . . ., b}, the j th sample consists of x 1 (j), . . ., x n (j), y 1 (j), . . ., y m (j), where these values have been chosen independently of each other with a uniform distribution over {x 1 , . . ., x n , y 1 , . . ., y m }.For each j ∈ {1, . . ., b}, one then computes the value Table 1 Empirical rejection rates for testing uniformity for the test T
7,113.4
2023-05-24T00:00:00.000
[ "Mathematics" ]
Depolarization Increases Phosphatidylinositol (PI) 4,5-Bisphosphate Level and KCNQ Currents through PI 4-Kinase Mechanisms* A growing body of evidence shows that membrane phosphatidylinositol 4,5-bisphosphates (PtdIns(4,5)P2, PIP2) play an important role in cell signaling. The presence of PIP2 is fundamentally important for maintaining the functions of a large number of ion channels and transporters, and for other cell processes such as vesicle trafficking, mobility, and endo- and exocytosis. PIP2 levels in the membrane are dynamically modulated, which is an important signaling mechanism for modulation of PIP2-dependent cellular processes. In this study, we describe a novel mechanism of membrane PIP2 modulation. Membrane depolarization induces an elevation in membrane PIP2, and subsequently increases functions of PIP2-sensitive KCNQ potassium channels expressed in Xenopus oocytes. Further evidence suggests that the depolarization-induced elevation of membrane PIP2 occurs through increased activity of PI4 kinase. With increased recognition of the importance of PIP2 in cell function, the effect of membrane depolarization in PIP2 metabolism is destined to have important physiological implications. Phosphoinositides are minor phospholipids in cellular membranes. However, they play an important role in cellular signaling. Phosphatidylinositol 4,5-bisphosphate (PIP 2 ) 3 is a major phosphoinositide of the plasma membrane that comprises about 1% of plasma membrane phospholipids (1). PIP 2 has long been known as the precursor of two important second messengers, diacylglycerol (DAG) and inositol trisphosphate (IP 3 ), produced when PIP 2 is cleaved by phospholipase C (PLC). However, it is now well documented that PIP 2 is also important in the attachment of the cytoskeleton to the plasma membrane, exocytosis, endocytosis, membrane trafficking, and the activation of enzymes (1)(2)(3). Among the targets of PIP 2 signaling, ion channels have been the focus of recent studies. Many members of ion channel fam-ilies have been shown to be PIP 2 -sensitive (4 -6). The physiological significance of PIP 2 modulation of ion channels is best manifested when the channel function is altered under conditions of PIP 2 hydrolysis, a process that is initiated by the activation of membrane receptors by a variety of neuronal transmitters or hormones. One well-studied case is the receptor-mediated inhibition of M/KCNQ potassium currents. It had long been a mystery until PIP 2 was implicated in the inhibition of M/KCNQ currents when a G q -coupled receptor like muscarinic M 1 is activated (7)(8)(9). It is now accepted that PLCmediated hydrolysis of PIP 2 serves as the major mechanism of neurotransmitter-and neuropeptide-induced inhibition of M/KCNQ currents (5,6,10). Apart from G protein-coupled receptors, activation of other membrane receptors such as EGF and NGF receptors also employ a similar mechanism to the modulation of M/KCNQ function (11,12). Apart from PLC-induced cleavage of membrane PIP 2 , steady state PIP 2 levels in the cellular membrane are dynamically balanced by the activities of specific phosphoinositide kinases and specific lipid phosphatases. These kinases and phosphatases targeting PIP 2 in cells are likely regulated to control the PIP 2 level. For example, PIPKI␥ and synaptojanin 1 (5-phosphatase) antagonize each other in determining PIP 2 levels and the subsequent recruitment of clathrin coats at the synaptic membrane (13). 
Alterations in the activities of these kinases and phosphates inevitably change PIP 2 levels, and subsequently PIP 2 -dependent cellular signals. Thus, blockage of PI4 kinase by wortmannin or phenylarsine oxide blocks the re-synthesis of PIP 2 and the reactivation of M/KCNQ currents (7,8). Expression of PIP 2 5-phosphatase depresses KCNQ2/Q3 currents, as expected for channels that need PIP 2 for their function (14,15). When membrane PIP 2 abundance is elevated by overexpression of PI(4)P5K, the channel activities of KCNQ2 and KCNQ2/Q3 are dramatically increased (14), a similar maneuver greatly blunts the extent of M/KCNQ current inhibition by G q/11 -coupled receptor stimulation (15,16). Recently, a phosphoinositide phosphatase linked to a transmembrane voltage-sensing domain homologous to the S1-S4 segments of voltage-gated channels was described in Ciona intestinalis (named Ci-VSP) (17). Ci-VSP is activated when the membrane potential is depolarized, which results in cleavage of membrane PIP 2 and inhibition of PIP 2 -dependent K ϩ currents (17,18). This is the first example showing that PIP 2 levels in the membrane can be modulated by a phosphoinositide-metabolizing enzyme in a manner similar to PLC-mediated cleavage of PIP 2 , namely a fast breakdown of PIP 2 driven by a single event of either an activation of PLC or membrane depolarization. In this study, we describe a novel mechanism of membrane PIP 2 modulation. The membrane depolarization elevates membrane PIP 2 levels and enhances PIP 2 -dependent KCNQ2/Q3 currents expressed in Xenopus oocytes. The depolarization-induced elevation of PIP 2 levels is a result of increased activity of PI4 kinase. Only cells with negligible leaky current were used for experiments. Therefore, no leak subtraction was used. All experiments were carried out at room temperature (23-25°C). EXPERIMENTAL PROCEDURES Membrane PIP 2 Assay by TLC-The method of thin layer chromatography (TLC) was modified from [ 32 P]PIP 2 TLC analysis. Oocytes lipids were extracted with chloroform-methanol. The mobile phase for TLC was chloroform/methanol/4 N NH 4 OH (45:35:10, v/v/v) (2). Phospholipids were visualized with iodine vapor. PIP and PIP 2 were confirmed by mass spectrometry (MS). Synthesis of DNA Template and dsRNAs-A 634-bp PI4K DNA template was synthesized by PCR using a ProofStart PCR kit. The PCR template is the cDNA of PI4 kinase ␤ from oocytes of Xenopus laevis (ordered from Open Biosystems, clone no.: BC073706.1). See supplemental information for details. A 21-mer small interference RNA (siRNA) against PI4␤ was also used. See supplemental information for details. Chemicals-All chemicals were purchased from Sigma. Stock solutions were made in DMSO, stored at Ϫ20°C, and diluted in the appropriate solution immediately before use. The final concentration of DMSO was less than 0.1%. Data Analysis and Statistics-Currents were analyzed and fitted using Clampfit 9.2 (Axon Instrument) and Origin 7.5 (Originlab Corp.) software. Results are expressed as mean Ϯ S.E. Each experiment was replicated between 3 and 15ϫ. Differences were analyzed with Student's paired/unpaired t test or one-way ANOVA when appropriate, and were considered significant at p Ͻ 0.05. Membrane Depolarization Augments the Amplitude but Does Not Affect the Kinetics of KCNQ2/Q3 Currents Expressed in Xenopus Oocytes-The heterologous currents of KCNQ2 and KCNQ3 K ϩ channels are believed to be the major components of neuronal M currents. 
Earlier studies (8,19) demonstrate that KCNQ2/Q3 expressed in Xenopus oocytes has most of the characteristics of native neuronal M currents. However, we noticed that when expressed in Xenopus oocytes, the amplitudes of KCNQ2/Q3 currents activated by a depolarizing voltage always increased with time (Fig. 1A). Fig. 1A shows the increase of KCNQ2/Q3 currents. The arrow shows the timedependent increase in KCNQ2/Q3 currents activated at 0 mV. Fig. 1B shows the depolarization-dependent nature of the current increase. A long (30 min) depolarization (ϩ20 mV) led to a continuous increase in KCNQ2/Q3 currents, and the increase was gradually reversed when the membrane was repolarized to Ϫ80 mV (Fig. 1B). Multiple brief (10 s) depolarization pulses of ϩ20 mV from Ϫ80 mV demonstrated the time course of reversion of the increased currents, which can be fitted by a single exponential decay (dotted line) with a time constant of 7.5 Ϯ 0.4 min (n ϭ 6). Similarly, the time course of the depolarizationinduced current potentiation can be fitted nicely by a single exponential growth with a time constant of 10.3 Ϯ 0.5 min (n ϭ 7). The depolarization-induced increase could be fully reversed by the repolarization, and the following depolarization induced the same increase as the first depolarization (time constant is 9.8 Ϯ 0.6 min, n ϭ 6, Fig. 1B). The depolarization-induced potentiation of KCNQ2/Q3 currents was clearly voltage-dependent (Fig. 1C). The voltage that produced a half-maximal increase (V 1/2 ) was Ϫ26.1 Ϯ 0.5 mV (n ϭ 5-14). Depolarization did not affect the kinetics of KCNQ2/Q3 currents. The activation and deactivation time constants of KCNQ2/Q3 currents measured either before or after the currents had been increased by the depolarization were not significantly different (Fig. 1D). Similarly, the conductance-voltage relationship of KCNQ2/Q3 activation was also not affected (Fig. 1E). Depolarization Induced Larger Potentiation of Homomeric KCNQ2 Currents-Homomeric KCNQ2 currents were also sensitive to depolarization. Actually, KCNQ2 currents were increased to a larger extent by depolarization than KCNQ2/Q3 currents (Fig. 1F). However, the voltage dependence of the induced increases was similar for both KCNQ2 and KCNQ2/Q3 currents (V 1/2 is Ϫ29.1 Ϯ 2.4 mV for KCNQ2 and Ϫ26.1 Ϯ 0.5 mV for KCNQ2/ Q3). Similarly, the kinetics of KCNQ currents were not affected (data not shown). We were not able to see measurable homomeric KCNQ3 currents in Xenopus oocytes. External High Potassium Depolarizes the Membrane Potential and Increases KCNQ2/Q3 Currents-Increased external K ϩ is known to increase the conductance of some K ϩ channels (e.g. Kv2.1, (20)). To exclude the possibility that the depolarization-induced potentiation of KCNQ2/Q3 currents is due to an increased outflux of K ϩ and an increased concentration of external K ϩ , an external solution with elevated K ϩ (10 mM, ND96-10K) was used to see if this solution would exclude the depolarization-induced increase. In the presence of ND96-10K, a 15-min depolarization to 0 mV induced similar potentiation of KCNQ2/Q3 currents as was observed in the presence of ND96 solution (Fig. 2, A and B). We next tested the effect of high (96 mM) external K ϩ solution (ND96K). The membrane potential in the presence of ND96 was around Ϫ50 mV, and that in the presence of ND96K was around 0 mV (data not shown). Incubation with ND96K for 15 min led to potentiation of KCNQ2/Q3 currents (Fig. 2C). 
The average fold increase induced by ND96K incubation was similar to that induced by the depolarization at 0 mV (183 Ϯ 14% versus 205 Ϯ 6%, for ND96K and 0 mV, respectively) (Fig. 2D). Similar to voltage-clamp depolarization, high K ϩ -induced depolarization did not affect the activation properties of The dashed line shows the zero current level. B, membrane potential-dependence of KCNQ2/Q3 current amplitudes. KCNQ2/Q3 currents were increased by a continuous depolarization (ϩ20 mV) with the time constant shown; the increased currents recovered when the membrane was repolarized (Ϫ80 mV) with the time constant shown. Brief depolarization pulses (ϩ20 mV, 10 s) during the repolarization were applied to assess the changes of the current amplitudes. C, voltage-dependence of the depolarization-induced increase of KCNQ2/Q3 currents. The depolarization-induced current increases were assessed by the voltage protocol shown. I1 is the control current amplitude; I2 is the current amplitude after a 15 min conditioning voltages at different level. The ratio of I2 and I1 was plotted against the conditioning potentials. The data were fitted with the Boltzmann function, and V 1/2 is Ϫ26.1 Ϯ 0.5 mV. D, depolarization did not affect the activation and deactivation of KCNQ2/Q3 currents. Time constants of activation and deactivation of KCNQ2/Q3 currents before and after a 15-min depolarization (0 mV) were compared. The upper panel shows the voltage protocol used, and the activation and deactivation of KCNQ2/Q3 currents elicited by I1 and I2 are summarized and compared in the lower panel. E, conductance-voltage relationship (G-V) of KCNQ2/Q3 currents before (open squares) and after a 15-min depolarization (filled squares). The whole cell conductance was measured from the tail currents at Ϫ60 mV. Conductance at each depolarizing voltage was normalized to the G max . The data were fitted with the Boltzmann function, and V 1/2 is Ϫ28.7 Ϯ 0.5 mV and Ϫ29.6 Ϯ 0.4 mV, respectively, for before and after the depolarization (n ϭ 6). F, summary of fold increases for KCNQ currents induced by the depolarization. **, p Ͻ 0.01. KCNQ2/Q3 currents (Fig. 2E). The above results suggest that depolarization per se, and not increasing K ϩ outflux, contributes to the observed potentiation of KCNQ2/Q3 currents. Depolarization Increases KCNQ2/Q3 Currents through Increasing Membrane PIP 2 Levels-The membrane PIP2 is an essential and sufficient factor for KCNQ2/Q3 function (8). It has been shown that the resting membrane PIP 2 concentration is not at a saturating concentration for KCNQ2/Q3 activity, less so for KCNQ2 activity (14). These facts and our observation that the depolarization increased KCNQ2 currents more than KCNQ2/Q3 currents (Fig. 1F), led us to speculate that the depolarization-induced enhancement of KCNQ2/Q3 currents could be the result of increased membrane PIP 2 level. Two strategies were used to test this hypothesis. First, we utilized the recently described voltage sensor-containing phosphatase (Ci-VSP) (17). Ci-VSP is a membrane voltage (depolarization) activated phosphoinositides phosphatase that can dephosphorylate PIP 2 and thus inhibit the function of PIP 2 -dependent channels including KCNQ2/Q3 (17). If the increased activity of KCNQ2/Q3 we observed here is indeed through PIP 2 then the same depolarization-dependent activation of Ci-VSP should antagonize the increase. 
Co-expression of Ci-VSP with KCNQ2/Q3 made the channel currents susceptible to depolarization-induced inhibition; the activated KCNQ2/Q3 currents quickly declined when the membrane was depolarized to more positive than ϩ40 mV (Fig. 3A). A protocol was designed to see both the depolarization-induced potentiation and inhibition of KCNQ2/Q3 currents in Ci-VSP-expressing oocytes. In this case, the membrane was depolarized from Ϫ80 mV to Ϫ10 mV and then was held at Ϫ10 mV for 10 min, but interrupted by three short (800 ms) steps of further depolarization to ϩ40 mV. The depolarization to Ϫ10 mV is positive enough to activate the potentiation process (Fig. 1C), but negative enough to keep the Ci-VSP inactive (supplemental Fig. S1B); a short (800 ms) further depolarization to ϩ40 mV is positive enough to activate Ci-VSP, but is too short to have a significant effect on the potentiation process (Fig. 1B). Depolarization to Ϫ10 mV gradually increased KCNQ2/Q3 current, and this increase was clearly antagonized by the Ci-VSP activated at ϩ40 mV, manifested by an abrupt and significant reduction of the current (Fig. 3B). It seems that activation of Ci-VSP interrupted but did not cancel the final potentiation capacity of the depolarization on KCNQ2/Q3 currents. This was confirmed by the results shown in Fig. 3C. In this experiment, depolarization to ϩ40 mV (activate KCNQ2/Q3 and Ci-VSP) immediately inhibited the activated KCNQ2/Q3 currents almost completely. The inhibition persisted during the sustained depolarization at ϩ40 mV. When Ci-VSP was inactivated by changing the potential to 0 mV, the currents recovered rapidly and actually to a higher level, which was unexpected since the membrane potential was now 0 mV instead of the original ϩ40 mV. When the recovered current was enlarged, assuming the potential was ϩ40 mV instead of 0 mV (driving force is the only difference in this range of membrane potential, Fig. 1E), it reached the level (light dotted line, Fig. 3C) that would be expected for KCNQ2/Q3 currents after being exposed to ϩ40 mV for 10 min in the absence of Ci-VSP action (dotted line) (Fig. 3D). The second strategy we used was to measure phosphoinositide levels directly by using the TLC method (see "Experimental FIGURE 2. External high potassium increases KCNQ2/Q3 currents. A, external K ϩ increased from 1 (ND96) to 10 mM (ND96 -10K) did not affect the depolarization-induced potentiation of KCNQ2/Q3 currents. Black and gray lines indicate the current traces before and after a 15-min depolarization, respectively. B, summary data for A. C, incubation with high external K ϩ solution (ND96K) increased KCNQ2/Q3 currents. D, comparison of KCNQ2/Q3 current potentiation by depolarization (0 mV, 15 min) and high K ϩ solution incubation (15 min). E, G-V curves for KCNQ2/Q3 channels before and after a15-min incubation with high K ϩ solution incubation. The data were fitted with the Boltzmann function. **, p Ͻ 0.01. We also tested the effect of the depolarization on two members of inwardly rectifying K ϩ channels (Kir), Kir2.1 and Kir2.3. These two channels have been well characterized in regard to their modulation by PIP 2 (22). The membrane depolarization increased the activity of both channels (Fig. 4). Furthermore, the depolarization increased Kir2.3 currents more than it did to Kir2.1 currents and with a slower time course, consistent with the fact that Kir2.3 has a lower apparent affinity than Kir2.1 with PIP 2 (22). 
A recent study demonstrated that hypertonic stress increases PIP 2 levels by activating PIP5KI␤ (23). We tested if preincubating the oocytes with hypertonic solution would blunt the depolarization-induced KCNQ2/Q3 current increase. Indeed, preincubation significantly reduced the effect of depolarization, whereas the hypertonic solution on its own increased KCNQ2/Q3 currents (Fig. 5). Overall, the above results suggest that the depolarization increases KCNQ currents by elevating PIP 2 levels in the oocytes. Depolarization Increases PIP 2 Levels through Increased Activity of PI4 Kinase-If the depolarizationinduced enhancement of KCNQ2/ Q3 currents was due to an increased synthesis of PIP 2 , then blocking the synthesis of PIP 2 would be expected to prevent the depolarization-induce enhancement of KCNQ2/Q3 currents. For this, wortmannin, a blocker of PI4 kinase was used to test this possibility. Wortmannin, when applied at 10 M in an incubation solution for 10 min, reduced KCNQ2/Q3 currents by 33.8 Ϯ 1% . Depolarization increases KCNQ2/Q3 currents through increasing membrane PIP 2 level. A, co-expressed Ci-VSP was activated at depolarization potentials and induced an inhibition of KCNQ2/Q3 currents. Aa shows the current traces elicited by depolarization from Ϫ20 mV to ϩ40 mV. Ab shows the comparison of the currents elicited by ϩ40 mV depolarization with (gray) or without (black) Ci-VSP coexpression. B, activation of Ci-VSP antagonized the depolarization-induced potentiation of KCNQ2/Q3 current. Lesser depolarization to Ϫ10 mV was used to activate KCNQ2/Q3 currents only, whereas larger depolarization to ϩ40 mV was used to activate Ci-VSP. C, activation of Ci-VSP interrupted but did not cancel the depolarizationinduced potentiation of KCNQ2/Q3 currents. The black dotted line was an average representative current trace seen for KCNQ2/Q3 alone under the depolarization (ϩ40 mV). The solid line was the KCNQ2/Q3 currents from oocytes co-expressing Ci-VSP, recorded using the protocol shown above (solid line). The gray dotted line presented KCNQ2/3 currents computed as if the membrane were depolarized to ϩ40 mV rather than 0 mV where the currents were measured. D, summary data of fold-current increases from KCNQ2/Q3 only, and KCNQ2/ Q3ϩCi-VSP oocytes. The currents were measured 10 min after holding the membrane at ϩ40 mV. E, high K ϩ incubation and depolarization increased PIP and PIP 2 levels. Cellular PIP and PIP 2 levels were measured using thin layer chromatography (TLC). Oocytes were incubated either in ND96 (control) or in ND96K for 15 min (upper panel), or were held either at Ϫ80 mV (control) or 0 mV (depolarization) for 15 min (lower panel). Triplicate (upper panel) or duplicate samples (lower panel) from a single experiment are shown. F, summary data for E. The dots in E were quantified and normalized to the control level. Data are summary of three independent experiments. **, p Ͻ 0.01. (n ϭ 7) (Fig. 6A), indicating an active endogenous phosphoinositide metabolism involving PI4 kinase. Wortmannin at 1 M did not affect KCNQ2/Q3 currents when applied for 10 min, but induced a 34 Ϯ 2% (n ϭ 5) inhibition when applied for 30 min (Fig. 6A). The time-dependent effect of wortmannin indicated a low potency in inhibiting PI4 kinase at a low concentration, as described (24). When applied during the period of depolarization, both concentrations of wortmannin greatly reduced the depolarization-induced enhancement of KCNQ2/ Q3 currents (Fig. 6, B and C). 
We also tested the effect of suppressing the expression of PI4 kinase on the depolarization-induced potentiation of KCNQ2/Q3 currents. Double-stranded RNA (dsRNA) and siRNA against the endogenous PI4 kinase of Xenopus oocytes (PI4Kβ) were used to decrease levels of the enzyme. The 634-base-pair (corresponding to bases 1038-1671) dsRNA was synthesized from a cDNA clone of PI4Kβ isolated from Xenopus oocytes (ordered from Open Biosystems, GenBank accession no. BC073760). Injection of the dsRNA reduced the basal expression level and abolished the depolarization-induced increase in PI4 kinase expression (Fig. 6F). In agreement with these results, the dsRNA completely abolished the depolarization-induced potentiation of KCNQ2/Q3 currents (Fig. 6D). Furthermore, the dsRNA also prevented the depolarization-induced membrane PIP2 increase (Fig. 6E). Similar results were obtained when siRNA was used (data not shown).

Physiological Stimulation Mimicking Action Potentials Frequency-Dependently Increases KCNQ2/Q3 Currents

We tested whether a more "physiological" pattern of membrane potential activity would also modulate KCNQ2/Q3 currents. For this, we used a voltage clamp protocol mimicking neuronal action potentials (Fig. 7). Four groups of oocytes were compared: (1) oocytes clamped at 0 mV for 10 min; (2) oocytes clamped at −70 mV for 10 min; (3) oocytes given the voltage clamp protocol shown in Fig. 7 every 100 ms (10 Hz) from a holding potential of −70 mV for 10 min; (4) oocytes given the voltage clamp protocol shown in Fig. 7 every 50 ms (20 Hz) from a holding potential of −70 mV for 10 min. Fold-current increases after 10 min of each of the above voltage clamp protocols were compared. Clearly, the low frequency of "physiological" membrane potential activity (10 Hz) already increased KCNQ2/Q3 currents compared with quiescent cells (0 Hz, held at −70 mV). Stimulation at the higher frequency (20 Hz) increased KCNQ2/Q3 currents further and significantly.

DISCUSSION

The present study demonstrates that membrane depolarization increases cellular PIP2 levels through increased PIP2 synthesis mediated by PI4 kinase. To our knowledge, this is a novel finding that broadens our understanding of the roles that membrane potential may play in cellular signaling. With increasing recognition of the importance of PIP2 in cell function, the effect of membrane depolarization on PIP2 metabolism is destined to have important physiological implications. KCNQ channels were found to have a highly variable maximal open probability (25,26), which was explained by a differential apparent affinity among the channels for PIP2 (14). Consistent with highly differential PIP2 affinities for KCNQ2 and KCNQ3, overexpression or overactivation of PI(4)P5 kinase greatly increased the amplitude of whole-cell KCNQ2 currents, but not of KCNQ3 currents, and KCNQ2/Q3 currents were increased modestly (14,15). This suggests that KCNQ2 channels are normally only marginally saturated by PIP2, whereas KCNQ3 channels are nearly fully saturated and KCNQ2/Q3 channels are in between, analogous to the differential affinity for PIP2 that has been proposed for Kir channels (22,27,28).

[Figure legend fragment] Brief hyperpolarizations to −80 mV for 500 ms were separated by long depolarizations to +40 mV. Current traces above and below the dotted zero-current lines are the currents at +40 mV and −80 mV, respectively. C, summary data for fold-current increase at −80 mV after 10 min using the protocol shown. **, p < 0.01.

FIGURE 5. Hypertonic stress reduces the depolarization-induced potentiation of KCNQ2/Q3 currents. A, oocytes were incubated in either normal (NT) or hypertonic (HT, ND96 plus 96 mM NaCl) solution for 10 min, and then were depolarized to 0 mV for 10 min. B, summary data for: control (current amplitudes at the beginning of the 10-min depolarization at 0 mV) after 10 min of incubation in either normal (NT) or hypertonic (HT) solution; the current increase produced by 10 min of depolarization at 0 mV (0 mV); and the total current increase produced by 10 min of HT incubation plus 10 min of 0 mV depolarization (control + 0 mV), compared with the NT control. **, p < 0.01.

In line with these observations, we found that membrane depolarization increased KCNQ2 currents more than KCNQ2/Q3 currents. The KCNQ2(H328C)/Q3 mutant was shown to be less sensitive to PIP2 (8); accordingly, KCNQ2(H328C)/Q3 currents tended to be increased to a greater degree by depolarization than KCNQ2/Q3 currents (supplemental Fig. S3). Similarly, depolarization increased Kir2.3 currents to a greater extent than Kir2.1 currents (Fig. 4C); our previous study demonstrated that Kir2.1 has a higher apparent affinity for PIP2 than Kir2.3 (22). Thus, the depolarization-induced potentiation of KCNQ channel currents was most likely due to an increased membrane PIP2 level. This was confirmed by direct measurement of PIP2 in the cells. The elevated PIP2 is most likely the result of increased activity of PI4 kinase, since wortmannin (a PI4 kinase inhibitor) totally abolished the depolarization-induced potentiation of KCNQ2/Q3 currents. It is not yet clear from the present study what the voltage-sensing mechanism is for the observed increase of PI4 kinase activity and PIP2 level. We studied the roles that voltage-dependent Ca2+ channels and Ca2+ may play in this regard. L-type Ca2+ channels are believed to be the voltage-sensing mechanism for depolarization-induced activation of PLC in skeletal muscle cells (29,30). However, in the absence of extracellular Ca2+, depolarization induced a similar enhancement of KCNQ2/Q3 currents (supplemental Fig. S4). Apart from the well-known effects of membrane potential on the function of voltage-dependent ion channels, evidence is accumulating that membrane voltage is an important regulator of the function of non-conventional voltage-sensing proteins. A prototypical example is the voltage-dependent activation of the phosphoinositide-specific phosphatase Ci-VSP (17), which we used in this study. Another example is the modulation of G protein-coupled receptor (GPCR) signaling by membrane potential. It is believed that voltage-sensitive GPCR signaling uses mechanisms localized at the GPCR per se, or at the direct coupling interface between the GPCR and its immediate downstream effectors (31-33). Recently, two muscarinic receptors (M2R and M1R) were shown to have charge-movement-associated currents analogous to the gating currents of voltage-gated channels. These results indicate that GPCRs serve as sensors for both transmembrane potential and external chemical signals (34).

[Figure legend fragment] The initial current amplitudes immediately following depolarization to 0 mV were normalized. C, summary data for B. D, dsRNA for PI4Kβ abolished the ND96K-induced potentiation of KCNQ2/Q3 currents. **, p < 0.01. E, dsRNA for PI4Kβ abolished the depolarization (0 mV)-induced increase of membrane PIP2 levels.
F, Western blot of PI4Kβ for control, dsRNA-injected, ND96K-incubated, and dsRNA plus ND96K-incubated oocytes.

It is interesting to note that while the depolarization-induced modulation of GPCR signaling is graded, with no apparent threshold or upper limit (32), the voltage-dependent potentiation effect of depolarization on KCNQ2/Q3 currents resembles the voltage-dependent activation of ion channels (Fig. 1C). The present study presents an important novel mechanism for phosphoinositide metabolism that could have broad physiological implications. It is clear from the data shown in Fig. 7 that physiological stimulation such as action potentials could significantly modulate the function of PIP2-sensitive proteins, including but not solely ion channels, if a similar membrane-potential-mediated phosphoinositide metabolism system exists in the cells where these proteins reside. We examined dorsal root ganglion (DRG) neurons to see whether a similar depolarization-induced modulation of M/KCNQ currents exists in these cells. Instead, a depolarization-associated depression of M/KCNQ currents was observed, which is probably not linked to altered phosphoinositide metabolism.4 Nevertheless, membrane potential and the excitation of cells are surely important modulating factors for cellular phosphoinositides and their relevant functions, as exemplified in cardiac myocytes (35) and in neurons (36). Clearly, more work needs to be done to understand the detailed mechanisms and physiological significance of the voltage-dependent modulation of phosphoinositide metabolism.
6,239.8
2010-01-27T00:00:00.000
[ "Biology", "Chemistry" ]
A dynamic assessment of various non-Newtonian models for ternary hybrid nanomaterial involving partially ionized mechanism

The dynamics of fluids and coolants in automobiles are improved through the enhancement in heat energy achieved using ternary hybrid nanostructures. A ternary hybrid nanomaterial is obtained by suspending three types of nanoparticles (aluminum oxide, silicon dioxide and titanium dioxide) in a base fluid (ethylene glycol, EG). The prime aim of this investigation is a comparative study of thermal energy for two flow models, the Maxwell fluid and the Williamson fluid. The exploration is carried out for partially ionized fluid particles in the presence of the ternary hybrid nanomaterial over a cone. Heat transfer is modeled through a heat source and thermal radiation. Ordinary differential equations are obtained from the PDEs using variable transformations, and the numerical results are computed by implementing the finite element method. The flow of fluid particles is enhanced for higher values of the Hall and ion slip parameters. Both the thermal and the flow performance of the Williamson fluid are better than those of the Maxwell fluid. The production of heat energy is boosted by the heat source parameter.

The latest revolution in manufacturing and scientific technologies has made possible several things that were previously inconceivable; the development of nanoparticles is one result of this modern progress in nanotechnology. Technologies based on thermal design and the development of nanoscale solid particles have initiated approaches to synthesizing fluids containing nano-metallic particles, called nanofluids. A tri-hybrid nanomaterial has higher thermal conductivity than a hybrid nanofluid or plain nanoparticles, and such materials are used in several engineering applications: engineering processes, cancer therapy, hair care products, electrical insulators, green tires, dental products, fuel cells, solar cells, optical chemical sensors, biosensors and automotive parts. Additionally, the behavior of electrically conducting fluids is entirely distinct from that of conducting liquids in the absence of a magnetic field. Magnetohydrodynamic (MHD) flow is the flow of an electrically conducting fluid in the presence of a magnetic field; modeling it requires Ohm's law and Maxwell's equations along with the conservation laws. Applications of MHD include nanofluid pumping, magnetic drug targeting, pumping of seawater, cancer tumor treatment and fluid pumping. Studies in this area include both theoretical and experimental work on nanofluids, hybrid nanoparticles and tri-hybrid nanoparticles; we discuss the related work here. Nazir et al. 1 ... variable thermal conductivity using Soret and Dufour effects in hydro-magnetized flow. Tripathi and Kumari 19 estimated generalized heat transfer characterizations in a thin film using thermocapillary convection. Kumari and Tripathi 20 discussed features of a self-rewetting liquid under the action of gravitational force using the concept of Marangoni convection. Kumar et al. 21 studied the role of a magnetic field on Brinkman-type nanoparticles in the presence of convective flow with chemical reaction. Kumar et al. 22 studied characterizations of thermal transport using the influences of Soret effects and a second-order chemical reaction over a heated vertical plate with a porous medium and thermal radiation. Kumar et al.
23 discussed the thermal aspects of the magnetic parameter for carbon nanotube (CNT) nanoparticles in the presence of viscous dissipation. Kumar et al. 24 estimated numerical consequences of thermal radiation in heat transfer using Williamson rheology with Joule heating. Kumar et al. 25 analyzed the mathematical modeling of a non-Newtonian liquid using the concept of a magnetic dipole and thermal radiation in the presence of activation energy. Kumar et al. 26 studied entropy generation in a Casson liquid considering the influence of a magnetic field and activation energy. Kumar et al. 27 estimated features of entropy generation and magnetic field for nanoparticles past a rotating disk, including the concept of activation energy 26.

To our knowledge, there are no investigations of the tri-hybridity of nanoparticles in ethylene glycol involving the two fluid models, Maxwell and Williamson, past a cone considering ion slip and Hall forces. Additionally, thermal aspects regarding heat generation, heat absorption and thermal radiation are added to the heat energy equation. A comparative investigation of the Maxwell and Williamson fluids is carried out, and a comparison among pure fluid, nanofluid, hybrid nanomaterial and tri-hybrid nanomaterial is also made. The present analysis contains five sections. The first section is the literature review. The mathematical work is presented in section two, and section three covers the numerical work. Section four presents the results and discussion, while the last section gives the conclusions.

Descriptions regarding modeling and novelty

Thermal behavior in 3D Williamson and Maxwell liquids is visualized past a cone with the suspension of a tri-hybrid nanomaterial in the base liquid, ethylene glycol. The fluid motion is produced by the angular velocity of the cone. Heat energy is modeled using the thermal features of thermal radiation, Joule heating and a heat source. Moreover, the fluid is considered viscous over the rotating cone. Figure 1 depicts the flow configuration, and the thermal properties of the tri-hybrid nanomaterial are given in Table 1. The conservation laws for momentum and energy for non-Newtonian fluid flow follow 28,30.

Numerical technique

A robust numerical approach based on the FEM is adopted to solve the ODEs together with the boundary conditions; the methodology is summarized in Fig. 2. The FEM is used to obtain solutions of a wide range of CFD problems. The finite element code for this work was implemented in MAPLE 18; Maple is a numerical and symbolic computing environment that covers numerous areas such as numerical analysis, data processing, symbolic mathematics and visualization. Several advantages of the FEM are listed below.

• The FEM can handle the various kinds of boundary conditions that arise in the modeling of different physical systems in engineering;
• It can simulate various types of complex geometries;
• It discretizes derivatives very well;
• It needs a low investment of resources and time in handling problems;
• Several physical problems arising from recent developments in applied sciences are solved by the FEM.

Step I: Domain discretization. The first step is the discretization of the problem domain. The domain is broken into small elements, up to 300 elements; 300 elements are enough to obtain a converged solution for the current analysis, as shown in Table 2.
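The remaining steps (shape functions, weak formulation, element formulation, assembly, and solution) are described next. As a generic preview of how Steps I-VI fit together, the minimal sketch below solves a 1D model Poisson problem with 300 linear elements; it illustrates only the FEM mechanics under simplified assumptions and is not the paper's coupled ODE system, which was solved in MAPLE 18.

```python
# A minimal generic sketch of Steps I-VI on a 1D model problem,
# -u''(x) = 1 on [0, 1] with u(0) = u(1) = 0, using 300 linear elements.
# This is an illustration of the method, not the paper's equations.
import numpy as np

def fem_1d(n_elements=300):
    n_nodes = n_elements + 1
    h = 1.0 / n_elements                                        # Step I: discretize
    K = np.zeros((n_nodes, n_nodes))                            # global stiffness
    F = np.zeros(n_nodes)                                       # global load
    k_local = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # Step II: linear
    f_local = (h / 2.0) * np.array([1.0, 1.0])                  # shape functions
    for e in range(n_elements):                                 # Steps III-V: weak
        K[e:e + 2, e:e + 2] += k_local                          # form, element
        F[e:e + 2] += f_local                                   # matrices, assembly
    K[0, :] = 0.0; K[0, 0] = 1.0; F[0] = 0.0                    # boundary conditions
    K[-1, :] = 0.0; K[-1, -1] = 1.0; F[-1] = 0.0
    return np.linalg.solve(K, F)                                # Step VI: solve

u = fem_1d()
x = np.linspace(0.0, 1.0, u.size)
# Nodal values match the exact solution x(1 - x)/2 to well within 1e-5.
assert np.max(np.abs(u - x * (1.0 - x) / 2.0)) < 1e-5
```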
A system of ODEs in this form is called the strong form, whereas the weak form is obtained via the residual method; the residuals are derived as in Eq. (15) [30,31].

Step II: Selection of shape functions. Shape functions play a significant role in obtaining the approximate solution of the current analysis. Various types of shape functions are used in finite element procedures; here, linear shape functions are used, defined as piecewise linear polynomials over each element.

Step III: Weak formulation. Equations (12)-(14), together with the boundary conditions, constitute the strong form. To obtain the approximate solution, the weak forms are needed: all terms are collected on one side and integrated over the 300 elements listed in Table 2.

Step IV: Finite element formulation. In this step, the element stiffness matrices of the current problem are obtained, and the global stiffness matrices are then formed over each element.

Step V: Assembly process. The assembly process is an integral part of the finite element method; the global stiffness matrices are formulated using the assembly approach.

Step VI: Solution of algebraic equations. In the final step, the algebraic equations are solved until the prescribed tolerance (10^-5) is met.

Validation of the numerical study. Table 3 is prepared to show the validation of the numerical study against published work. Good agreement is observed between the published work 19 and the present study when the effects of the tri-hybrid nanoparticles and the non-Newtonian behavior are switched off. These comparative simulations are recorded in Table 3.

Grid-independence investigation. The finite element code was written in MAPLE 18 and verified against already published works. The numerical results of the grid-independence analysis are recorded in Table 2, which shows the convergence investigation for 300 elements. Results are recorded for increasing numbers of elements at the midpoint of each temperature and velocity profile. Hence, the solution of the problem converges with 300 elements, and the graphical and numerical results are simulated using 300 elements. The comparative study is presented in Table 3.

Outcomes and discussion

Figure 4 is plotted to determine the impact of the magnetic parameter on the velocity field in the y- and x-directions. From this figure, it is seen that the motion of the fluid particles is induced by the wall velocity. However, the Lorentz force creates resistance among the fluid particles when the magnetic parameter is applied: a Lorentz force is produced by the appearance of the magnetic field, and the flow of fluid particles slows down because an opposing Lorentz force acts on it. The magnetic parameter is formulated using the concept of the Lorentz force in the momentum equations, and the magnetic field is applied along the z-direction of the fluid motion. Due to the opposite directions of the flow and the magnetic force, the flow slows down, and the thickness of the momentum layers decreases when the magnetic parameter is increased. Moreover, the flow in the absence of the magnetic field is higher than the flow in its presence. Figures 5, 6, 7 and 8 are sketched to show the influence of the ion slip and Hall parameters on the motion of the fluid particles in the y- and x-directions. The acceleration is enhanced when the ion slip and Hall parameters are increased.
This is because the ion slip and Hall parameters appear in the generalized Ohm's law; therefore, the velocity of the fluid particles is increased. Moreover, the thickness of the momentum layers grows accordingly.

Table 3. Validation of the numerical simulations in terms of skin friction coefficients and heat transfer rate against published work 19 when ph1 = ph2 = ph3 = 0, βe = 0, Pr = 0.7, β = 0 and 1 = 0.

Dynamic analysis regarding heat energy. Figures 9 and 10 are plotted to observe the effect of the thermal radiation number and the heat source number on heat energy; the comparative simulations between the Williamson liquid and the Maxwell fluid are carried out with the ternary hybrid nanoparticles inserted. Figure 11 captures a comparison among fluid, nanoparticles, nanofluid and hybrid nanofluid. The role of the radiation parameter on the temperature curves is shown in Fig. 9. From this figure, it is found that the production of heat energy is reduced when the radiation parameter is enhanced. This is because heat energy is transferred away from the wall by thermal radiation; therefore, the amount of thermal energy is reduced. Moreover, the production of thermal energy for the Williamson fluid is higher than that for the Maxwell fluid. Physically, heat energy moves away from the surface of the rotating cone due to thermal radiation, so heat energy is reduced when the thermal radiation number is increased: an inverse proportional relation is observed for the impact of thermal radiation. An increment in the thermal radiation number brings a decline in heat energy, and the thermal boundary layer thickness decreases when the thermal radiation number is increased. Figure 10 shows the influence of the heat source parameter on heat energy. Heat energy is boosted when the heat source parameter is increased; this is because a heat source is implemented at the wall via the surface of the sheet, and heat energy can easily be managed using the heat source parameter. Two kinds of impacts of Ht are discussed, based on the heat source and heat generation phenomena: positive numerical values of Ht are implemented for heat generation and negative numerical values of Ht are implemented for heat absorption. The thickness of the thermal layers can be managed by varying the heat source parameter. Again, the production of thermal energy for the Williamson fluid is higher than that for the Maxwell fluid. Figure 11 illustrates the thermal enhancement obtained by inserting fluid, hybrid nanofluid, nanofluid and tri-hybrid nanofluid: the solid curves are for the tri-hybrid nanofluid, the dotted curves for the hybrid nanoparticles, and the dashed and dash-dotted curves for the fluid and the nanofluid. From this figure, it is found that tri-hybrid nanoparticles are significant for obtaining the maximum amount of heat energy, exceeding hybrid nanoparticles, fluid and nanofluid.

Dynamic analysis regarding the Nusselt number, divergent velocities and heat transfer rate. The corresponding results are recorded in Table 5. It is estimated that the heat transfer rate is enhanced with variation in the Prandtl number, but the heat transfer rate declines against change in the magnetic parameter.
Additionally, the heat transfer rate for the Williamson liquid is higher than that for the Maxwell fluid.
2,896.6
2022-06-19T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Automated data extraction from historical city directories: The rise and fall of mid-century gas stations in Providence, RI

The location of defunct environmentally hazardous businesses like gas stations has many implications for modern American cities. To track down these locations, we present the directoreadr code (github.com/brown-ccv/directoreadr). Using scans of Polk city directories from Providence, RI, directoreadr extracts and parses business location data with a high degree of accuracy. The image processing pipeline ran without any human input for 94.4% of the pages we examined; we processed the remaining 5.6% with some human input. Through hand-checking a sample of three years, we estimate that ~94.6% of historical gas stations are correctly identified and located, with historical street changes and non-standard address formats being the main drivers of errors. As an example use, we look at gas stations, finding that they were most numerous at the start of the study period in 1936 and began a sharp and steady decline around 1950. We are making the dataset produced by directoreadr publicly available. We hope it will be used to explore a range of important questions about socioeconomic patterns in Providence and cities like it during the transformations of the mid-1900s.

Background

Until the passage of the Resource Conservation and Recovery Act (RCRA) of 1976, waste produced during commercial and industrial activities in the United States was largely unregulated [1]. It took another decade until programs like the Environmental Protection Agency's (EPA) Toxic Release Inventory [2] were established for keeping track of emissions from the largest and most hazardous facilities. These regulatory dynamics, combined with businesses' tendency to constantly churn in and out of operation over time, have created an urban environment covered with the relic sites and toxic legacies of past economic activity [3]. This is a serious concern both for community members worried about their health [4] and for regulators and environmental professionals interested in locating and remediating contaminated sites. Unfortunately, many relic sites of historical businesses with modern environmental implications remain untracked and unknown, with nearby residents unaware. As a result, there is a great need for accurate historical business location data dating back to before regulatory agencies began tracking these issues. To address this problem, we present directoreadr, a new code for extracting historical business data from scans of city directories. Previous work focused on developing a software pipeline for processing historical directories specific to industrial manufacturing [5]. The result was the georeg code (https://github.com/brown-ccv/georeg), which was able to process digitized industrial directories to produce a near-comprehensive dataset of industrial site locations and activities in Rhode Island for the years 1953-2012. This has been used productively for a range of scientific and community activities [6]. However, while industrial production is a major source of urban pollution, it represents only a selection of economic activities that leave behind on-site contaminants. Gas stations are another such commercial activity of concern. According to the EPA, underground gas and oil storage tanks at these sites are a leading source of groundwater contamination [7].
And while the federal government has been monitoring underground storage tanks (USTs) since the mid-1980s through RCRA, older USTs are added to lists only as they are discovered. Therefore, in the current paper we develop an approach to collecting the historical location of commercial sites from city directories. Since the 1930s, the Polk Corporation has maintained detailed city directories for most American cities. Compiled annually, these books contain a comprehensive list of area businesses in the yellow pages. Because the structure of the data in the city directories was considerably different from the industrial registries, we developed a new code, directoreadr, instead of adapting the georeg code of [5]. We did use the same custom geocoder as [5], although the geocoding processing code was quite different. With the new algorithms in the directoreadr code, we are able to efficiently process data with a substantially higher degree of accuracy than [5]. To develop and test directoreadr, we have focused on city directories from Providence, Rhode Island. Using the scanned images of these directories, directoreadr is able to extract a company's name, address, and business type. These data are then geocoded to provide latitude and longitude. To show an example use of these data to examine environmentally hazardous sites, we focus on gas stations, but the applications are not limited to tracking environmentally hazardous sites. These data can answer many important research questions across a range of topics that are of interest to many social science and environmental disciplines, from economics to ecology.

Data

For the purposes of this project, we have focused on the yellow pages business directory within the city directories. We examined 27 city directories from the city of Providence, RI, with dates from 1936 to 1990. Beginning in 1940, the city directories were produced by the Polk corporation, but the three city directories from before 1940 were produced by the Sampson and Murdock corporation. By extracting this detailed spatio-temporal business data, we allow for socioenvironmental analysis of changes in the land use of industrial sites, manufacturing zones, or other potentially hazardous areas, such as current and former gas station sites. Digitization was performed by the Internet Archive's office at the Boston Public Library, and the physical books were supplied by both the Boston Public Library and the Providence Public Library. The Internet Archive uses a standardized digitization process, delivering 300 dpi 8-bit color images with lossy compression in the wavelet-based JPEG 2000 format. We convert these files to grayscale and do not use color information. The raw scans are available online from the Internet Archive, and we provide the links in the directoreadr GitHub (github.com/brown-ccv/directoreadr).

Methods

As input, directoreadr takes a series of page images; as output, directoreadr produces a database of businesses and locations, along with error files containing dropped addresses, geocoder errors, and addresses in another city. The pipeline consists of a series of discrete processing steps: grayscale thresholding, ad removal, margin cropping, column chopping, line chopping, Optical Character Recognition (OCR), header identification, entry concatenation, text cleaning, address parsing, street matching, and geocoding.
Image preprocessing

3.1.1 Grayscale thresholding. The original color images are read into directoreadr as 8-bit grayscale images with an integer pixel value ranging from 0 to 255, and the first step of the pipeline is to convert these images to a binary format, where each pixel value is either 0 or 1. This binarization step enables us to detect connected areas of black pixels, a core component of many of the computer vision algorithms we use. To do this, we use a fixed threshold across the entire page, which directoreadr estimates from the page's pixel value distribution. Incorrect grayscale thresholds are one of the largest sources of error in directoreadr, and better grayscale thresholding would provide more accurate results.

Ad removal. After producing the binary images, we remove the advertisements along the border of the pages, as well as lines and decorations within and between the columns of text. For the sake of simplicity, we refer to all page features to be removed as "ads." To identify and separate the ads from the text, we leverage two different geometric characteristics: first, the ads tend to be outlined by simple shapes like upright rectangles; second, the ads tend to be much larger in extent than the characters of the text. Fig 1 shows an example of ad removal. Using the OpenCV contours method, we identify regions of connected pixels. For each pixel contour, we calculate both the perimeter of the contour and the perimeter of the bounding box, the smallest possible horizontal rectangle circumscribing the contour. In most cases, ads can be separated from text simply by looking at the perimeter of the bounding box. However, in a few cases, many characters of text blur into each other, and the perimeter of the bounding box can be as large as it is for the smallest ads. To address this, we multiply the perimeter of the bounding box by the ratio between the perimeter of the bounding box and the perimeter of the contour:

score = P_box × (P_box / P_contour)

Because text has a more complex shape than the ads, the ratio of bounding box perimeter to contour perimeter is much lower for text than for ads, and it helps separate the text from the ads. Once we have identified the contours around the ads, we remove any black pixels within the bounding box around those contours. In most cases, the ads around the edge of the page are surrounded by horizontal rectangles. When they aren't, the next step is to identify where the columns of ads are and remove all black pixels there. Even then, in a few cases, ad removal fails, and the image has to be cropped by hand.

Margin cropping. Once the ads have been removed and replaced by whitespace, the columns of text in the center of the page are still surrounded by whitespace. To focus in on the text, the next step in the directoreadr pipeline is to remove the whitespace. To allow for specks, lines, and other noise on the page, we set a pixel threshold. Margin cropping is fairly straightforward and rarely creates problems.

Column chopping. Each page is set up with columns of text (usually three columns), and in order to preserve information about text location, we separate the text into the columns. To identify the column breaks, we sum up the number of black pixels in each vertical line of pixels in the image. Around the column breaks, there are dips in the number of black pixels. To identify the location of the dips, we set a pixel threshold and identify the vertical lines with fewer black pixels than the threshold value. Using the mean-shift algorithm, we cluster those vertical lines. Unlike traditional clustering algorithms, like k-means, which take a number of clusters as an input, mean-shift figures out the optimal number of clusters. As the cut point for column separation, we pick the right-most vertical line in each cluster. One of the key features of this algorithm is to err on the side of failure, throwing an error when the ad removal has performed poorly. The goal of this design is to allow for hand-chopping when it will meaningfully improve the results, and for all of the failure cases, we generated the columns through hand-chopping.
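A minimal sketch of this column-cut detection, assuming a binary image array with black (text) pixels equal to 1; the pixel threshold of 5 is illustrative, not directoreadr's actual value.

```python
# Sketch of column-break detection via pixel-column sums and mean-shift.
import numpy as np
from sklearn.cluster import MeanShift

def find_column_cuts(binary_img, pixel_threshold=5):
    col_sums = binary_img.sum(axis=0)           # black pixels per vertical line
    gap_xs = np.where(col_sums < pixel_threshold)[0]
    if gap_xs.size == 0:
        # Err on the side of failure so the page can be hand-chopped.
        raise ValueError("no column gaps found")
    # Mean-shift finds the number of gap clusters on its own.
    labels = MeanShift().fit_predict(gap_xs.reshape(-1, 1))
    # Use the right-most vertical line of each cluster as the cut point.
    return sorted(int(gap_xs[labels == k].max()) for k in np.unique(labels))
```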
Line chopping. Once we have the columns of text, we then chop the columns into individual lines of text. To identify the lines of text, we use a process similar to identifying the columns. We calculate the number of pixels in each horizontal line of pixels in the column. Then, we cluster the horizontal lines of pixels that fall below the pixel threshold, using mean-shift to identify the entry breaks. If there are large blocks of entries that don't separate, we then run the algorithm on them with a higher black pixel threshold. This higher threshold is typically necessary when the page image is warped or tilted. This process is highly robust and rarely produces errors unless there are more serious problems with the image.

Address processing

3.3.1 OCR. Entering this part of the pipeline, we have a directory of images where each image represents a single line from one of the columns on the page. To convert these images of text to strings of text, we use the Tesseract OCR package developed by Google [8]. OCR is not perfect, and it does produce some errors, so the downstream text parsing parts of directoreadr must account for these errors.

Header determination. The data in the city directories are grouped under headers that describe the type of business, and these headers must be identified. Depending on the year, the city directories identify headers using a number of different characteristics. Headers are typically indented, and they are sometimes in all caps. Often, headers have asterisks before them. Depending on the year, directoreadr selects from five different header determination algorithms, most of which center around how many pixels each line is indented by. Because some columns are tilted, we calculate a relative indentation compared to nearby lines. Our header detection algorithms relied on indentation and capitalization as the primary detection features, and we did not build a robust header algorithm for 1964, the one year in which headers were not indented and both the headers and the text were in all caps. As a result, most header identifications for 1964 are incorrect.

Entry concatenation. In many cases, entries in the columns of text are too long for one line and continue onto the next line. In all of these cases, the next line is indented, but not by as much as a header is. Using the indentation data, we concatenate the multi-line entries into single strings of text.

Text cleaning. Most of the raw entries just contain a business name and an address, but some of them contain additional information that must be removed, like a telephone number or a floor or room number. To clean these data, we used a complex series of pattern matching operations. In some cases, especially in older books, there were multiple addresses for a business in a single entry, and we split these lists based on the positions of commas and the word "and."
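A simplified stand-in for that splitting rule (the real cleaning uses a longer series of pattern matching operations):

```python
# Illustrative sketch: break an entry's address portion into candidate
# addresses on commas and the word "and". The regex is a simplified
# stand-in for directoreadr's cleaning rules.
import re

def split_addresses(address_text):
    parts = re.split(r",|\band\b", address_text)
    return [p.strip() for p in parts if p.strip()]

# e.g. split_addresses("269 Valley and 32 Broadway St")
#      -> ["269 Valley", "32 Broadway St"]
```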
3.3.5 Address parsing. We start by using regular expressions to search the entry's string of text for abbreviations like St., Av., Ct., Dr., Rd., Ave., and Ln. in either upper or lower case. If one of those abbreviations is detected in the string, the algorithm searches for a group of digits before the abbreviation. First, it classifies the string of text between the number and the abbreviation as the address, and then it classifies the text before the address number as the company name. If the abbreviation is not detected, the algorithm will still try to parse out the address by searching for the address number and classifying the string of text after the number as the address, then classifying the string preceding the number as the company name. This parsing algorithm is not perfect. For instance, it requires digits, not spelled-out numbers, for the address, and it does not work for addresses written as the corner between two streets instead of a numbered address. However, it is generalized enough to work well across many different formats because it is built on simple, consistent address components.

3.3.6 Street matching. Because a number of the streets contained OCR errors, we used fuzzy matching to produce true street names. We developed two lists of streets, a list of current streets and a list of historical streets. The historical street list was developed through hand examination of historical maps and is not fully comprehensive. Because we only had a database of Providence streets, we removed the addresses we could identify as belonging to another Rhode Island municipality. Using the fuzzywuzzy package in Python, we created a scoring algorithm to quantify how close an OCR reading of a street name is to a street in the true street name list. This scoring algorithm is based on the Levenshtein distance ratio,

ratio(s1, s2) = 100 × (1 − D(s1, s2) / (L(s1) + L(s2)))

where s1 and s2 are the two strings being compared, L is a function giving the length of a string and D is a function giving the Levenshtein distance between two strings. The Levenshtein distance is the minimum number of edit operations (substitutions, deletions, or additions) required to convert one string into another. For instance, the Levenshtein distance between "park" and "barks" is 2, one substitution and one addition. The ratio would be 100 × (1 − 2/9) = 77.8%. The scoring algorithm combines this ratio for the full strings with the same ratio computed on LongestWord(s1) and LongestWord(s2), where LongestWord is a function that gives the longest word of a string. The reason for adding additional emphasis on the longest word was to emphasize the core street name. For instance, we wanted "BROADWAY" to match with "BROADWAY ST." For each street in the true streets list, we calculated the matching score between the OCR result and the street in the list, selecting the true street with the highest score. To guard against false positives, we removed any matches with a score below 80. One of the directoreadr output files lists any entries that failed to match to a known street, failed to parse as an address, or contained empty text. If desired, these files could be examined for hand coding, but we do not do that here. To dramatically improve the speed of the street search code, we only searched for unique queries, saving each input string in a dictionary to convert it to its street match. That way, once we had done the street searching for one OCR reading of a street, we wouldn't have to repeat the search when that same OCR reading came up again.
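A minimal sketch of the parsing and matching steps above. The regular expression is a simplified stand-in for directoreadr's rules, and the way the full-string and longest-word ratios are combined here (a plain average) is an assumption: the paper states only that the longest word receives extra emphasis.

```python
import re
from fuzzywuzzy import fuzz

ABBREVS = r"\b(?:Ave|Av|St|Ct|Dr|Rd|Ln)\b\.?"

def parse_entry(entry):
    """Split a raw entry into (company name, address), or return None."""
    m = re.search(r"(\d+)\s+(.*?\s" + ABBREVS + r")", entry, re.IGNORECASE)
    if m is None:
        return None
    name = entry[:m.start(1)].strip(" ,")
    return name, f"{m.group(1)} {m.group(2)}"

def longest_word(s):
    words = s.split()
    return max(words, key=len) if words else s

def street_score(ocr, true):
    full = fuzz.ratio(ocr.upper(), true.upper())
    core = fuzz.ratio(longest_word(ocr).upper(), longest_word(true).upper())
    return (full + core) / 2  # extra weight on the core street name (assumed mix)

def match_street(ocr_street, true_streets, cutoff=80):
    best = max(true_streets, key=lambda s: street_score(ocr_street, s))
    # Scores under the cutoff are treated as failures to guard against
    # false positives, as in the paper.
    return best if street_score(ocr_street, best) >= cutoff else None
```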
3.3.7 Geocoding. The last component of the directoreadr pipeline is geocoding the cleaned and parsed addresses to obtain the latitude and longitude coordinates of the businesses. After researching several different geocoding options, from paid services (SmartyStreets) to free APIs (Google Maps), we decided to implement the geocoder built for [5] using ArcGIS software, with data from Rhode Island's E911 database. This geocoder was free for our use, given that Brown University had an in-house ArcGIS server, but because geocoders are proprietary, we do not include the API key necessary to run the geocoder in our published version of the code. To improve the speed of the geocoding, we ran 50 concurrent searches and searched only for unique addresses, building up a dictionary of geocoder results to reference in future runs of the program. Because many addresses were repeated across many years, this drastically sped up the process. The geocoder only contained data on current street layouts. Providence, however, like many American cities, has seen considerable change in its street pattern over the course of the study period: many streets have been wholly or partially demolished, and others have been renumbered. To address this problem and guard against false positives, we utilized the geocoder confidence score, and we removed any addresses with a confidence score under a perfect score of 100 from our final results. This way, we only accepted matches that were a perfect match to a known current address in our geocoder's address database. To handle the most common of the affected addresses, we allowed for hard-coding of hand-identified historical geocodes, entering hard-coded locations for four large buildings with many businesses at those addresses.
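A sketch of the caching and concurrency scheme described above; geocode_one() is a stand-in for the proprietary ArcGIS geocoder call (no API key shown), assumed to return a dict with a "score" field.

```python
from concurrent.futures import ThreadPoolExecutor

def geocode_all(addresses, geocode_one, cache=None, workers=50):
    cache = {} if cache is None else cache
    # Query only unique addresses we have not already resolved; repeated
    # addresses across years come straight from the cache.
    pending = sorted({a for a in addresses if a not in cache})
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for addr, result in zip(pending, pool.map(geocode_one, pending)):
            cache[addr] = result
    # Keep only perfect-confidence matches, as in the paper.
    results = {}
    for addr in set(addresses):
        r = cache[addr]
        if r is not None and r.get("score") == 100:
            results[addr] = r
    return results
```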
Results and discussion

The image processing portion of the pipeline had a success rate of 94.4%. In our dataset, we ran the algorithm on 2,582 individual pages. Of these pages, 144, or 5.6%, required hand-chopping in order to process. We designed the column-chopping algorithm to deliberately fail when there were likely errors with the ad removal algorithm. The goal was to require hand-chopping whenever it would meaningfully improve the end result, not just in cases where it was strictly necessary. Because of the hand-chopping, we were able to pass all of the pages through to the OCR and text parsing algorithms. In the text parsing algorithm, 6.7% of all entries were dropped as not being a successfully identified and matched address. These include both entries that should be dropped and entries that were dropped because of an error. In 38.2% of these cases, the algorithm failed to parse an address at all. In 10.3% of these cases, the algorithm parsed an address but returned an empty string for the street. In 4.2% of these cases, the algorithm parsed an address but threw an error in street matching. And in 47.2% of these cases, the algorithm successfully parsed an address and matched a street, but the confidence score was too low for us to be sure the address was correct. In some of the address drop cases, the addresses were outside of Providence, sometimes outside of Rhode Island. Others reflected an idiosyncratic address form that the algorithm wasn't set up to parse. For instance, some addresses were named buildings without a street address (e.g. "Arcade Bldg" or "Industrial Trust Bldg"). Others were street corners instead of numbered addresses. Of course, many of the address drop cases represented failures of the OCR or failures of the header identification, concatenation, and entry chopping algorithms. Address drop rates were not strongly correlated with time (Fig 2). The geocoder algorithm produced errors in 4.7% of cases. (Errors here are defined as failing to produce a perfect geocoder confidence score.) Unsurprisingly, these errors were higher in earlier years, when the Providence street pattern was considerably different (Fig 2). Towards the end of the study period, the percentage of addresses outside of Providence increased sharply. While we were able to capture most of these, we were not successful in all cases. Many addresses from a different city were not recognized as belonging to a different city, and when they were processed, they led to dropped addresses or geocoder errors. These statistics only capture the places where the code generated errors or flags. In order to fully assess the ultimate accuracy of the code, we hand-examined the error rate for gas stations in three years: 1936, 1962, and 1990. In 1936, 220 out of 242 gas stations were correctly identified, for an accuracy rate of 90.9%. Of the 22 missing gas stations, eleven were missing due to geocoder errors caused by historical street changes, seven were missing because of non-standard address formats that directoreadr could not parse correctly, one had the wrong address read, and only three were entirely missing. In 1962, 219 out of 224 gas stations were correctly identified, for an accuracy rate of 97.8%. Of the five missing gas stations, one was dropped, and four had a geocoder error. Three of the four geocoder errors represented cases where the addresses had been read incorrectly. In 1990, 71 out of 73 gas stations were correctly identified, for an accuracy rate of 97.3%. Both of the missing gas stations were dropped. We define "correctly identified" as cases where the algorithm correctly read and parsed the address, identified the right street, and geocoded it correctly. These statistics do not include errors in correctly reading and parsing the business names. The total accuracy rate for the gas stations in the three years we examined by hand was 94.6%. (This number is somewhat skewed because 1936 had the most gas stations; the average of the three accuracy rates was 95.3%.) Of all the errors we identified during our hand validation, only one was a false positive, a case where an entry was processed without errors but returned the wrong address and geocoded location. This false positive occurred as part of a long list of locations of the Socony-Vacuum Oil Company where two entries were merged together. The section of the text read "269 Valley, Waterman cor E River," which got parsed as one address at 269 Waterman St., which geocoded successfully. In this case, we recorded this as one station missed because of a non-standard address format and one address incorrectly parsed as a different address. We are making the data available for download from the Brown Digital Repository (https://doi.org/10.26300/typ4-nj27), and we are making the directoreadr code available at github.com/brown-ccv/directoreadr.

False positives

We did our best to design our algorithm to avoid false positives at all costs, leading to the errors being overwhelmingly false negatives. However, it is probable that there are some other types of rare false positives we could not identify in our hand validation sample.
For instance, if a street was renumbered and an old address moved its location, the geocoder only knows the relationships between addresses and locations in the present day, and it would identify the old address with its present location. Without extensive historical research of every address, it is impossible to rule out that this may have happened with a very small percentage of our addresses. Additionally, a small portion of addresses outside Providence were not successfully identified as belonging to another city. While the vast majority of these addresses will not match to an exact address in Providence, a small percentage, mostly on common street names, will have an address in a different city that exactly matches a real address in Providence. No examples of these sorts of false positives were observed in our hand validation, but it is quite possible that a few exist outside our hand validation sample.

Efficiency

Once the geocoding, street matching, and OCR results have been cached, the parsing algorithm runs in roughly 20 s per book on a standard laptop, enabling faster debugging and development. With no cached results, the full directoreadr pipeline still runs in under 30 minutes per book. Before our efficiency improvements, directoreadr would take many hours to process a single book.

Example hazardous site: Gas stations

Because of their environmental importance, we selected gas stations as an example hazardous site type. Gas stations in Providence typically developed along main roads, avoiding wealthier neighborhoods like Providence's East Side (Fig 3). Starting in the 1950s, gas stations began a precipitous decline in Providence (Figs 3 and 4). By 1990, there were only 75 gas stations in the city, a decline of 71% since 1950, when the city directories listed 257. This drop corresponds with a decline in the city's population, which dropped by a third between 1950 and 1980, the combined result of job loss from deindustrialization and the displacement of minority residents whose neighborhoods were cleared for several ambitious "urban renewal" projects [9][10][11][12][13]. These changes were part of broader national trends of suburbanization and economic decline in the urban core [14][15][16]. Other factors specific to the service station and automobile industries may also have played a role. Broader regulatory changes likely also affected gas station counts, with zoning having a particularly important effect [17][18][19]. Because the rate of geocoder errors was higher in the earlier years, these figures probably underestimate the dramatic drop in the number of gas stations. Overall, we identified 526 unique gas station addresses in Providence over the study period, compared to just 114 gas station addresses recorded in the Rhode Island Department of Environmental Management Underground Storage Tank (UST) database (http://www.dem.ri.gov/programs/wastemanagement/inventories.php).

Applicability to other cities

Although we have developed the directoreadr code on directories from Providence, RI, these directory formats are fairly similar across cities, and directoreadr should be easily adaptable to cities all across the country. The Polk Corporation produced comprehensive sets of city directories for nearly all major and mid-sized American cities, and the formats are largely similar from city to city.
While some modifications to the code may be necessary to address small differences in layout, heading design, text abbreviations, and ad shape, the modifications needed should be minor. While archival and permissions work will always be necessary to assemble and scan the books, these city directories have been preserved and maintained by libraries around the country, and a comprehensive set of city directories should be available at a feasible cost.

Conclusions

We have successfully built the directoreadr code: a pipeline for the digitization, extraction, and processing of city directory data. We were able to process 94.4% of the pages without any human input, and the rest required only minor human input. The code correctly processed, identified, and located 94.6% of gas stations in our hand validation set. We have developed and tested the code on city directories from Providence, RI, and it should be adaptable to cities all across the country with minimal modifications. There are many potential uses of these data, and we have demonstrated mapping of environmentally hazardous historical gas station sites as an example. We found that gas stations were most plentiful in the 1930s and 1940s, with a dramatic decline beginning around 1950. We identified 526 unique historical gas station sites in Providence, compared with just 114 in the Rhode Island Department of Environmental Management's Underground Storage Tank database.
6,489
2019-07-12T00:00:00.000
[ "History", "Environmental Science", "Computer Science" ]
Detecting Malicious .NET Files Using CLR Header Features and Machine Learning

The .Net Framework has made writing Windows applications easier than ever. Several programming languages can be used to write software using the .Net Framework, the most common one being C#. Due to the abundance of modules and pre-built functionalities that allow programmers to easily manipulate the Windows operating system with high abstraction and no need for low-level coding, the .Net framework has also become a desirable environment for malicious actors to write their malware. To the best of our knowledge, researchers have been treating .NET malware and other malware the same way by utilizing features from the PE header to classify the files. This is not possible for .Net files because their PE headers are nearly identical. In this paper, we tackle the problem of detecting malicious .Net files by extracting features from the CLR header. As far as we know, we are the first to explore this approach. Furthermore, we create a new dataset comprised of .Net malware and benign files, which we freely distribute to the research community. Finally, we assess the performance of several machine learning algorithms in detecting malicious .NET files. The random forest model was the best solution among the set of algorithms tested, exhibiting a performance of 92% for this predictive task.

Introduction

On February 13, 2002 [1], Microsoft introduced the .Net Framework, a software development framework, and now a built-in component of Windows, with the goal of making programming easier. With .NET, users no longer need to create pointers to manipulate memory operations, nor do they need to free any objects that they create, as these functionalities are provided through the .Net Framework's garbage collection. Various programming languages can be used to target the .Net Framework, with C# being the most common. C# makes developing applications very easy. However, the ease of writing programs that manipulate the Windows operating system in .Net also made it a desirable language for malicious actors to write their malware in. Between 2009 and 2015, we witnessed a 1600% growth in the number of unique .NET malware samples [2]. According to the Cybersecurity and Infrastructure Security Agency (CISA), 3 out of the top 10 malware attacks in 2021 were written in C# [3], those being Agent Tesla, LokiBot and NanoCore. When a .Net file is compiled, the resulting file can be a DLL or an EXE file. However, these files are not similar to ordinary PE files, i.e., they are not native executables. Unlike an executable that was compiled from C/C++ code, a .NET file does not have x86 assembly code inside of it. Instead, the C# code is compiled into an intermediate language called the Common Intermediate Language (CIL) [4]. Later, when the file is executed, the Common Language Runtime (CLR) will convert the CIL code into native assembly code using Just-in-Time (JIT) compilation. This provides applications written in .Net languages with various benefits such as memory management, garbage collection, type safety and exception handling [5]. The CLR acts as a virtual machine that handles the execution of .Net files and the translation process from managed code, i.e., the CIL code, into native code, i.e., x86 assembly code. Figure 1 illustrates the .Net compilation process. In order for the Windows loader to execute a file, the file must have a PE header [6].
Even though .Net files have a PE header, this only exists for legacy purposes to allow the files to be executed by the Windows loader as portable executables [7]. However, this header is redundant, as all .Net executables have nearly identical PE headers, with only minor differences. They import only one library (mscoree.dll) and only one function (_CorExeMain). Figure 2 displays a comparison between two .Net files, a benign one (the tool ILSpy [8]) and a malicious one (the AsyncRAT malware [9]), using the tool PeNet [10]. We observe that they have nearly identical PE headers, with the only difference being the virtual sizes of the file sections, that is, the sizes of the sections in memory. In Figure 3 we display a binary-level comparison of the two files using the tool PE Bear [11]. We confirm in this figure that the two files (one benign and one malicious) have nearly identical PE headers, with the only minor differences being the names of the files, their sizes, and the names of their sections. Thus, using only the PE header to classify .Net files as benign or malicious is not feasible, as it does not contain enough information. Below the PE header exists the most important data structure in a .Net file: the CLR header, which contains the necessary information for the CLR virtual machine to load the file. The CLR header is not documented by Microsoft, and that is one of the reasons for the lack of research in this area. Still, several unofficial resources exist online explaining it in detail [12]. When a .Net file is executed, the Windows loader transfers control to the Common Language Runtime, which uses the metadata in the CLR header to correctly execute the file. In this paper, we focus on the information present in the CLR header to detect whether a file is malicious or not. We present a static analysis method using information extracted from the CLR header to evaluate the effectiveness of different machine learning classifiers in the detection of .Net-based malware. The key contributions of this study are as follows.

• We present a new dataset for the detection of malicious .Net files. We collected, pre-processed and extracted relevant features from a large set of 216006 malicious files and 14194 benign files, which we provide to the research community at the following link: https://github.com/MagicianMido32/Detecting-Malicious-.NET-Files-Using-CLR-Header-Features-and-Machine-Learning;
• We created various scrapers to collect fresh benign and malicious .Net executables from the internet when building our dataset. We also provide this code to the research community;
• We carry out an analysis of the performance of seven different learning algorithms using the proposed dataset.

This paper is organized as follows. In Section 2 we present the most relevant related works. Section 3 provides an overview of the new dataset that we collected and preprocessed. We present our experimental evaluation in Section 4, and the results are displayed and discussed in Section 5. Finally, Section 6 concludes this paper and discusses future research avenues.
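To make the point above concrete, here is a minimal sketch (not taken from the paper) of how a file's .Net-ness can be checked programmatically: rather than the ordinary PE fields, it is the CLR Runtime Header data directory (the COM descriptor entry) that signals a CLR target. It assumes the pefile library.

```python
# Check whether a PE file targets the CLR by inspecting the COM descriptor
# (CLR Runtime Header) data directory rather than ordinary PE fields.
import pefile

def is_dotnet(path):
    pe = pefile.PE(path, fast_load=True)
    clr = pe.OPTIONAL_HEADER.DATA_DIRECTORY[
        pefile.DIRECTORY_ENTRY["IMAGE_DIRECTORY_ENTRY_COM_DESCRIPTOR"]
    ]
    # A non-zero CLR header directory means the file targets the CLR.
    return clr.VirtualAddress != 0 and clr.Size != 0
```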
Related Work

While much research has studied the Portable Executable header by extracting features from it and developing machine learning models for malware detection, such as using assembly opcode sequences [13], the imports table [14] or PE header characteristics [15], to the best of our knowledge not much work has been done to exploit the .NET CLR header for static malware classification. The closest research work we have found that extracts features from the CLR header is that of Tom Leemreize, who analyzes fileless malware for the .NET Framework through CLR profiling [16]. The author collected a dataset of five post-exploitation frameworks that maliciously utilized .Net framework functionalities (both PowerShell and .Net PE). A .Net profiler was then used to build a .Net API call tree for each malware sample. YARA signatures were then created to detect malicious tactics used by the malware. To the best of the author's knowledge, this is the only work able to analyze all .Net framework application formats, including compiled .Net files, which are the focus of our work. The author also highlighted the research focus on PowerShell, confirming that the area of malware detection in the .Net framework in general is significantly under-researched [16], which resonates with our findings. Current solutions, such as Antimalware Scan Interface (AMSI) [17] based detection, have important drawbacks as they can be bypassed by malware [16]. One such example is the technique named Bring Your Own Interpreter (BYOI), where the malware itself can be written in another language such as Python and be run through the .Net environment using the dynamic language runtime with the IronPython project [18]. Leemreize [16] collected a dataset of stagers1 of five post-exploitation frameworks that maliciously utilized .Net framework functionalities. A customized .Net profiler was implemented using the tool GroboTrace. The profiler builds a call tree for each .Net API call, regardless of its source, whether it is a .Net PE file or a PowerShell script. A set of YARA rules to detect different malicious tactics used by these malicious stagers was proposed. Still, no machine learning or deep learning techniques are used in [16] to address the malware detection problem. Moreover, manually created signatures can be easily bypassed, the population sample used is small, and not all of the samples are .Net PE files (some are PowerShell). In our paper, we explore the use of machine learning algorithms to tackle this predictive task and build a large dataset that we make available to the research community. In the next sections we provide more details about the dataset and the performance of the learning algorithms we evaluated.

The .Net Files Dataset

We propose a new dataset for tackling the issue of static malicious .Net PE file detection. We are the first, to the best of our knowledge, to create such a dataset. To achieve this, we built custom web scrapers to collect fresh malicious .Net files from different sources including: Any Run2, MalwareBazaar3, VirusShare4, Zoo5, VX Underground6, Twitter, GitHub and other sources. We created a tool that filters large malware samples and removes any files that are not .Net files. We collected a total of 216006 malicious .Net files. For the collection of benign examples we created several scrapers to download executable files from different sources: CNet, Softonic, SourceForge, Net EXE, and Net Windows. We collected a total of 14194 benign .Net files. More precisely, we obtained the following files: 7865 from Sourceforge.net, 3603 from CNET, 151 from Softonic, 185 from Windows 10 executables, 170 from Windows Server 2019 executables, 400 from PortableApps.com, and 1820 from various programs installed on our machines.

1 A stager is a small piece of malware that downloads and executes a larger stage payload representing the rest of the malware. This is done to make it lightweight, allowing it to bypass antiviruses [19].
programs installed on our machines. All benign files collected were scanned with Windows Defender (up to date on 18 February 2023), and none of them was flagged as malicious or suspicious. Our goal was to extract the relevant information from the CLR header into a data frame file that can be used to train machine learning algorithms. For the parser, we used Palo Alto Networks' open-source Python library dotnetfile [20], which was developed and used internally by the company and published in 2022. The library parses a .Net file and extracts not only the .Net CLR header information but also various metadata and strings. We used this library to build our parsing code in Python, and used it to parse our dataset of malware and generate a CSV file that can be consumed by machine learning models. Figure 4 shows the overall process of the .Net parser. Initially, the dataset included 66 features, the target variable and a total of 222,204 records. These 66 extracted features were configured in the parser. Out of the 66 features we dropped 14 because they were constant across all records, and kept the remaining 52 features. We observed that several features extracted for each file were lists, such as a list of imports, a list of functions, or a list of namespaces. We had 18 features of this kind, which exhibit a very high number of categories. Still, these features are important, as they hold valuable information about the behavior of the program. For example, the list of unmanaged functions can be a strong indicator of potential process hollowing/RunPE [21] in the file. Table 1 displays the main characteristics of the collected features, including the number of features of each type. We distinguish here between categorical features with low and high cardinality, as they require further preprocessing before being used by standard machine learning algorithms. We assigned the type low cardinality to categorical features with fewer than 100 distinct values. The high cardinality features were split into text or list-text types. The first type corresponds to features represented by one single text per sample, while the second represents features that are lists of text, i.e., the feature is a list of text of variable size per sample. We made this dataset available to the research community to allow further research in this area. The dataset can be accessed at https://github.com/MagicianMido32/Detecting-Malicious-.NET-Files-Using-CLR-Header-Features-and-Machine-Learning. Feature set encoding and analysis After having extracted and pre-processed the dataset, we converted the features with a large number of categories into a form suitable for the machine learning algorithms. Namely, we experimented with one-hot encoding, and with word embeddings followed by a TF-IDF vectorizer. However, in both cases the resulting dataset is unfeasible to use due to the high dimensionality obtained. We opted for the hash encoder from the library Category Encoders [22]. This is not the ideal solution, because of collisions and potential information loss. However, it was the fastest and most convenient option given the size of the dataset, and it avoids increasing the dimensionality of the dataset to an unmanageable size. We applied the hash encoder to encode the categorical features with high cardinality (both the text and the list-text types listed in Table 1).
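Although the paper's features are extracted with dotnetfile, the location of the CLR header itself is easy to illustrate: data directory entry 14 of the PE optional header points to it. The following hedged sketch reads its fixed-size leading fields with pefile and struct; the field layout follows publicly documented descriptions of the IMAGE_COR20 header rather than the paper's exact parser, and the file name is illustrative:

```python
# Hedged sketch: locate and decode the leading fields of the CLR
# (IMAGE_COR20) header. Data directory entry 14 of the PE optional header
# points to it; a zero RVA means the file is not a .Net executable.
import struct
import pefile

def read_clr_header(path):
    pe = pefile.PE(path)
    clr_dir = pe.OPTIONAL_HEADER.DATA_DIRECTORY[14]  # COM descriptor / CLR header
    if clr_dir.VirtualAddress == 0:
        return None
    raw = pe.get_data(clr_dir.VirtualAddress, 24)
    cb, major, minor, md_rva, md_size, flags, entry = struct.unpack("<IHHIIII", raw)
    return {"cb": cb, "runtime_version": f"{major}.{minor}",
            "metadata_rva": md_rva, "metadata_size": md_size,
            "flags": flags, "entrypoint_token": entry}

print(read_clr_header("sample.exe"))
```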
The encoder was fitted on the training data and only afterwards applied to the test data, to avoid data leakage. Overall, we followed the following encoding procedure. For the low cardinality categorical features we used a label encoder to map them to a numerical representation without adding additional features. For the high cardinality categorical features (text and lists of text) we used the hash encoder, encoding these 18 features into 7 new features. Thus the number of features after encoding is 41 (3 low cardinality categorical features encoded using the label encoder, 9 numerical features, 22 Boolean features, and 7 numerical features that are the result of encoding the high cardinality features). To carry out feature selection, we applied a forward selection method over a logistic regression model, measuring the accuracy obtained over different feature combinations. We used a stratified shuffled split to divide the dataset into two sets: a train and a test set. We used stratification to ensure that the distribution of the data is kept, i.e., the proportions of positive (malicious) and negative (benign) class cases are maintained. We also applied random oversampling on the training set to obtain a selection of features that would not be biased towards one of the classes. We found that using 40 features gave the highest accuracy. The results obtained with the logistic regression model trained using this method are: Accuracy: 0.75906, Precision: 0.72982, Recall: 0.78929, F1-score: 0.75839. Figure 5 shows the relationship between the accuracy and the number of features for the logistic regression model. We also applied an alternative embedded method using Lasso regression for feature selection, which provided a total of eleven features. The metrics for the logistic regression trained on this subset of features are Accuracy: 0.72839, Precision: 0.68750, Recall: 0.79395, and F1-score: 0.73690. These results are worse than those of the previous method in terms of F1-score, our metric of choice. The results are also worse in terms of precision, which means that the model returns more irrelevant results relative to relevant ones. Experimental setup For training the models we decided not to apply oversampling and to use all the features in the original dataset. The main motivation is that the feature selection phase proved to be of no significant value for the accuracy of the models. As performance assessment metrics we selected the F1-score, which is the harmonic mean of precision (which reflects the rate of false positives) and recall (which reflects the rate of false negatives). We also decided to observe the recall in our comparison, to be sure that a model does not overly misclassify benign samples. Besides these two metrics, we computed the precision and the accuracy of the models. We used a stratified 5-fold cross-validation procedure to preserve the class distribution of the original dataset and to obtain a better estimate of the results. We report the average of the 5-fold CV process for each metric. We selected seven well-known learning algorithms for testing: decision tree, logistic regression, MLP, Naïve Bayes, random forest, SVM, and XGBoost. We trained these learners with their default parameters. Results In this section we present and analyze the results obtained with the learning algorithms and discuss this work's limitations and strengths.
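The encoding and evaluation pipeline described above can be sketched as follows; the CSV file name and column names are hypothetical, and the snippet uses scikit-learn and Category Encoders defaults rather than the paper's exact configuration:

```python
# Illustrative pipeline: label-encode low-cardinality columns, hash the
# high-cardinality text/list columns into 7 components, then score a learner
# with stratified 5-fold cross-validation (hypothetical column names).
import pandas as pd
import category_encoders as ce
from sklearn.preprocessing import LabelEncoder
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate

df = pd.read_csv("clr_features.csv")                 # hypothetical dataset file
y = df.pop("is_malicious")                           # hypothetical target column

low_card = ["runtime_version", "assembly_locale", "stream_order"]   # hypothetical
high_card = [c for c in df.columns if c.endswith("_list")]          # hypothetical

for col in low_card:
    df[col] = LabelEncoder().fit_transform(df[col].astype(str))

# HashingEncoder maps the high-cardinality features onto n_components columns.
df = ce.HashingEncoder(cols=high_card, n_components=7).fit_transform(df)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_validate(RandomForestClassifier(), df, y, cv=cv,
                        scoring=["accuracy", "precision", "recall", "f1"])
print({k: v.mean() for k, v in scores.items() if k.startswith("test_")})
```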
Results and discussion The average results obtained by the selected learners over the 5 folds are presented in Table 2. We observe that random forest achieves the highest performance with an F1-score of 0.911, followed by the XGBoost model with 0.90. All the remaining learners have a much lower F1-score. These two algorithms are also the top performers (random forest being the best) on the accuracy and precision metrics. We note that for recall the SVM is the best, followed by Naïve Bayes. However, these two models achieve a high recall at the cost of an extremely low precision. Overall, we select random forest as the best performing model and XGBoost as the second best. The ensemble models tested have a clear advantage in this task when compared with the other models. This may be because ensemble models (i) are more robust, by reducing the variance, and (ii) improve the performance when compared to a single contributing model. We also find it interesting that the MLP model does not perform as well, exhibiting more difficulties in this task. This can be due to a difficulty in dealing with the number of features extracted. Limitations and strengths This work has multiple strengths. Namely, we propose a novel feature extraction method from CLR headers that is an alternative to using the PE header. We also provide to the research community the first dataset with benign and malicious .NET files. We explore seven machine learning models for this predictive task, and built scrapers to obtain benign and malicious .Net executables, which we also provide to the research community. In terms of limitations, our work has shortcomings in two key aspects: the data and the machine learning algorithms. Regarding the dataset, we consider the use of the hash encoder a limitation of this work that should be addressed in the future. The heavy data imbalance between malicious and benign files is another aspect that should be taken into account. Finally, this task could be explored with a larger volume of data, opening the possibility of using deep learning models. Although we tested seven machine learning algorithms, more could be added, and deep learning models could also be explored. Moreover, extensive hyperparameter tuning could be carried out. Conclusion This paper presents a novel dataset for detecting malicious .Net files. We proposed a novel method for feature extraction using the CLR header of .Net files, showing that it is possible to use the information extracted from the CLR header to classify malicious and benign .Net files. We provide this dataset to the research community, as well as the code of the scrapers we used to obtain benign and malicious .Net executables. We tested several feature selection methods and evaluated seven machine learning models on the newly developed dataset. An accuracy of 92% was obtained when using the random forest model with the full set of features. The overall results are promising and show that the information collected from the CLR headers is useful and should be employed to address this detection problem. An interesting avenue for future research concerns dealing with the high dimensionality of the dataset. We applied hash encoding, which comes at the expense of some information loss. We hypothesize that this impacted the performance of the machine learning models. A possible solution for this problem would be to use word embeddings and stacking. Another important aspect concerns the imbalance in the dataset.
The new dataset we present is largely imbalanced, with malicious records outnumbering benign records by a ratio of nearly 15 to 1. This is because collecting malicious files was easier and faster, owing to the availability of various malware-collection sites such as MalwareBazaar and VirusShare, whereas collecting benign files required manually scraping, downloading, and checking each file. This is also an opportunity for future improvements.
Dynamic Measurement of Relative Complex Permittivity of Microwave Plasma at Atmospheric Pressure Complex permittivity is one of the most important parameters for characterizing the interaction between microwaves and a medium, especially for microwave-excited plasma. It is convenient to study a plasma's dielectric properties and microwave propagation characteristics by measuring its complex permittivity. A dynamic measurement method for the equivalent relative complex permittivity of microwave-excited plasma at atmospheric pressure is proposed in this paper. Firstly, a cavity based on WR-430 at a frequency of 2.45 GHz was specially designed in COMSOL. Then, samples with different real parts of complex permittivity and loss tangent were simulated in the designed cavity to obtain their corresponding S-parameters, which were used to train a BP neural network until the error fell below 0.001. A two-port network was built to excite the plasma. The input power, reflected power, and transmitted power could be measured by the transmission-reflection method. Finally, the measured power values were converted into S-parameters and used as inputs to the BP neural network, and the plasma's real part of complex permittivity and loss tangent were obtained by inversion. The variation of the plasma's complex permittivity conforms to the interaction principles between microwaves and plasma, which verifies the accuracy of the method. Introduction Generally, plasma can be treated as an equivalent medium whose dielectric properties are characterized by an equivalent complex permittivity. By measuring the complex permittivity of the medium, the design of materials and cavities can be optimized in simulation, which is of great significance for improving the utilization efficiency of microwaves. Therefore, complex permittivity measurement has been applied in various fields such as aerospace and industry [1,2]. In recent years, some novel metamaterials and plasmas with dielectric properties distinct from those of conventional materials have emerged. The real part of their complex permittivity is negative [3], with a wide range that fluctuates with the microwave frequency, power, temperature and pressure. This poses a new challenge to the measurement of complex permittivity. Methods for measuring complex permittivity [4,5] comprise the resonance method [6,7] and the network parameter method [8,9]. These methods have achieved remarkable results in the complex permittivity measurement of conventional materials [10][11][12][13], but there are few reports on the complex permittivity measurement of microwave plasma. Currently, the complex permittivity of plasma is mostly obtained by theoretical calculation. Jiansheng Liu [14] gave the effective dielectric constant of plasma by solving Maxwell's equations. Felderhof [15] used linear response theory and the technique of cluster expansion to study the dielectric function of plasma in the quasistatic approximation, but it is no longer applicable when the electron temperature and the density of the plasma change rapidly. Mascali et al. [16] used a Langmuir probe to measure the electron density and other plasma parameters, and obtained the equivalent relative complex permittivity of the plasma by calculation. However, when some plasma domains do not allow the probe to penetrate, this method is no longer applicable. The interaction between microwave and plasma is complex.
Dynamic measurement of the complex permittivity of plasma is helpful for observing the interaction principles between microwaves and plasma under different conditions. Therefore, a dynamic measurement method for the complex permittivity of microwave plasma is proposed in this paper. Firstly, in order to build a two-port network to excite the plasma, we designed a cavity based on WR-430 in COMSOL. We simulated samples with different complex permittivity to obtain the corresponding |S11| and |S21| parameters in the designed cavity, and they were used to train a BP neural network [17]. Then, we measured the input power, reflected power, and transmitted power by the transmission-reflection method [18,19] in the two-port network. Multiple groups of reflected and transmitted power could be obtained by changing the gas flow and microwave power. Finally, the measured power was converted into |S11| and |S21| parameters, which were used as inputs to obtain the complex permittivity of the plasma by BP neural network inversion. We have drawn and analyzed the curves of the microwave plasma's real part of complex permittivity and loss tangent under different input power and gas flow. Schematic Design The two-port network measurement system based on the transmission-reflection method is shown in Figure 1. Argon is excited by microwaves to form plasma in the center of a quartz tube. The power meter close to the microwave source measures the input power (P1) and the reflected power (P2), and another power meter measures the transmitted power (P3). The measured power is converted into |S11| and |S21| parameters, which are used as the inputs of the trained BP neural network to obtain the plasma's real part of complex permittivity and loss tangent by inversion. The flowchart is shown in Figure 2. Moreover, this measurement system realizes dynamic measurement of the plasma complex permittivity while the microwave power and gas flow are changed. Cavity Design We designed an open-ended cavity based on WR-430. The cut-off waveguide in the center of the cavity prevents microwave power from leaking out. In order to give the center of the cavity a high electric field intensity to excite the plasma, and to improve the transmission performance of the cavity, the length of the cavity as well as the length and inner diameter of the cut-off waveguide were swept parametrically in COMSOL. Finally, the cavity structure was determined and fabricated, as shown in Figure 3.
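As described in the schematic design above, the measured powers are converted into S-parameter magnitudes, which for a two-port network follows from |S11|² = P2/P1 and |S21|² = P3/P1. A minimal sketch of this step (the numbers are illustrative, not measured values from the paper):

```python
# Convert measured powers to S-parameter magnitudes for a two-port network:
# P1 = input power, P2 = reflected power, P3 = transmitted power.
import numpy as np

def powers_to_s_params(p1, p2, p3):
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    s11 = np.sqrt(p2 / p1)   # |S11|: reflection coefficient magnitude
    s21 = np.sqrt(p3 / p1)   # |S21|: transmission coefficient magnitude
    return s11, s21

# Illustrative example: 150 W input, 30 W reflected, 45 W transmitted.
print(powers_to_s_params(150.0, 30.0, 45.0))
```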
A quartz tube with a wall thickness of 2 mm is placed in the cut-off waveguide to excite the plasma. It is assumed that the plasma in the quartz tube is homogeneously distributed. Therefore, the complex permittivity of the samples set in COMSOL is regarded as the equivalent complex permittivity of the plasma. We can obtain the S-parameters corresponding to different samples by simulation.
Considering the large variation range of the plasma's complex permittivity and its high loss, the samples' ranges and steps for the real part of the complex permittivity and the loss tangent were set in the parameter sweep, as shown in Table 1. Figure 4 shows the simulation results. Within the parameter setting range, the |S11| and |S21| parameters of the port maintain good monotonicity, which effectively avoids the multi-value problem in BP neural network inversion, so as to obtain more accurate results. BP Neural Network In this paper, a BP neural network is used to invert the dielectric properties of the plasma. A BP neural network is a multilayer feedforward network trained according to the error back-propagation algorithm. It can learn and store a large number of input-output mapping relationships without describing the mathematical equations of the mapping. The process is mainly divided into two stages. The first stage is the forward propagation of the signal from the input layer to the hidden layer and finally to the output layer. The second stage is the back propagation of the error from the output layer to the hidden layer and finally to the input layer. By means of back propagation, the weights and thresholds of the network are continuously adjusted to minimize the sum of squared errors of the network. The BP neural network is currently one of the most widely used neural network models. Since the neural network algorithm can accurately invert the mapping relationship between input values and output values only when they are in one-to-one correspondence, it is necessary to discuss the multi-value problem of the neural network before choosing the sample space. It is known from Figure 4 that the correspondence between input values and output values maintains monotonicity. The two-input and two-output network structure can greatly reduce the number of multi-value points and effectively avoid the multi-value problem. This paper constructs a BP neural network with the |S11| and |S21| parameters as inputs and the real part of the complex permittivity and the loss tangent as outputs, as shown in Figure 5. The simulation results are randomly assigned to one of two sets of samples: 80% to the training set and 20% to the test set. Furthermore, the BP neural network training parameters are shown in Table 2.
Experimental System The microwave source can be adjusted to different power values, up to 500 W. The circulator prevents excessive reflected power from damaging the microwave source. The water load absorbs excess energy. The quartz tube is placed in the center of the designed cavity. The flowmeter is connected to the lower end of the quartz tube to adjust the gas flow. When the microwaves excite the argon gas to generate plasma, as shown in Figure 6b, the input power, reflected power, and transmitted power from the microwave source to the plasma can be measured by the power meters. Then, we adjusted the power and gas flow to obtain the corresponding S-parameters. Measured Results and Discussion Argon is introduced into the bottom of the quartz tube, and the initial gas flow is set to 10 L/min. When the gas flow is stable, the microwave source is turned on, and the microwave power is initially set to 150 W, which is the minimum power needed to excite argon plasma in this measurement system. Then, we gradually increased the power and gas flow while the plasma was excited in the middle of the quartz tube. The ranges and steps are shown in Table 3.
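To make the inversion step concrete, the following is a hedged sketch in which a small feedforward network stands in for the paper's BP network; the actual architecture and training parameters in Table 2 may differ, and the training pairs here are synthetic placeholders for the COMSOL sweep data:

```python
# Sketch of BP-network inversion: learn (|S11|, |S21|) -> (eps_real, tan_delta)
# from simulated samples, then invert a measured S-parameter pair.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
S_train = rng.uniform([0.1, 0.1], [0.9, 0.9], size=(500, 2))  # |S11|, |S21| samples
targets = np.column_stack([
    -15.0 - 40.0 * S_train[:, 0],   # placeholder real part of permittivity
    -0.8 - 1.0 * S_train[:, 0],     # placeholder loss tangent
])

net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
net.fit(S_train, targets)

measured = np.array([[0.55, 0.30]])      # one measured |S11|, |S21| pair
print(net.predict(measured))             # -> [eps_real, tan_delta]
```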
The reflected power and transmitted power under different input powers and gas flows were recorded; the measurement error of the power meter is within 10%. The measured power is converted to the corresponding S-parameters, as shown in Figure 7. Taking the |S11| and |S21| parameters as inputs, the real part of the plasma complex permittivity and the loss tangent corresponding to different powers and gas flows are obtained by means of BP neural network inversion, as shown in Figure 8. Moreover, the inversion error of the BP neural network is within 20%. Table 3. Ranges and steps of the experimental parameters. Power: start 150 W, end 500 W, step 50 W. Gas flow: start 10 L/min, end 25 L/min, step 5 L/min. Figure 7a,b show that |S11| increases and |S21| decreases as the power rises. This is because at lower input power the excited plasma's electron density is correspondingly lower, which makes it easier for microwaves to penetrate the plasma, causing less reflection. At higher input power, the excited plasma's electron temperature is higher and the electron collision frequency increases, which leads to a higher electron density. With the increase in electron density, the ability of the plasma to reflect microwave energy is stronger, and the attenuation of microwave energy is greater [20]. When the power reaches 500 W, |S11| reaches its maximum and |S21| reaches its minimum. Figure 8a,b show that the real part of the complex permittivity of the plasma decreases from −15 to −55 and the loss tangent decreases from −0.8 to −1.8 over the power range from 150 to 450 W. They then stay stable over the power range from 450 to 500 W. Due to reflection, the real part of the plasma's complex permittivity is negative. As the power rises, the reflection increases and the real part of the plasma complex permittivity decreases. When the power reaches 450 W, the electron density and electron collision frequency of the plasma reach their maxima and then stay stable. At this point, microwave energy cannot penetrate the inside of the plasma [21]. The loss of microwave energy no longer fluctuates with the increase in power.
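For context, the trend reported above (the real part becoming more negative as the electron density grows) is consistent with the standard cold-plasma (Drude) expression for the equivalent relative permittivity; this background formula is not taken from the paper itself:

$$\varepsilon_r \;=\; 1-\frac{\omega_p^2}{\omega^2+\nu^2}\;-\;j\,\frac{\nu}{\omega}\cdot\frac{\omega_p^2}{\omega^2+\nu^2},\qquad \omega_p=\sqrt{\frac{n_e e^2}{\varepsilon_0 m_e}},$$

where $\omega$ is the microwave angular frequency, $\nu$ the electron collision frequency, and $n_e$ the electron density; once $\omega_p$ exceeds $\omega$, the real part turns negative and grows in magnitude with $n_e$.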
Keeping the input power constant and increasing the gas flow, the plasma is blown towards the far end of the quartz tube. Under such circumstances, electron loss may occur. Therefore, the electron density in the tube decreases, which makes it easier for the microwave power to penetrate with low reflection. Moreover, at the same power level, a larger gas flow yields a smaller |S11|, a larger |S21|, and a larger complex permittivity. However, when the power is large enough and the plasma reaches a stable state, changing the gas flow no longer affects the real part of the complex permittivity or the loss tangent. Conclusions From a macroscopic perspective, complex permittivity is an important parameter representing the interaction of plasma excited by microwaves. In this paper, a dynamic measurement method for the relative complex permittivity of microwave plasma at atmospheric pressure is proposed to better study the variation of the plasma's complex permittivity with microwave power and gas flow. Firstly, an experimental cavity based on WR-430 was designed. COMSOL Multiphysics was used to simulate samples spanning a wide range of complex permittivity to obtain their corresponding |S11| and |S21| parameters. Then, the simulation data were used as the training set to construct a BP neural network, with the |S11| and |S21| parameters as inputs and the real parts of complex permittivity and loss tangent as outputs. In the experiment, by controlling the gas flow and the microwave source, the input, reflected, and transmitted powers were measured and converted into |S11| and |S21| parameters. By means of the BP neural network inversion algorithm, the corresponding real parts of the complex permittivity and loss tangent were obtained. Compared with traditional methods, the proposed method is capable of real-time measurement of the plasma's complex permittivity under varying external conditions. Compared with traditional inversion algorithms, the BP neural network has strong nonlinear mapping ability, and the inversion results are fast and accurate. Compared with related theories, the measured plasma complex permittivity curves conform to the theoretical interaction principles, which demonstrates the feasibility and accuracy of the proposed method. For simulations of microwaves and argon plasma, this paper provides reference values for the complex permittivity of argon plasma. The method also provides a new reference for the measurement of media with a large range of complex permittivity.
Word-Level and Pinyin-Level Based Chinese Short Text Classification Short text classification is an important branch of Natural Language Processing. Although CNNs and RNNs have achieved satisfactory results in text classification tasks, they are difficult to apply to Chinese short text classification because of the data sparsity and homophonic typo problems of Chinese short texts. To solve these problems, a word-level and Pinyin-level based Chinese short text classification model is constructed. Since homophones have the same Pinyin, the addition of Pinyin-level features can solve the homophonic typo problem. In addition, the introduction of more features alleviates the data sparsity problem of short texts. In order to fully extract the deep hidden features of short texts, a deep learning model based on BiLSTM, Attention and CNN is constructed, and a residual network is used to solve the gradient disappearance problem that arises as the number of network layers increases. Additionally, considering that a complex deep learning network structure increases the text classification time, a Text Center is constructed. When a new text arrives, the classification task can be completed quickly by calculating the Manhattan distance between its embedding vector and the vectors stored in the Text Center. The Accuracy, Precision, Recall and F1 of the proposed model on the simplifyweibo_4_moods dataset are 0.9713, 0.9627, 0.9765 and 0.9696 respectively, and those on the online_shopping_10_cats dataset are 0.9533, 0.9416, 0.9608 and 0.9511 respectively, which are better than those of the baseline methods. In addition, the classification time of the proposed model on simplifyweibo_4_moods and online_shopping_10_cats is 0.0042 and 0.0033 respectively, which is far lower than that of the baseline methods. I. INTRODUCTION The continuous development of social media has gradually made it the main platform for netizens (that is, all people who conduct network activities through computers and the Internet) to express their views and opinions. A large number of active netizens publish micro-blogs, tweets and other short texts bearing user information every day, which contain abundant valuable information reflecting public opinion, social hot spots and user interests. However, quickly and accurately mining important information from massive texts according to the personalized needs of society and users still faces huge challenges. Research on short text classification technology can help systems "understand" and "manage" all kinds of short texts more efficiently, which plays an important role in promoting the development of social intelligence. As an important research topic of Natural Language Processing (NLP) [1], text classification has been widely used in search engines [2], [3], information filtering [4], [5], subject tracking [6], [7], mail classification [8], [9] and other fields. The traditional machine learning classification approach divides the whole text classification task into feature engineering and a classifier. Feature engineering is divided into three parts: text preprocessing, feature extraction and text representation. The ultimate goal is to convert text into a format that can be understood by the computer and that encapsulates enough information for classification, that is, strong feature expression ability.
Commonly used machine learning classifiers include Naive Bayes (NB) [10], K-Nearest Neighbor (KNN) [11], Decision Tree (DT) [12], Support Vector Machine (SVM) [13], etc. Feature engineering is a very complicated process, and the quality of feature selection directly determines the classification results of machine learning classifiers. With the continuous development of deep learning models, more and more researchers use deep learning models for text classification. Deep learning methods do not require complex feature engineering, relying instead on their powerful ability to fit data distributions and automatically learn rules by training on massive data. As a Convolutional Neural Network (CNN) [14] can extract text features through one-dimensional convolution operations and retain the most important features through pooling operations, many researchers use CNNs for text classification [15]. Text data can be regarded as words that have sequence relationships, which are difficult for CNNs to capture [16]. Therefore, Recurrent Neural Networks (RNNs) [17] are used in text classification due to their powerful ability to process variable-length sequence input. However, RNNs suffer from gradient disappearance when the text is too long, making it difficult to capture long-distance global dependencies [18]; this is solved by Long Short-Term Memory (LSTM) [19]. Considering that LSTMs only propagate forward and can easily ignore important content after the current time node, Bidirectional Long Short-Term Memory (BiLSTM) [20] was proposed to use both forward and backward temporal features for text classification. Although CNNs and RNNs have achieved satisfactory results in text classification tasks, most of this work targets English and there are only a few studies on Chinese text classification [21], [22], [23]. With the rapid development of Chinese social networks, the classification of Chinese short texts becomes more and more important. Since Chinese is based on pictograms and there are many homonyms in Chinese, the homophonic typo problem (where Chinese words are wrongly written as other words with the same Pinyin but different meanings) often occurs in Chinese short texts, and it is difficult for traditional text classification models to handle. Pinyin is a system that uses Latin letters to write modern standard Chinese phonetics, and it can be used to solve the homophonic typo problem of Chinese short texts. Therefore, a word-level and Pinyin-level based short text classification model, WP-STC, is constructed in this paper. Since WP-STC contains both word-level and Pinyin-level features, it can also effectively alleviate the data sparsity problem of short texts. Considering that RNNs assign the same attention to all contexts and that the text classification results of a single deep learning model alone are often unsatisfactory [15], a deep learning model based on BiLSTM, the Attention mechanism and CNN is proposed in this paper. BiLSTM is used to obtain bidirectional temporal features, the Attention mechanism is used to assign different weights to the context, and CNN is used to extract local features and reduce dimensionality. In addition, a residual network is used to solve the gradient disappearance problem that arises as the number of network layers increases [25]. Additionally, considering that a complex deep learning network structure increases the text classification time, the concept of a Text Center is innovatively proposed.
When a new text arrives, the classification task can be completed quickly by calculating the Manhattan distance between its embedding vector and the vectors stored in the Text Center. The main contributions of this paper are as follows: (1) A word-level and Pinyin-level based Chinese short text classification model is proposed, which can well solve the data sparsity and frequent homophonic typo problems of Chinese short texts. (2) A deep learning model based on BiLSTM, the Attention mechanism, CNN and a residual network is proposed. BiLSTM is used to obtain bidirectional temporal features, Attention is used to assign different weights to the context, and CNN is used to extract local features and reduce dimensionality. In addition, the residual network is used to solve the gradient disappearance problem that arises as the number of network layers increases. (3) The concept of a Text Center is innovatively proposed, which can greatly reduce the classification time by simply comparing the embedding vector of the input text with the vectors stored in the Text Center. (4) Multi-group comparison experiments on two public baseline datasets prove that the proposed model not only achieves the best classification Accuracy, Precision, Recall and F1, but also greatly reduces the classification time. The remainder of this work is organized as follows. Section II introduces the latest research results in text classification. Section III introduces the background of LSTM, the Attention mechanism and CNN. Section IV introduces the construction of the proposed model. Section V presents the experimental results of the proposed model and the comparison models. Section VI provides a summary and future work. II. RELATED WORK A. TEXT CLASSIFICATION Commonly used text classification methods include machine learning and deep learning. 1) MACHINE LEARNING Luo et al. [13] implemented an SVM for classifying English text and documents. Experimental results on a set of 1033 text documents demonstrated that the classifier provided the best results when the feature set size is small. Richard et al. [26] reviewed how the NLP technique of TF-IDF combined with the supervised machine learning model of SVM and word embedding approaches such as Word2vec could be used to categorize/label protocol deviations across multiple therapeutic areas. Annalisa et al. [27] presented an updated survey of 12 machine learning text classifiers applied to a public spam corpus. They proposed a new pipeline to optimise hyperparameter selection and improve the models' performance by applying specific NLP-based methods in the preprocessing stage. Abdalla et al. [28] proposed similarity measures with machine learning models and presented benchmarking studies for integration methodology over balanced/imbalanced datasets. 2) DEEP LEARNING Machine learning methods rely heavily on feature engineering, which is very complex and time-consuming. Since deep learning methods do not need complex feature engineering, more and more researchers use them for text classification. Cheng et al. [29] proposed a novel text classification model based on a hierarchical self-attention capsule network, which was composed of a capsule network and a hierarchical self-attention network. The experimental results on 5 text classification datasets showed that the proposed model achieved the best classification results compared with other baseline models. Kong et al.
[30] proposed a hierarchical BERT with an adaptive fine-tuning strategy (HAdaBERT). HAdaBERT consisted of an attention-based gated memory network as the global encoder and a BERT-based model as the local encoder. Experimental results on different corpora indicated that HAdaBERT outperformed state-of-the-art pretrained language models. Sergio et al. [31] proposed Stacked DeBERT, which improved robustness to incomplete data by designing a novel encoding scheme in BERT. Stacked DeBERT took advantage of stacks of multilayer perceptrons to reconstruct missing words' embeddings by extracting more abstract and meaningful hidden feature vectors, and of bidirectional transformers for improved embedding representation. Liu et al. [32] proposed a Co-attention Network with Label Embedding (CNLE) that jointly encoded the labels and text into their mutually attended representations, which was able to attend to the relevant parts of both. Experiments showed that CNLE achieved competitive results on 2 multi-label and 7 multi-class classification benchmarks. Zhang [33] proposed a news text classification method based on a combination of deep learning (DL) algorithms. Gao et al. [34] introduced four methods to scale BERT to perform document classification on clinical texts several thousand words long. B. SHORT TEXT CLASSIFICATION Although machine learning and deep learning models are widely used for text classification, considering the data sparsity of short texts, simple machine learning or deep learning networks cannot classify them well. Therefore, researchers use more complex models for short text classification. Yang et al. [35] proposed a novel heterogeneous Graph Neural Network (GNN)-based method for semi-supervised short text classification, taking full advantage of limited labeled data and large unlabeled data through information propagation along the graph. Wang et al. [36] proposed SHINE, which uses a GNN for short text classification. Wang et al. [37] proposed a short text classification method based on semantic extension and CNN. Škrlj et al. [39] proposed tax2vec, a parallel algorithm for constructing taxonomy-based features, which can well alleviate the data sparsity problem of short texts. Zhou et al. [40] proposed a semantic extension-based classification algorithm for short texts in which both ordinary 1D convolution and atrous convolution are performed. The model achieved the best classification results and had lower computational complexity than BERT-base. Yu et al. [41] proposed a Deep Pyramid Temporal Convolutional Network for short text classification, mainly consisting of a concatenated embedding layer, causal convolution, 1/2 max pooling down-sampling and residual blocks. C. CHINESE SHORT TEXT CLASSIFICATION Although deep learning models have achieved satisfactory results in short text classification tasks, most of this work targets English, and there are only a few studies on Chinese short text classification. Hao et al. [21] proposed a Mutual-Attention CNN framework, which integrated features at the word and character levels for Chinese short text classification. Lyu et al. [22] introduced HowNet as an external knowledge base, and proposed a language knowledge enhanced graph transformer to deal with the Chinese word ambiguity problem. Feng et al. [23] proposed a sentiment classification model for short Chinese texts.
The model combined word features with part-of-speech features, position features and dependent syntactic features to form three new combined features, which were input into a multi-channel CNN combined with a multi-head attention mechanism so as to fuse the features more fully. Yang et al. [42] proposed a character-word graph attention network to explore the interactive information between characters and words for Chinese text classification. Existing Chinese short text classification models are basically based on word or character features, and it is difficult for them to deal with the homophonic typo problem that often occurs in Chinese short texts. In order to overcome this problem, a word-level and Pinyin-level based Chinese short text classification model, WP-STC, is constructed in this paper. III. BACKGROUND A. LSTM LSTM [19] is a special type of Recurrent Neural Network (RNN) [43] designed to deal with the gradient disappearance problem faced by RNNs. Like other types of RNNs, an LSTM generates its outputs based on the input of the current time step and the output of the previous time step, and sends the current output to the next time step. Each LSTM cell consists of a memory cell $c_t$ that maintains its state over arbitrary time intervals and three nonlinear gates, including an input gate $i_t$, a forget gate $f_t$, and an output gate $o_t$, which are designed to regulate the flow of information into and out of the memory cell. An LSTM containing a hidden layer is defined as follows [19]: $i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)$, $f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)$, $o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$, $c_t = f_t \otimes c_{t-1} + i_t \otimes \tanh(W_c x_t + U_c h_{t-1} + b_c)$, $h_t = o_t \otimes \tanh(c_t)$, where $i_t$, $f_t$, $o_t$ and $c_t$ denote the input gate, forget gate, output gate, and memory cell activation vector at moment $t$, respectively; $\sigma(\cdot)$ denotes the logistic sigmoid function, and $\otimes$ denotes the element-wise multiplication operation. The matrices $W$, $U$ and the vectors $b$ (e.g., $b_c$) denote the weights and bias terms, and $d$ and $H$ denote the input dimension and the size of the hidden layer. B. ATTENTION MECHANISM The Attention mechanism [38] can highlight important information within contextual information by setting different weights, thus paying more attention to the parts that are similar to the elements of the input and suppressing other useless information. Let $h = (h_1, h_2, \cdots, h_T)$ denote the input of the Attention network; then the correlation $e_{tj}$ between the $j$th input $h_j$ and the current hidden state $s_{t-1}$ is calculated as follows [38]: $e_{tj} = \mathrm{score}(s_{t-1}, h_j)$, where $\mathrm{score}()$ denotes the weight-focused multiplication. Assuming that $v$ is the trainable parameter, the associated likelihood $a_{tj}$ and the content vector $c_t$ for time step $t$ are calculated as follows [38]: $a_{tj} = \frac{\exp(e_{tj})}{\sum_{k=1}^{T}\exp(e_{tk})}$, $c_t = \sum_{j=1}^{T} a_{tj} h_j$. C. CNN The CNN [14] for text classification includes multiple one-dimensional (1D) convolution layers and pooling layers. 1D convolution is used to extract deep features, and pooling is used to further select important features and reduce dimensionality. For a Chinese text $S$ containing $s$ embedding vectors, assuming that the length of each embedding vector is $e$, $S$ is a matrix of size $s \times e$. Multiple linear filters of size $h \times e$ are used to perform the convolution operation and generate the feature map $M = [m_0, m_1, \cdots, m_{s-h}]$, where $h$ is the length of the linear filter and $1 \le h \le s$. Let $S_{i:j}$ represent the matrix from the $i$th word to the $j$th word in $S$; then the $i$th feature of $M$ is generated by the following formula [14]: $m_i = f(W \cdot S_{i:i+h-1} + b)$, where $f()$ denotes the ReLU activation function, $W$ denotes the weight and $b$ denotes the bias term. After the convolution operation, max pooling is used to reduce dimensionality and extract the key feature $\hat{m}$ [50]: $\hat{m} = \max(M)$.
IV. MODEL CONSTRUCTION WP-STC is constructed in this paper for Chinese short text classification, and its structure is shown in Figure 1. In the first step, the Chinese text of the input layer is transformed into word-level and Pinyin-level features. In the second step, the word-level and Pinyin-level features are transformed into embedding vectors by a pre-trained Word2vec [44]. In the third step, a deep learning model, Bi-Att-CNN, based on BiLSTM, Attention and CNN is constructed, and the word-level and Pinyin-level embedding vectors are both fed into Bi-Att-CNN to obtain the hidden vectors. In the fourth step, the word-level and Pinyin-level hidden vectors are added together and fed into the fully connected network, and the softmax function is used to get the classification results. At the same time, the word-level and Pinyin-level hidden vectors are used to build the Text Center of each text category. When a new text arrives, the classification task can be completed quickly by calculating the Manhattan distance between its embedding vector and the vectors stored in the Text Center. The implementation process of the model is described in detail below. A. ACQUISITION OF WORD-LEVEL AND PINYIN-LEVEL FEATURES The process of converting the Chinese text of the input layer into word-level and Pinyin-level features is shown in Figure 2. Given a Chinese short text $T$ of length $z$, firstly, $T$ is transformed into the word-level feature $T_{word} = \{w_1, w_2, \cdots, w_b\}$ containing $b$ words by jieba, and then the Pinyin of each word in $T_{word}$ is obtained by pyPinyin to get the Pinyin-level feature $T_{Pinyin} = \{p_1, p_2, \cdots, p_z\}$. Since every word has a Pinyin, $b = z$. B. ACQUISITION OF EMBEDDING VECTORS Word2vec [45] is the most commonly used static word vector model, which can convert words and Pinyin into fixed-length vectors, and a pre-trained Word2vec [44] is used to transform both $T_{word}$ and $T_{Pinyin}$ into the embedding vectors $V_{word}$ and $V_{Pinyin}$. For computational convenience, assume that the length of each word and Pinyin embedding vector is $p$; then $V_{word} \in R^{b \times p}$ and $V_{Pinyin} \in R^{z \times p}$.
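A minimal sketch of this feature acquisition step, using the jieba and pypinyin libraries (the sample sentence is illustrative, and the pre-trained Word2vec lookup is omitted):

```python
# Word-level features via jieba segmentation; Pinyin-level features by
# converting each segmented word to its Pinyin with pypinyin.
import jieba
from pypinyin import lazy_pinyin

text = "今天天气真好"                                    # illustrative short text
T_word = jieba.lcut(text)                               # e.g. ['今天', '天气', '真好']
T_pinyin = [''.join(lazy_pinyin(w)) for w in T_word]    # e.g. ['jintian', 'tianqi', 'zhenhao']
print(T_word, T_pinyin)                                 # one Pinyin per word, so b = z
```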
C. CONSTRUCTION OF BI-ATT-CNN The deep learning model based on BiLSTM, Attention and CNN, Bi-Att-CNN, is constructed to get the hidden vectors of $V_{word}$ and $V_{Pinyin}$, and the structure of Bi-Att-CNN is shown in Figure 3. Bi-Att-CNN can well extract the deep hidden features of the embedding vectors of the embedding layer. The specific steps are as follows. Firstly, $V_{word}$ and $V_{Pinyin}$ from the embedding layer are fed into the BiLSTM to obtain the forward and backward hidden vectors $\overrightarrow{h}_{word}$, $\overleftarrow{h}_{word}$, $\overrightarrow{h}_{Pinyin}$ and $\overleftarrow{h}_{Pinyin}$. Secondly, the forward and backward hidden vectors are concatenated to obtain the bi-directional temporal vectors $h_{word}$ and $h_{Pinyin}$: $h_{word} = [\overrightarrow{h}_{word}; \overleftarrow{h}_{word}]$, $h_{Pinyin} = [\overrightarrow{h}_{Pinyin}; \overleftarrow{h}_{Pinyin}]$. Thirdly, $h_{word}$ and $h_{Pinyin}$ are fed into the Attention network separately, so as to assign different attention to the context. Specifically, at first, the hidden representations $x_{word}$ and $x_{Pinyin}$ of $h_{word}$ and $h_{Pinyin}$ are obtained respectively: $x_{word} = \tanh(W_{word} h_{word} + b_{word})$, $x_{Pinyin} = \tanh(W_{Pinyin} h_{Pinyin} + b_{Pinyin})$, where $W_{word}$ and $W_{Pinyin}$ denote weights and $b_{word}$ and $b_{Pinyin}$ denote bias terms. Next, the importances of the context are calculated based on the similarities between $y_{word}$, $y_{Pinyin}$ and $x_{word}$, $x_{Pinyin}$, where $y_{word}$ and $y_{Pinyin}$ are randomly initialized context vectors. After obtaining the weights, the softmax function is used to normalize them to obtain the weight vectors $r_{word}$ and $r_{Pinyin}$: $r_{word} = \frac{\exp(x_{word}^{\top} y_{word})}{\sum \exp(x_{word}^{\top} y_{word})}$, $r_{Pinyin} = \frac{\exp(x_{Pinyin}^{\top} y_{Pinyin})}{\sum \exp(x_{Pinyin}^{\top} y_{Pinyin})}$. Finally, the comment vectors $f_{word}$ and $f_{Pinyin}$ containing all contextual attention are obtained by weighted summation with $r_{word}$ and $r_{Pinyin}$: $f_{word} = \sum r_{word} \otimes h_{word}$, $f_{Pinyin} = \sum r_{Pinyin} \otimes h_{Pinyin}$. Fourthly, a residual connection is used to solve the gradient disappearance problem that arises as the number of network layers increases, by adding the output vectors of the Attention network to the initial embedding vectors: $fr_{word} = f_{word} + V_{word}$ (24), $fr_{Pinyin} = f_{Pinyin} + V_{Pinyin}$ (25). Fifthly, $fr_{word}$ and $fr_{Pinyin}$ are fed into the convolutional layers of the CNN to obtain the convolution results $t_{word}$ and $t_{Pinyin}$: $t_{word} = \mathrm{relu}(W^{cnn}_{word} fr_{word} + b^{cnn}_{word})$, $t_{Pinyin} = \mathrm{relu}(W^{cnn}_{Pinyin} fr_{Pinyin} + b^{cnn}_{Pinyin})$, where $W^{cnn}_{word}$ and $W^{cnn}_{Pinyin}$ denote weights and $b^{cnn}_{word}$ and $b^{cnn}_{Pinyin}$ denote bias terms. Sixthly, $t_{word}$ and $t_{Pinyin}$ are fed into the max pooling layers of the CNN to get the pooling results $g_{word}$ and $g_{Pinyin}$: $g_{word} = \max(t_{word})$, $g_{Pinyin} = \max(t_{Pinyin})$. Seventhly, $g_{word}$ and $g_{Pinyin}$ are added to get the output vector $g = g_{word} + g_{Pinyin}$ of the Bi-Att-CNN network. D. CLASSIFICATION RESULTS The output vector $g$ of the Bi-Att-CNN network is fed to the fully connected network, and the softmax function is then used to get the final classification result: $label = \mathrm{argmax}(\mathrm{softmax}(\tanh(g W_g)))$ (30), where $W_g$ denotes the weight. E. TEXT CENTER Since the number of samples used for training is always large, it is very time-consuming to traverse all training samples and compute their distances from a new input text. In order to reduce the computational complexity, the concept of the Text Center is proposed. When a new text arrives, the classification task can be completed quickly by calculating the Manhattan distance [46] between its embedding vector and the vectors stored in the Text Center; a minimal sketch follows this subsection. The original and improved calculation methods are shown in Figure 4 and Figure 5, respectively. We analyze the calculation processes of the original method and the Text Center separately. Suppose the samples of the sample library are denoted as $X = \{X_1, X_2, \cdots, X_k\}$, where $k$ is the size of the sample library, $X_i = \{x_{i1}, x_{i2}, \cdots, x_{in}\}$, $n$ is the length of the feature dimension, $X_{label} \in \{1, 2, \cdots, a\}$, and $a$ denotes the total number of text classes. The embedding vector corresponding to the new short text to be classified is assumed to be $Y = \{y_1, y_2, \cdots, y_n\}$. The original method classifies the text by computing the Manhattan distance between $Y$ and every stored sample, $Test_i = |X_i - Y| = \sum_{j=1}^{n} |x_{ij} - y_j|$, and assigning $Y$ the label of the nearest sample. It is not difficult to see that the time complexity is $O(k \times n)$. In order to reduce the amount of computation and speed up classification, the time complexity can be reduced by establishing the Text Center of the sample library. The specific process is as follows. First, the Text Centers $T_1$-$T_a$ are established for each of the $a$ categories of text: $T_a = \{t_{a1}, t_{a2}, \cdots, t_{an}\}$ (31), $t_{aj} = \frac{1}{k_a} \sum_{X_{label} = a} x_{ij}$ (32), where $k_i$ denotes the number of samples labeled $i$, $i = 1, 2, \cdots, a$. The improved similarity is calculated as follows: $TestT_a = |T_a - Y| = |t_{a1} - y_1| + |t_{a2} - y_2| + \cdots + |t_{an} - y_n|$, $y_{label} = x_{label}$ of $\min(TestT_1, TestT_2, \ldots, TestT_a)$ (33). It is easy to see that only $a$ distance calculations need to be performed to derive the classification result for a new input text.
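The Text Center rule in Eqs. (31)-(33) amounts to nearest-centroid classification under the L1 norm. A minimal sketch with toy data:

```python
# Build one center (mean vector) per class, then assign a new vector to the
# class whose center is nearest in Manhattan (L1) distance: O(a*n) per query.
import numpy as np

def build_text_centers(X, labels):
    return {c: X[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(y, centers):
    return min(centers, key=lambda c: np.abs(centers[c] - y).sum())

X = np.random.rand(6, 4)                   # 6 stored sample vectors, n = 4
labels = np.array([0, 0, 0, 1, 1, 1])      # two classes, a = 2
centers = build_text_centers(X, labels)
print(classify(np.random.rand(4), centers))
```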
A. DATASET

Two Chinese baseline datasets are selected; the details are as follows.

simplifyweibo_4_moods (https://zhuanlan.zhihu.com/p/80029681): the dataset contains more than 200,000 records from Weibo (one of the largest social platforms in China) labeled with four types of emotion, with about 50,000 joyful, angry, disgusted and depressed records each.

online_shopping_10_cats (https://github.com/SophonPlus/ChineseNlpCorpus/raw/master/datasets/): the dataset contains more than 60,000 reviews of products in 10 categories, with about 30,000 positive and 30,000 negative reviews.

B. BASELINE METHODS

KNN [47]: a commonly used machine learning algorithm for text classification.
Decision Tree (DT) [48]: a commonly used machine learning algorithm for sentiment classification.
SVM [49]: a commonly used machine learning algorithm for text classification.
TextCNN [50]: this model extracts local features by convolution and reduces dimensionality by pooling; it is a commonly used deep learning algorithm for text classification.
LSTM [19]: this model effectively alleviates the vanishing-gradient problem of RNNs by introducing multiple gates, and is a commonly used deep learning algorithm for text classification.
BiLSTM [20]: this model takes both forward and backward temporal content into account, and is a commonly used deep learning algorithm for text classification.
HGAT [35]: a heterogeneous GNN-based method for semi-supervised short text classification, leveraging full advantage of limited labeled data and large amounts of unlabeled data through information propagation along the graph.
SHINE [36]: this model first represents the short text as a hierarchical heterogeneous graph consisting of word-level component graphs, which introduce more semantic and syntactic information; it then dynamically learns a short document graph to facilitate effective label propagation among similar short texts.
T-SNE [51]: this model generates a word-level emotion distribution vector for short text classification.
BERT-TER [52]: this model proposes a dual-channel system for multi-class short text emotion recognition and develops a technique to explain its training and predictions.

The baseline methods are reproduced according to their references, and the optimal values of each baseline method after repeated parameter tuning are reported as the experimental results.

C. EVALUATION CRITERIA

Accuracy, Precision, Recall and F1 are used as evaluation criteria, calculated as follows:

Accuracy = (TP + TN) / (P + N)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 × Precision × Recall / (Precision + Recall)

where P and N denote the numbers of forward and reverse samples, respectively, TP and TN denote the numbers of correctly predicted forward and reverse samples, respectively, and FP and FN denote the numbers of incorrectly predicted forward and reverse samples, respectively [53].
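The four criteria above follow the standard binary-classification definitions; a minimal sketch computing them from the confusion counts is shown below (for the multi-class datasets, per-class averaging is our assumption, not something the paper specifies).

```python
def evaluation_criteria(tp, tn, fp, fn):
    # P = tp + fn forward samples, N = tn + fp reverse samples.
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

print(evaluation_criteria(tp=90, tn=85, fp=15, fn=10))
```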
D. EXPERIMENTAL ENVIRONMENT

The deep learning model based on BiLSTM, Attention and CNN, Bi-Att-CNN, is constructed in this paper. There are three BiLSTM layers with 256, 256 and 128 neurons, respectively. ReLU is used as the activation function, and dropout of 0.5 is applied at the end of each layer. There are two CNN layers: the first contains 128 convolutional kernels of size 3 with a pooling template of size 3, and the second contains 64 convolutional kernels of size 3 with a pooling template of size 3. Two fully connected layers are used; the number of neurons in the first layer is 256, and that in the second layer equals the number of text categories in the dataset. The length of the embedding vector obtained through Word2vec is 100. The learning rate is set to 0.0001, the batch size to 64, and the number of epochs to 100.

E. EXPERIMENTAL RESULTS

In this section, multiple sets of experiments are conducted to verify the classification performance of WP-STC, WP-STC with Text Center, and the baseline methods. For readability, WP-STC-Center denotes WP-STC with Text Center. For all experiments, the ratio of training, test and validation sets on the two baseline datasets is 7:2:1. In addition, to make the classification results more reliable, 5-fold cross-validation is adopted in all experiments.

1) COMPARISON WITH BASELINE METHODS

WP-STC, WP-STC-Center and the baseline methods are applied to the two baseline datasets, and the classification results are shown in Tables 1 and 2, respectively. The classification results of WP-STC on both baseline datasets are better than all baseline methods: Accuracy, Precision, Recall and F1 are improved by at least 0.0538, 0.0533, 0.0625 and 0.0579, respectively. However, although WP-STC outperforms the state-of-the-art baseline method, it takes the second-longest time, while WP-STC-Center takes the shortest time and has the best Accuracy, Precision, Recall and F1. This indicates that the Text-Center-based classification model proposed in this paper not only improves Accuracy, Precision, Recall and F1, but also reduces the classification time.

2) COMPARISON OF DIFFERENT DISTANCE METHODS

The Text Center is constructed in this paper for fast classification of new input texts. Considering the simplicity and accuracy of the Manhattan distance [46], the category of a new input text is obtained by calculating the Manhattan distance between its embedding vector and the vectors stored in the Text Center. Since there are various ways to compute the distance between vectors, the following experiments demonstrate the advantage of the Manhattan distance. Besides the Manhattan distance, the Euclidean distance [54], Minkowski distance [55], Chebyshev distance [56] and cosine similarity [57] are also evaluated; the experimental results are shown in Tables 3 and 4, respectively. From Tables 3 and 4, the Manhattan distance achieves the optimal Accuracy, Precision, Recall, F1 and classification time on both baseline datasets.
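For reference, all of the distance functions compared in section 2) are available in SciPy; below is a minimal sketch scoring a query vector against one Text Center with each of them. The vectors are random placeholders and the Minkowski order p = 3 is an arbitrary choice of ours.

```python
import numpy as np
from scipy.spatial import distance

y = np.random.rand(100)   # embedding of a new input text (placeholder)
t = np.random.rand(100)   # one stored Text Center vector (placeholder)

scores = {
    "manhattan": distance.cityblock(y, t),
    "euclidean": distance.euclidean(y, t),
    "minkowski": distance.minkowski(y, t, p=3),  # order p chosen arbitrarily
    "chebyshev": distance.chebyshev(y, t),
    "cosine":    distance.cosine(y, t),          # 1 - cosine similarity
}
print(scores)
```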
3) IMPORTANCE ANALYSIS OF BI-ATT-CNN

To further demonstrate the contribution of Bi-Att-CNN to the classification results, a group of comparative experiments is conducted: all other structures in WP-STC and WP-STC-Center are kept unchanged, and Bi-Att-CNN is replaced with each baseline method in turn. The classification results are shown in Figures 10-13. When Bi-Att-CNN is replaced by a baseline method, the classification results improve greatly compared with the baseline method alone but remain lower than those of WP-STC and WP-STC-Center using Bi-Att-CNN. This shows both that the word-level and Pinyin-level classification models constructed in this paper greatly improve the classification of Chinese short texts, and that the deep learning model based on BiLSTM, Attention and CNN constructed in this paper outperforms all baseline methods.

4) CLASSIFICATION RESULTS FOR IMBALANCED DATASETS

The baseline datasets selected in this paper are balanced, but in practice forward short texts often far outnumber reverse short texts. Therefore, classification on unbalanced datasets is tested in the following experiments. The ratio of forward to reverse short texts in both datasets is varied from 1:1 to 10:1, and WP-STC and WP-STC-Center are used for classification. The results are shown in Tables 5-8. From Tables 5-8, the classification results of both WP-STC and WP-STC-Center gradually decrease as the ratio of forward to reverse short texts increases, but they remain high: even when the ratio is 10:1, the Accuracy, Precision, Recall and F1 of WP-STC on both datasets are all greater than 0.9, and those of WP-STC-Center are all greater than 0.91. Therefore, WP-STC and WP-STC-Center are applicable to both balanced and unbalanced datasets.

5) ANALYSIS OF THE EFFECT OF SOLVING THE HOMOPHONIC TYPOS PROBLEM

WP-STC can solve the homophonic typos problem that often occurs in Chinese short texts by adding the Pinyin-level feature. The following experiment verifies this conclusion. 1,000 short text comments are crawled from Weibo as a test set, and one word in each short text is manually and randomly replaced with a homophone. WP-STC trained on the simplifyweibo_4_moods dataset, with and without Pinyin-level features, is used to test the classification effect. The classification Accuracy of WP-STC with the Pinyin-level feature is 0.9227, versus 0.7866 without it. Therefore, WP-STC can effectively solve the homophonic typos problem of Chinese short texts.

VI. CONCLUSION

A word-level and Pinyin-level Chinese short text classification model is constructed in this paper, and a deep learning model based on BiLSTM, Attention and CNN is proposed. To reduce classification time, the concept of Text Center is proposed. Multiple experiments on two baseline datasets demonstrate that WP-STC with Text Center not only outperforms state-of-the-art text classification models in Accuracy, Precision, Recall and F1, but also greatly reduces the classification time. However, since different Text Centers are constructed for different datasets, the classification results for data from outside a dataset are not ideal when input to that dataset's Text Center. Our future work will therefore focus on improving the robustness of the model, so that the Text Center does not rely on the construction of a single dataset.
Improved methods for RNAseq-based alternative splicing analysis

The robust detection of disease-associated splice events from RNAseq data is challenging due to the potential confounding effect of gene expression levels and the often limited number of patients with relevant RNAseq data. Here we present a novel statistical approach to splicing outlier detection and differential splicing analysis. Our approach tests for differences in the percentages of sequence reads representing local splice events. We describe a software package called Bisbee which can predict the protein-level effect of splice alterations, a key feature lacking in many other splicing analysis resources. We leverage Bisbee's prediction of protein-level effects as a benchmark of its capabilities using matched sets of RNAseq and mass spectrometry data from normal tissues. Bisbee exhibits improved sensitivity and specificity over existing approaches and can be used to identify tissue-specific splice variants whose protein-level expression can be confirmed by mass spectrometry. We also applied Bisbee to assess evidence for a pathogenic splicing variant contributing to a rare disease and to identify tumor-specific splice isoforms associated with an oncogenic mutation. Bisbee was able to rediscover previously validated results in both of these cases and also identified common tumor-associated splice isoforms replicated in two independent melanoma datasets.

Alternative splicing has been shown to play an important role in normal cellular processes as well as in a wide range of pathogenic processes underlying many different diseases 1,2. For example, global dysregulation of splicing, as well as mutations in genes regulating splicing, such as SF3B1, have been observed in a variety of tumors 3,4. In addition, the results of genome-wide association studies (GWAS) focusing on common chronic conditions have identified a number of disease-associated variants that influence splicing, suggesting a role for alternative splicing in mediating many common diseases 5,6. Furthermore, highly penetrant variants that affect splicing have been classified as pathogenic in a number of monogenic disorders 7.

The detection of disease-relevant splice alterations is not trivial, as there are hundreds of thousands of annotated splice sites in the human genome, and there is also great potential for the emergence of novel, unannotated splice sites at countless locations in the genome. This suggests a need for robust statistical methods for detecting and quantifying differential splice events in comparative studies of health and disease. We have developed a novel statistical framework for differential splicing and splice outlier detection. This framework identifies splice events in an individual sample based on a metric evaluating the percentage of reads supporting the event, the 'percent spliced in' (PSI), which reflects evidence supporting that event beyond the range observed in a set of reference samples. The methods are implemented in a package called Bisbee designed for RNAseq splicing analysis (Fig. 1). Bisbee also provides protein-level splicing effect predictions. We validated these predictions and benchmarked our statistical methods using normal tissue samples with both RNAseq and mass spectrometry data 8.
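As a concrete reference point for the statistics that follow, here is a minimal sketch of the PSI metric itself. Bisbee models the read counts directly rather than this ratio, so the function below only illustrates the quantity being compared, not the test; the handling of zero-coverage events is our choice.

```python
import numpy as np

def percent_spliced_in(iso1_reads, total_reads):
    # PSI: fraction of reads covering a local splice event that support
    # the first isoform. Returns NaN where the event has no coverage.
    iso1 = np.asarray(iso1_reads, dtype=float)
    total = np.asarray(total_reads, dtype=float)
    return np.where(total > 0, iso1 / np.maximum(total, 1), np.nan)

print(percent_spliced_in([8, 0, 5], [10, 0, 20]))  # [0.8, nan, 0.25]
```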
Alternative splicing analysis consists of three main steps: detection, statistical comparison, and effect prediction. Here we leverage an existing tool for detection and implement new methods for the statistical analysis and effect prediction steps.

[Figure 1. Method overview. Five types of alternative splicing events are detected by SplAdder. For each event, two alternative splice isoforms are considered. Bisbee takes the read counts supporting each isoform in each sample and performs differential splicing or outlier analysis. As illustrated in the volcano plot on the left, Bisbee Diff is able to detect high-coverage events with subtle differences in percent spliced in (PSI) as well as low-coverage events with large differences in PSI. As illustrated in the center plot, the Bisbee outlier test also takes into account the differences in PSI and the coverage of the event: each dot represents a sample, with tumors on the left and normal tissues on the right; samples are sorted by outlier score within each set on the x-axis, PSI is plotted on the y-axis, and color represents the depth of coverage of the event in the sample; dots within the grey stripe pass the outlier score threshold. Bisbee also annotates protein-level effects, as can be seen in the example output on the right.]

Predicting the protein-level impact of a splice variant is critical for understanding the biological implications and potential mechanisms underlying disease states, yet most RNAseq alternative splicing analysis packages do not incorporate an effect prediction component. Splice variants may result in truncations or deletions at the protein level that cause a loss of protein function. Alternatively spliced protein isoforms may also exhibit qualitative differences in function; for example, BCL2L1 splice isoforms have opposing effects on apoptosis 27. In addition to identifying the expression of such known isoforms, it would also be possible to predict the functional consequences of novel splice isoforms using the domains impacted and other in silico approaches. Alternative splicing may also give rise to novel protein sequences in a cancer cell that could be recognized by the immune system 3,23-25.

Genomics, transcriptomics, and proteomics are being used together more often in an effort to better characterize phenotypic effects resulting from genomic alterations and pathway dysregulation 28-30. There are many existing proteogenomics pipelines that use transcriptome sequencing to generate protein sequence databases for matching mass spectra 31-35. Many of these pipelines take a comprehensive approach, using all detected splice junctions and translating them in all six reading frames 31-36. Such studies have been extremely useful for elucidating gene structure and cataloging splice junctions in specific samples 31,37. Other proteogenomics pipelines leverage transcript assembly and generate protein sequences via either three-frame translation or translation of open reading frames of the reconstructed sequences 38-40. However, transcriptome assembly is computationally intensive and not necessary for integration with mass spectrometry, as only peptides, rather than full-length proteins, are detected. We are aware of only one other proteogenomics pipeline that predicts protein sequences from splice events rather than from junctions or assembled transcripts; however, it does not detect events involving novel splice sites 41. Our study utilizes splice event-level analysis as it is more amenable to comparisons between samples and facilitates interpretation.
In order to benchmark Bisbee's methods against similar approaches, we developed a 'truth set' of splice events validated through the detection of corresponding protein isoforms. This truth set was generated using mass spectrometry and RNAseq data on a set of normal tissues from Wang et al.8. We identified several other splice variant analysis tools to consider for benchmarking against Bisbee; however, only a few of them provide utilities for predicting the effect of splice alterations at the protein level, which is necessary for use with our mass spectrometry truth set. Using real data with complementary measurements provides a more robust framework for benchmarking and validation. Our truth set takes advantage of the naturally occurring differences in splicing between different tissues 30 to evaluate Bisbee's differential splicing and splice outlier modules.

Results

Predicted splice isoforms are detected at the protein level. In order to validate the existence of proteins/peptides corresponding to splice variants, we leveraged a dataset from Wang et al., which includes paired RNA-seq and proteomics data from normal tissues 8. In this validation dataset, SplAdder identified 268,791 total splice events, of which 125,683 were predicted by Bisbee to be protein coding. The mass spectrometry searches identified 182,662 unique peptides (Supplemental Fig. 1). Protein evidence of alternative splicing, defined as having at least one peptide supporting each of the two isoforms, was detected for 1,587 of the protein-generating events, including 1,082 generating novel sequences (Fig. 2). The event categories that generate longer stretches of altered sequence have higher proportions of protein-level detection, as expected (Supplemental Fig. 2). We observed 330 events showing tissue-specific detection patterns at the protein level, and these were used for benchmarking and validation.

Bisbee Diff more accurately detects differentially spliced isoforms. The beta binomial differential splicing test implemented in Bisbee has one parameter that requires tuning, ω_M, a constraint on the parameter ω that controls the overall shape of the beta distribution, particularly in its tails. In order to avoid overfitting, we reserved the tissue-specific protein isoforms dataset from Wang et al.8 to compare the accuracy of the differential splicing methods; GTEx was used for parameter optimization and threshold selection. We compared the distribution of the test statistic for the Bisbee Diff test between sets of samples from the same tissue versus different tissues using different values of the ω_M parameter. The percentage of events passing a given threshold in the 'different' versus the 'same' comparison is used as an indicator of the specificity of the test, while the percentage of events in the different-tissue comparisons passing the threshold is used as an indicator of sensitivity. Setting the ω_M parameter to 200 and using a log likelihood ratio (LR) threshold of 8 provides optimal enrichment of splice events detected as different between different tissues compared to splice events detected as different between samples from the same tissue (Supplemental Fig. 2a). We identified 281 instances of protein-expression-confirmed isoform switches over six pairwise tissue comparisons, representing 196 unique isoform switch events. For comparison, SplAdder's test module was run as an example of a program that uses a generalized linear model approach.
As a simple approach, a t-test on the PSI values was also pursued, both using all of the PSI values regardless of depth and using only PSI values with a sequencing read depth greater than 10 at the event. To evaluate these methods, we compared the total number of events passing a given threshold to the number of protein-confirmed events passing the threshold. The Bisbee Diff method consistently found a higher enrichment of confirmed events among the total events passing a threshold (Fig. 3A, Supplemental Table 1). In order to see how the magnitude of the PSI differences and the read depths of the events influence the performance of each differential splicing test, we made a volcano plot of the brain versus small intestine comparison (Fig. 4).

Bisbee outlier more accurately detects splice outliers. The Bisbee outlier detection method parameter β_M was optimized using GTEx data. The percentage of outlier scores passing a threshold for models trained on the same tissue was compared to the percentage passing for matching tissue models.

[Figure 2. Splice event detection. Pie charts show the breakdown of splice events by their mass spectrometry evidence, with "none" indicating that no peptides map to either isoform, "oneIso" indicating that at least one peptide maps to one of the two isoforms of a known event, "refOnly" indicating that at least one peptide maps to the reference isoform of a novel event, "altOnly" indicating that at least one peptide maps to the alternative (novel) isoform of a novel event, and "bothIso" indicating that at least one peptide maps to each of the two isoforms, for known (A) or novel (B) events. Breakdown of splice events by event type and predicted protein-level effect for all predicted known isoform events (C), all predicted novel isoform events (D), known isoform events with both isoforms detected (E), and novel isoform events with both isoforms detected (F).]

We found that a β_M value of 80 provides the best enrichment of different-tissue outliers with a log likelihood (LL) cutoff of 10 (Supplemental Fig. 2b). We used these values for benchmarking on the Wang et al. dataset with matching proteomics data 8. We used a set of GI tissues as the reference set and detected outliers in three other tissues, identifying 140 outlier events across the three tissues, which represent 134 unique outlier events. Since we are not aware of another tool that is able to detect splice outliers and generate predicted protein sequences, we implemented two simple methods using the distribution of PSI values in the reference dataset: the first uses the median absolute deviation (mad) and the second the interquartile range (iqr) of the PSI values. For both of these methods we performed the analysis both using PSI values for all data points and using only PSI values for data points with a depth greater than 10. The Bisbee outlier method detected more proteomics-confirmed events for similar numbers of total events passing the same score threshold (Fig. 3B, Supplemental Table 2).

Case study: detection of a splice event in rare disease. In order to examine the utility of the Bisbee package for research and clinical applications, we analyzed a disease-causing splice mutation in the nuclear-encoded mitochondrial methionyl-tRNA formyltransferase (MTFMT) 42-44. We previously identified a homozygous mutation
(c.626C>T) in the MTFMT gene in three children from two unrelated families (ClinVar accession VCV000039827.4) with Leigh syndrome and combined oxidative phosphorylation (OXPHOS) deficiency. The MTFMT mutation c.626C>T in the coding region results in a Ser209Leu (S209L) amino acid substitution, which is likely a non-pathogenic event. However, c.626C>T is predicted to generate a splicing suppressor that results in skipping of exon 4, leading to a frame shift and truncation of the protein (p.R181SfsX5) 42,43,45. The c.626C site, 20 base pairs (bp) upstream of the 3′ end of exon 4, is predicted to eliminate two overlapping exonic splicing enhancers (ESE) (GTCAAG and TCAAGA) and generate an exonic splicing suppressor (ESS) sequence (GTTGTT) 46,47.

To confirm the expected exon skipping and truncation, we used Bisbee to perform differential splicing analysis of RNA sequencing data obtained from primary fibroblast cells from three patients carrying the homozygous c.626C>T mutation and five unaffected controls. We found that the MTFMT exon 4 skipping event was the 14th highest scoring differentially spliced event. Though the LR (7.999) was just barely below the optimal threshold determined in the GTEx analysis, its high rank makes it likely to be considered in a candidate variant analysis. It is not surprising that the event did not quite pass the threshold, as coverage of the event in the cases was only 10, 6 and 2 reads. If we use the protein effect predictions to filter down to events predicted to generate novel sequences that are expressed more highly in the cases than in the controls, the MTFMT exon 4 skip is the highest scoring of these events (Supplemental Fig. 3). When trying to discover the causal variant in a rare disease, there is often only one affected case available for sequencing, so we also ran Bisbee outlier analyses on each of the three cases to illustrate the single-case scenario.

[Figure 3. (a) The number of events with protein expression evidence of differential splicing is plotted against the total number of events passing the threshold for four differential splicing methods: beta binomial (bbd, black), SplAdder's test (sp, red), t-test on all PSI (tt, blue), and t-test on PSI with depth > 10 (tt-d10, cyan). (b) The number of mass spectrometry confirmed outlier events is plotted against the total number of events passing the threshold for five methods: Bisbee outlier (black), median absolute deviation (red), median absolute deviation with depth > 10 (magenta), interquartile range (blue), and interquartile range with depth > 10 (cyan).]

Since it is desirable to have a large set of reference samples for outlier analysis, but there are technical differences in sequencing between GTEx and this dataset, we performed the outlier analysis both using GTEx fibroblasts as the reference samples and using the five unaffected fibroblast samples from the differential splicing analysis as the reference samples, and used the minimum score of the two analyses. The Bisbee outlier scores for the MTFMT exon 4 skip in the cases were 4.8, 10.6, and 3.4, ranking 386, 16, and 1746 among all events (Supplemental Fig. 3). When only considering events generating novel protein sequences with increased expression in the cases, the MTFMT event ranked 145, 2, and 587, respectively, in each of the three cases.
Despite the very low coverage of the event in the cases, Bisbee was still able to rank the event in the top 1% of all events in all three cases. The Bisbee annotation output for the MTFMT exon 4 skipping event is shown in Table 1. Each event is assigned a unique identifier (event_jid) using the contig, strand, and junction coordinates to facilitate comparison of results between datasets. The effects at the transcript (event_cat) and protein level (effect_cat) are described, as well as whether the splice event is found in Ensembl transcripts (aa_change_type). The sequence headers of the two isoforms are provided in order to locate the protein sequences in the fasta output. The sample group with increased expression of the isoform labeled "alt" is indicated (group_increased_alt). The location within the protein sequence as well as the altered amino acid sequence fragments are also provided. These results confirm the expected R181SfsX5 frame shift truncation.

Application to the TCGA uveal melanoma dataset. We selected the TCGA uveal melanoma dataset as an example application, as a recurrent mutation in the splicing factor 3B1 gene (SF3B1) has previously been shown to cause aberrant 3′ splice site usage 48,49. To identify tumor-specific splice events, we performed Bisbee outlier analysis using the complete GTEx tissue library excluding testis tissue samples; testis was excluded as it may express developmentally restricted proteins not found in normal somatic tissues 39,40. We also used the TCGA normal samples as a reference and took the minimum score of the two analyses. Examining the total number of splice outliers per patient, we observed a large increase in alternative 3′ splice site outliers with SF3B1 mutation, as well as a significantly increased exon skipping, intron retention, and mutually exclusive exon outlier burden (Fig. 5A, rank sum p value < 0.01). We also ran Bisbee Diff to identify differentially spliced events between SF3B1 mutant and wild-type tumors. We found 19,950 differentially spliced events, of which 72% were mutually exclusive exons and 15% were alternative 3′ splice sites. The alternative 3′ differentially spliced events had a higher Bisbee Diff LR and a greater overlap with events also observed in the outlier analysis (Fig. 5B). Alsafadi et al. previously identified differentially spliced events between SF3B1 mutant and wild-type tumors in an independent dataset and selected seven of these events for validation in isogenic cell lines using a mini-gene splice assay 49. All seven of these events were detected as differentially spliced by Bisbee Diff (Table 2). In order to identify protein isoforms that may be specific to SF3B1 mutant tumors, we selected splice events common to the differential splicing and outlier analyses (494) and then identified those predicted to result in altered protein sequence (321). These events are primarily alternative 3′ events causing insertions or frame disruptions, resulting in novel protein isoforms in the uveal melanoma tumors (Fig. 5C).

Replication of common melanoma-associated splice events in an independent dataset. In addition to observing splice events associated with SF3B1 mutation, we also observed splice events common across the TCGA uveal melanoma cohort, irrespective of SF3B1 mutation status.
In order to validate this finding, we performed the Bisbee splicing analysis on an independent melanoma cohort, consisting of 37 patients with BRAF wild-type recurrent tumors, including 13 cutaneous, 7 mucosal, 10 uveal, 5 acral, and 1 melanoma of unknown primary. We performed the Bisbee outlier analysis using both GTEx excluding testis as the reference and a set of 28 normal tissues or cell lines sequenced at the same institution as the reference, and took the minimum score of the two analyses. We compared the number of patients passing the outlier threshold for each event between the two datasets. We identified 23 splice events with 20 or more tumors meeting the outlier criteria in the TCGA dataset and found that 10 of these events were also detected as outliers in at least one of the SU2C tumors (Fig. 6A). When only considering events with predicted protein sequence changes, ten events met the outlier criteria in 20 or more of the TCGA tumors, and nine of these events were detected as outliers in at least one of the SU2C tumors (Fig. 6B). These nine events identified in both datasets include five intron exclusion events in GAPDHS predicted to result in novel sequence in the reference samples. There are also an alternative 5′ splice site in EXOC3 and intron retention events in TBL1X, PTPRH, and PALM that are predicted to result in novel sequence in the tumors (Fig. 6C). The intron retention event in SLC24A5 was not detected by SplAdder in the SU2C dataset.

Discussion

We have developed a new package for splicing data analysis called Bisbee. Bisbee provides functions for differential splicing analysis, splicing outlier analysis, and protein effect prediction. Using a dataset with matched RNAseq and mass spectrometry data on normal human tissues, we constructed a truth set to benchmark differential splicing and outlier methods, including Bisbee. We found that Bisbee's differential splicing approach had substantially better enrichment of proteomics-confirmed events than the other approaches. Bisbee's outlier test also outperformed other outlier approaches. We demonstrated the utility of the approach in both a rare disease and a cancer context. The Bisbee package goes beyond many other RNAseq-focused splicing analysis packages by generating protein sequences for the observed splice events. While there are other tools for transcriptomic-proteomic integration, they generally focus on generating comprehensive databases for mass spectra matching rather than on generating high-confidence protein sequence predictions from the RNAseq data. By using Bisbee to generate a database of patient-specific protein isoforms from RNAseq data, and then using mass spectrometry to detect which ones have protein-level evidence, one could identify high-confidence disease-specific protein isoforms for further characterization. The protein domains impacted and other downstream functional predictions from the protein sequences enable further insight into the impact of splicing alterations and can identify splicing-derived pathogenic variants that would go undetected by DNA sequencing alone. The Bisbee pipeline currently relies on SplAdder for splice event detection 14. While the work presented here, as well as previous work, demonstrates that SplAdder is a robust tool for splice event detection, in the future we plan to adapt Bisbee to work with input from other splice detection tools and benchmark against SplAdder.
Bisbee is also limited to the types of splice events detected by SplAdder; event types not currently detected include alternate first exons, alternate terminal exons, and complex events involving more than one type of alteration. Another limitation of the current approach is that it relies on short-read sequencing and does not attempt to assemble a full-length transcript, focusing instead on local changes in the transcript and protein sequences. An expansion of the approach to incorporate long-read data would be useful for enabling full-length sequence analysis. Currently, Bisbee offers only two statistical tests: comparison between two groups and outlier detection compared to a reference set. Future work may extend the methods to test for associations with continuous variables or other more complex experimental designs.

The differential splicing test in Bisbee uses a novel beta binomial model to test for differences in PSI. Most differential splicing tools, including the SplAdder test included in our evaluation, test for differences in the expression level of the splice isoform while controlling for the overall expression level of the gene. Many of the events that are highly significant in SplAdder's test have relatively small differences in mean PSI between the two groups (Fig. 4B). In order to identify events with more substantial differences in mean PSI between the two groups, one may directly test for a difference in PSI values using a t-test. However, we have shown that the beta binomial model implemented in Bisbee better addresses the relationship between PSI measurement accuracy and depth: Bisbee is able to detect both low-coverage events with dramatic differences in PSI and high-coverage events with small differences in PSI (Fig. 4A).

Bisbee is the second splicing tool that we are aware of to offer an outlier detection test. This test is intended for identifying splice isoforms unique to an individual patient compared to a set of reference samples. We were not able to compare directly to the other splice outlier detection tool (leafCutterMD), as it does not report splice events in a way that is amenable to protein sequence generation. The case studies we presented illustrate the utility of the outlier approach in both rare disease and cancer research. Currently, the collective use of whole-exome sequencing (WES), overlaid with RNA-seq data, has enhanced the identification of disease-causing splice mutations and has significantly improved the diagnostic rate of rare diseases. We provided a case study of a rare disease in which three patients with known pathogenic splice variants were available. We performed differential splicing analysis, and the likely pathogenic or causal event was the highest scoring of those predicted to generate a novel amino acid sequence, illustrating how the protein-level annotation can aid variant prioritization. Outlier analysis is an important approach in studying rare disease, as often no more than one case is available. These cases were difficult to detect by outlier analysis alone due to the very low coverage at the event locus. However, it is conceivable that the Bisbee output could still help identify the causal variant when examined alongside candidate variants from WES, exploiting knowledge of the phenotype and underlying pathways. Previous work has suggested that splicing dysregulation in cancer may be a greater source of tumor-specific antigens than somatic point mutations 3,25.
Application of the Bisbee outlier test to cancer patient samples may enable the discovery of tumor-specific splicing-derived neoantigens, which could be therapeutic or vaccine targets. Splice events that are both outliers compared to normal tissues and differentially spliced between SF3B1 mutant and wild-type tumors are promising candidates as tumor-specific neoantigens, as many of these are predicted to generate novel sequences through frame disruptions and insertions in the tumor-specific isoforms (Fig. 5C). SF3B1 mutant uveal melanomas have a better prognosis than SF3B1 wild-type 50. We hypothesize that the tumor-specific splice isoforms associated with SF3B1 mutations may act as antigens enabling better immune control of the tumors. The protein sequence output from Bisbee would facilitate in silico MHC binding prediction to further investigate the potential immunogenicity of these splice-variant-generated neoantigens.

We also detected splice outliers common to uveal melanoma regardless of SF3B1 mutation status, and these results showed strong concordance in an independent melanoma cohort. Interestingly, events with predicted protein sequence impact showed stronger concordance than those with no predicted impact (Fig. 6A,B). Nine of the ten events identified as common splice variant outliers with protein impact in the TCGA uveal melanoma dataset were also detected in the SU2C melanoma dataset. These melanoma-associated splice variants included several intron retention events in GAPDHS, with the tumors having lower expression of the intron-retained transcripts compared to the normal reference tissues. GAPDHS is typically expressed in sperm but not in normal somatic tissues, and has previously been shown to be expressed in melanoma 51. We hypothesize that we see these events in GAPDHS due to expression of the immature transcript in the normal tissues. Four melanoma-associated splice events were identified that are predicted to lead to frame disruptions in the tumors, resulting in novel protein sequence; these events are the most promising for further investigation as candidate targets in melanoma.

In summary, the Bisbee package is able to predict protein sequences of both known and novel protein isoforms; it provides a more statistically powerful differential splicing test than existing methods; and it provides an outlier detection approach, which will be useful in a number of different contexts, including cancer and rare disease. The Bisbee package is publicly available and should enable the robust detection of aberrant splicing.

Methods

Description of datasets used. For initial evaluation and optimization of the differential and outlier splicing tests implemented in Bisbee, we compared the distribution of the likelihood ratios between tests involving samples from the same tissue and samples from different tissues. For this analysis, GTEx SplAdder results were downloaded from GDC (https://gdc.cancer.gov/about-data/publications/PanCanAtlas-Splicing-2018) 3. For the differential splicing evaluation, 50 random pairs of tissues were selected, six random samples were selected from each tissue, and 100,000 events were selected for each tissue pair. The beta binomial differential splicing test was applied to the samples grouped into two groups of three replicates within each tissue, as well as between the pairs of different tissues.
For the outlier evaluation, 12 tissues with at least 100 samples were selected; 80 samples were randomly selected for fitting the model and 20 for determining the outlier scores. For further evaluation and benchmarking, we identified a dataset where RNAseq and mass spectrometry data were available on the same set of tissues 8. We selected a total of seven tissues, including four lower GI tissues (colon, duodenum, rectum, and small intestine) to serve as the reference set for the relevant outlier analyses and three diverse tissues for comparison purposes (brain, ovary, and tonsil). RNAseq reads were downloaded from ArrayExpress (E-MTAB-2836) and aligned to the human reference genome (GRCh38) using STAR 2.7.3a in two-pass basic mapping mode, and splice events were detected using SplAdder v2.3.0 with default parameters. Two databases were constructed for searching in order to separately assess the FDR for known and novel sequences: the first consisted of Bisbee prot known splice isoforms as well as canonical sequences from Ensembl, and the second included only novel sequences. The LC-MS/MS spectra for the 7 tissue types were downloaded from the EBI PRIDE database (PXD010154). The spectra were searched using Mascot (Matrix Science, London, UK; version 2.6.0) through Proteome Discoverer 2.4 (Thermo Fisher Scientific, Waltham, MA), allowing oxidation (Met) as a dynamic modification and carbamidomethylation (Cys) as a static modification. A maximum of two missed cleavages was allowed, with a fragment mass tolerance of 0.02 Da and a precursor mass tolerance of 10 ppm. FDR thresholds for PSMs, peptides and proteins were set at 0.01, with a minimum of one peptide required for protein identification. Peptides that mapped to protein products of more than one gene were excluded from downstream analysis. Peptides that matched exclusively to only one protein isoform sequence were taken as evidence for that isoform. Events where only one isoform was detected in one tissue and the other isoform was detected in a different tissue were taken as protein-level evidence of tissue-specific splicing.

For an example use case, three Leigh syndrome and five unaffected control fibroblast cell lines were established from the study participants. RNA was extracted, sequenced by Illumina paired-end sequencing, and aligned to the reference genome using STAR. Please see the supplementary methods for details of the cell culture, sequencing, and alignment. We used an independent set of melanoma patients for comparison with the TCGA melanoma dataset, referred to here as the SU2C melanoma cohort. RNA was extracted from core needle biopsies, sequenced using Illumina paired-end sequencing, and reads were aligned to the reference genome using STAR. Please see the supplementary methods for additional details on the cohort, sequencing, and alignment.

Splice event protein sequence prediction. In order to generate protein sequences corresponding to each splice event, we use known transcript sequences from Ensembl as a starting point. We first determine whether each isoform of the event exists in any known transcript by comparing the event junction coordinates to the exon coordinates (retrieved using the python package pyEnsembl) of the protein-coding transcripts for that gene. Each transcript is categorized as matching isoform one, isoform two, or neither for the splice event.
For each transcript matching isoform one, the isoform one junctions are removed and replaced with the isoform two junctions to make the altered sequence, and vice versa for transcripts matching isoform two. The region of altered amino acids is found by aligning the two sequences. If the altered amino acid sequence is not found in any of the canonical sequences, the event is categorized as novel. If no transcript is found that matches either isoform, no sequence is generated and the event's effect is categorized as unknown. In order to narrow down to one pair of protein sequences per event, the sequences are prioritized as follows: (1) pair of known transcripts, (2) longest altered amino acid sequence, (3) longest starting isoform sequence.

Differential splicing test (Bisbee diff). Read counts for a splice variant are modeled as following a beta binomial distribution: the number of reads supporting the first isoform is the number of successes, the total number of reads covering the event is the number of trials, and the expected PSI (percent spliced in) value across the samples of interest is represented by the beta distribution. The beta distribution is reparameterized as ψ = α/(α + β) and ω = α + β. ψ represents the expected value of the beta distribution and ω affects the sharpness of the distribution; ω is more intuitive than the actual variance of the beta distribution, which is a much more complex function of α and β. In the one-group model, all of the samples are assumed to have the same underlying distribution of PSI values, and maximum likelihood estimates are made for ψ and ω. In the models below, I_n indicates the number of reads supporting the first isoform for sample n, T_n indicates the total number of reads supporting either isoform for sample n, and N_A indicates the total number of samples:

L_1 = Σ_{n=1}^{N_A} log BetaBin(I_n; T_n, ωψ, ω(1 − ψ))

In the two-group model, it is assumed that the two groups have different expected PSI values but similar distribution shapes, so the two groups have different values of ψ (ψ_1 and ψ_2) but the same ω_G:

L_2 = Σ_{g=1,2} Σ_{n∈g} log BetaBin(I_n; T_n, ω_G ψ_g, ω_G (1 − ψ_g))

For both the one-group and two-group models, we find the values of the parameters that maximize the sum of the log probability densities across the data points. In fitting the models, we use the logistic transformations

ω = ω_M / (1 + e^a) + 2,  ψ = 1 / (1 + e^b)

to constrain ω to be greater than 2 and less than ω_M and to constrain ψ to be between 0 and 1; a and b are the parameters that are optimized in order to ensure that the values of ψ and ω remain within the constraints. The difference in the sum of the log probability densities between the two models is used to identify events that have different underlying PSI distributions in the two groups.
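A minimal sketch of the two-group likelihood-ratio test just described, using scipy.stats.betabinom for the beta binomial density, is shown below. This is an illustration of the model under the constraints above, not Bisbee's implementation; the function names are ours, and the optimizer choice is SciPy's default rather than whatever Bisbee uses.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

W_M = 200.0  # omega_M, the tuned constraint from the GTEx analysis

def _neg_ll(params, I, T, groups=None):
    # params = [a, b] (one group) or [a, b1, b2] (two groups);
    # logistic transforms keep 2 < omega < omega_M and 0 < psi < 1.
    omega = W_M / (1.0 + np.exp(params[0])) + 2.0
    ll = 0.0
    for j, b in enumerate(params[1:]):
        psi = 1.0 / (1.0 + np.exp(-b))
        idx = slice(None) if groups is None else groups == j
        ll += betabinom.logpmf(I[idx], T[idx],
                               omega * psi, omega * (1.0 - psi)).sum()
    return -ll

def diff_lr(I, T, groups):
    # Log likelihood ratio between the two-group and one-group fits.
    I, T, groups = map(np.asarray, (I, T, groups))
    ll_one = -minimize(_neg_ll, [0.0, 0.0], args=(I, T)).fun
    ll_two = -minimize(_neg_ll, [0.0, 0.0, 0.0], args=(I, T, groups)).fun
    return ll_two - ll_one  # compare to the tuned LR threshold (~8)

# Toy example: group 0 with high PSI, group 1 with low PSI.
I = np.array([18, 17, 19, 3, 2, 4])
T = np.full(6, 20)
print(diff_lr(I, T, np.array([0, 0, 0, 1, 1, 1])))
```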
Splicing outlier test (Bisbee outlier). As in the two-group test, the read counts are assumed to follow a beta binomial distribution. The beta distribution parameters α_R and β_R are found that maximize the sum of the log probability densities across a set of reference samples; here I_n indicates the read count supporting the isoform with mean PSI < 0.5 across the samples. Nelder-Mead optimization 53 (or BFGS if Nelder-Mead fails) is used to find the maximum likelihood values of α_R and β_R. Reparameterizations analogous to those above are used to constrain α_R to be between 1/β_M and 1 and β_R to be between 1 and β_M, so that the beta distribution is strictly decreasing; the a and b parameters are optimized to find the maximum likelihood values of α_R and β_R within the constraints. If zero reads supporting the minor isoform are detected in the reference sample set, α is set to one and β is set as shown below. For each sample of interest, the log cumulative probability that the major isoform read counts are less than or equal to those observed, given the total read depth and the beta distribution fit to the reference samples, is used as the outlier score.

Implementation of other differential splicing methods. For benchmarking, we selected SplAdder's differential splicing test with default parameters as a representative of the generalized linear model approach. We also wanted to include a simple method directly testing differences in PSI values. While a non-parametric test would be more appropriate, as PSI values are unlikely to be normally distributed, we would not have any power to detect differences with only three or four replicates per group. Instead, we used a two-sample t-test on the PSI values, with the more conservative assumption of unequal variance. SplAdder only reports PSI values for samples with a coverage of at least 10 for a given event, though PSI values can still be calculated from the isoform one and isoform two coverages. We applied the t-test both to all PSI values and treating data points with depth less than 10 as missing data.

Implementation of other splicing outlier detection methods. We implemented two simple methods using the distribution of PSI values for comparison. With x_s the PSI of sample s and x_R the array of PSI values of the reference samples, the first scores a sample by its deviation from the reference median normalized by the median absolute deviation,

score_mad(s) = |x_s − median(x_R)| / mad(x_R)

and the second normalizes by the interquartile range,

score_iqr(s) = |x_s − median(x_R)| / iqr(x_R)

We applied both of these methods both using all of the PSI values and treating data points with depth less than 10 as missing data.

Ethics approval and consent to participate. The study protocol and written informed consent for the use of human fibroblast cells from the MTFMT cases and controls were approved by the Western Institutional Review Board (WIRB; study number 20120789). The SU2C melanoma biopsies were collected through a clinical trial (NCT02094872). Ethics review boards at all participating institutions approved the study, which was conducted in accordance with the Declaration of Helsinki and Good Clinical Practice guidelines. All patients provided written informed consent.

Data availability. The RNA sequencing data of the MTFMT cases and controls will be deposited in dbGaP. The SU2C RNA sequencing data are available in dbGaP under accession phs001786.v1.p1. The Bisbee package is available at https://github.com/tgen/bisbee.
A Noncontact Force Sensor Based on a Fiber Bragg Grating and Its Application for Corrosion Measurement

A simple noncontact force sensor based on an optical fiber Bragg grating attached to a small magnet has been proposed and built. The sensor measures the force between the magnet and any ferromagnetic material placed within a few millimeters of the sensor. With the sensor maintained at a constant standoff distance, material loss due to corrosion increases the distance between the magnet and the corroded surface, which decreases the magnetic force. This decreases the strain in the optical fiber, shifting the reflected Bragg wavelength. The measured shift for the optical fiber used was 1.36 nm per Newton. Models were developed to optimize the magnet geometry for a specific sensor standoff distance and for particular corrosion pit depths. The sensor was able to detect corrosion pits on a fuel storage tank bottom with depths in the sub-millimeter range.

Introduction

In the last decade, significant progress has been made towards the development of fiber Bragg grating (FBG) sensors, which have the advantages of being immune to electromagnetic interference, easy to multiplex, remotely accessible, stable for long-term measurements, and easy to cable. When an FBG is embedded in or bonded to an appropriate host material, one is able to monitor changes in several physical parameters. Any strain in the fiber at the Bragg grating will cause a shift of the reflected Bragg wavelength, which can be precisely detected in a number of ways [1]. Several sensors have been built to measure temperature [2], humidity [3], pressure [4], vibration [5], deformation and displacement [6], electric current [7] and magnetic field [8-10]. Similar approaches have also been used to build sensors that measure mechanical force by bonding FBGs on cantilever structures [11-13]. FBG sensors have been found useful in the medical area [14-16] and in the construction industry, monitoring corrosion in reinforced steel bars [17,18]. More recently, FBG sensors have been used in the oil and gas industry to monitor corrosion in pipelines, offshore platforms, and fuel storage tanks [19]. Failures due to corrosion can be responsible for over 25% of the total failures in the oil and gas industry [20]. In order to ensure proper flow and storage of oil and gas, conforming to environmental laws, regulations and standards, the need for nondestructive techniques that provide reliable corrosion monitoring has never been higher.

We used a different approach that does not rely on bonding the Bragg grating to a specific substrate to develop a contactless force sensor; the sensor employs one FBG and a small permanent magnet attached to the tip of the optical fiber. Analytical and finite element models were developed to optimize and calibrate the sensor, and tests were performed with both fabricated and natural corrosion pits present in samples of a fuel storage tank bottom. The developed sensor can detect corrosion of sub-millimeter depth and has unique properties: it is passive, with no need for local power sources, and can be used to monitor corrosion in remote sites and in places that are difficult to access.

Operational Principle

If a mechanical strain ε is induced in an FBG with Bragg wavelength λ_B, the corresponding wavelength shift Δλ_B can be written as

Δλ_B = λ_B (1 − p_e) ε    (1)

where p_e is the effective strain-optic coefficient of the fiber. In the above expression, the temperature effect has been omitted.
It is possible to successfully compensate for this effect by adding a second FBG [12]. Expressing the mechanical strain in terms of the force F_m, the cross-sectional area a of the optical fiber, and its Young's modulus E, the wavelength shift can now be stated as

Δλ_B = λ_B (1 − p_e) F_m / (aE)    (2)

Figure 1 shows the optical power spectrum measured with a commercial FBG interrogation system (Micron Optics sm125-200, Atlanta, GA, USA). Side-lobe peaks appear because the Bragg grating used was not apodized [21]. It had an at-rest (dotted line) wavelength peak at 1,541.20 nm. Strains were induced in the FBG by fixing one end to a supporting structure and pulling the loose end with a dynamometer (Lutron Force Gauge FG-20 kg, Taipei, Taiwan) until the dynamometer read 0.5 N, 1.0 N and 1.5 N. These strains increased the reflected wavelength peak to 1,541.88 nm (dash-dotted line), 1,542.55 nm (dashed line), and 1,543.24 nm (solid line), respectively. These simple measurements, after linear fitting, indicate an FBG force sensitivity of 1.36 nm/N, which is in good agreement with the values reported in the literature for a standard single-mode optical fiber [13,22].

To turn the FBG into a contactless force sensor, we attached a small cylindrical magnet, magnetized along its axis, to the loose end of the optical fiber through a tiny hole in the center of the magnet: the tip of the optical fiber was inserted into the hole and firmly glued using a low-viscosity adhesive (Loctite 496, Henkel, São Paulo, Brazil). We used NdFeB and SmCo magnets (Logimag, Hong Kong, China) a few millimeters in diameter and a few millimeters long; such small magnets are routinely employed in optical isolators and as rotors for quartz watches. As shown in the schematic in Figure 2a, we encapsulated the FBG and magnet in a nonmagnetic Teflon cylinder with a gliding cavity. The Teflon cylinder serves three purposes: it protects the Bragg grating, minimizes friction, and keeps unwanted ferromagnetic parts away from the magnet. A plastic sleeve eliminates potential upward motion of the magnet and FBG, so no shear, compression, or bending can affect the sensor device. We glued the upper part of the optical fiber to a support using the same low-viscosity adhesive.
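Using only the numbers quoted above, here is a minimal sketch of converting a measured Bragg wavelength shift into force via the calibrated sensitivity, with the tensile limit from Equation (2) as a sanity check. Treating the response as exactly linear over the full range is our simplifying assumption.

```python
SENSITIVITY_NM_PER_N = 1.36   # measured FBG force sensitivity, nm/N

def force_from_shift(delta_lambda_nm):
    # Invert the calibrated linear response of the sensor.
    return delta_lambda_nm / SENSITIVITY_NM_PER_N

# Tensile-limit check using the Eq. (2) quantities from the text.
E = 90e9          # Young's modulus of the fiber, Pa
a = 1.23e-8       # fiber cross-sectional area, m^2 (125 um diameter)
eps_max = 0.012   # maximum strain (1.2%)

print(force_from_shift(10.0))  # ~7.4 N, the largest force applied in the lab
print(eps_max * a * E)         # ~13 N theoretical maximum force
```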
Considering the objective of assessing the influence of the magnet geometry on the proposed sensor, it is useful to analyze the product of the magnetic field and the magnetic field gradient generated by the magnet as a function of the radius and length of the magnet. The axial magnetic field outside a cylindrical magnet of radius R and length L, magnetized in the direction of its axis, as a function of the axial distance z from the magnet face, is given by [25]:

H(z) = (M_0/2) [ (z + L)/√((z + L)² + R²) − z/√(z² + R²) ] (6)

where M_0 is the magnetization of the permanent magnet. Differentiating Equation (6) with respect to z yields:

∂H/∂z = (M_0/2) [ R²/((z + L)² + R²)^(3/2) − R²/(z² + R²)^(3/2) ] (7)

Both expressions are approximations because they do not take into account the hole in the magnet center. Nevertheless, the product of the two expressions gives rough estimates of the magnet radius and length that maximize the magnetic force at a specific distance. Figure 3a shows the product of the magnetic field and the magnetic gradient versus the magnet radius when the magnet is placed at three different distances (0.5 mm, 1.0 mm, and 1.5 mm) from the ferromagnetic body. The magnet length used was 5.0 mm and M_0 = 800 kA/m. Figure 3b shows the product of the magnetic field and magnetic gradient versus the magnet length. The magnet radius used was 1.5 mm, and the calculations were made at the same three distances. It can be noted that the magnet radius plays an important role in the amplitude of the field-gradient product, especially at small distances from the magnet, whereas the magnet length has only a small influence on the product for lengths larger than 5.0 mm; at a length of 5.0 mm, the product already reaches 95% of its maximum value. A practical constraint concerning the magnet size is related to its weight. For example, the mass of a magnet with a 1.5 mm radius and 5.0 mm length is 0.3 g. This corresponds to a weight of about 3 mN. In order to neglect the effect of gravity, the smallest magnetic force between the sensor and the ferromagnetic component must be much larger than this value, which limits the maximum distance the sensor is able to measure with that specific magnet. Another practical limitation is the maximum tensile strength of the optical fiber used. A theoretical maximum force of about 13 N was obtained from Equation (2), using a maximum strain of 1.2% and typical optical fiber parameters, E = 90 GPa and a = 1.23 × 10^−8 m² (125 μm optical fiber diameter). In the laboratory, we were able to apply up to 7.4 N (Δλ = 10 nm) to the optical fiber without breaking it. This limit has to be taken into account when choosing the sensor magnet and standoff distance. Sensor Calibration Based on the previous analysis, we built FBG sensors using magnets that were 5.0 mm long with radii equal to 1.5 mm and 1.0 mm (sensors A and B, respectively). We also tested magnets that were 10 mm long with radii equal to 1.5 mm and 1.0 mm (sensors C and D, respectively). The FBG sensor was calibrated in terms of force by sliding it towards a 6.7 mm thick, 40 mm × 20 mm low-carbon steel plate while measuring the attractive force with an attached dynamometer. The measurements were performed on a linear track with a linear actuator with 0.1 µm resolution (Zaber T-NA08A50, Vancouver, Canada), as shown in Figure 4. The average deviation between the experimental values and Equation (2) was approximately 0.020 nm for both sensors. Thus, according to Figure 5, the experimental sensitivity of the FBG magnet force sensor is 1.36 ± 0.02 nm/N.
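The geometry analysis above lends itself to a short numerical sketch. The Python program below implements Equations (6) and (7), scans the field-gradient product against magnet radius as in Figure 3a, and repeats the Equation (2) maximum-force check; all numerical values are the ones quoted in the text.

```python
# Sketch of the magnet-geometry analysis of Eqs. (6)-(7): axial field of an
# axially magnetized cylinder and the field-gradient product that the force
# in Eq. (5) is proportional to. All numerical values are quoted in the text.
import numpy as np

M0 = 800e3  # magnetization [A/m]

def H_axial(z, R, L):
    """Eq. (6): axial field at distance z from the face of the magnet."""
    return 0.5 * M0 * ((z + L) / np.sqrt((z + L) ** 2 + R ** 2)
                       - z / np.sqrt(z ** 2 + R ** 2))

def dH_dz(z, R, L):
    """Eq. (7): derivative of Eq. (6); negative, since the field decays with z."""
    return 0.5 * M0 * (R ** 2 / ((z + L) ** 2 + R ** 2) ** 1.5
                       - R ** 2 / (z ** 2 + R ** 2) ** 1.5)

# Radius scan behind Figure 3a: |H * dH/dz| at three standoff distances
L_mag = 5.0e-3
radii = np.linspace(0.2e-3, 4.0e-3, 400)
for z in (0.5e-3, 1.0e-3, 1.5e-3):
    product = np.abs(H_axial(z, radii, L_mag) * dH_dz(z, radii, L_mag))
    print(f"standoff {z * 1e3:.1f} mm: product peaks near "
          f"R = {radii[np.argmax(product)] * 1e3:.2f} mm")

# Maximum-force check from Eq. (2): 1.2% strain, E = 90 GPa, a = 1.23e-8 m^2
print(f"theoretical max force: {0.012 * 90e9 * 1.23e-8:.1f} N")  # ~13 N
```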
This sensitivity is the same as that achieved by mechanically pulling the optical fiber, as shown in Figure 1. This result demonstrates that the constructed sensors measure force without contact with the best possible sensitivity for a non-tapered optical fiber [22]. Additionally, the inset of Figure 5 shows the first three points measured with sensor A and their respective error bars. The departure from linear behavior can be attributed to the overall noise present in our system, which may originate from the FBG interrogation system electronics, vibrations in our experimental setup and/or friction between the magnet and the nonmagnetic case. Wavelength Change versus Distance In order to perform quantitative measurements, an additional calibration was carried out with a three-dimensional finite element model (FEM) with approximately 177,000 elements (Opera-3D, Cobham, Dorset, United Kingdom). We modeled the magnets in sensors A and B with Br = 1.1 T and Hc = 0.82 MA/m, and the steel plate (20 mm × 40 mm × 6.7 mm) with a nonlinear BH curve that was determined experimentally. The modeled magnets had a 0.5 mm-diameter pass-through hole. The model geometry and the simulation results for sensors A and B are shown in Figure 6, where the Bragg wavelength and dynamometer measurements versus distance are also shown. Note that the maximum force measured with sensor A, at 100 μm from the ferromagnetic plate, was about 1.6 N, well below the maximum tensile strength of the optical fiber mentioned at the end of Section 3. Validation Based on Controlled Fabricated Corrosion Pits To test the sensors on steel samples with corrosion pits, we used the experimental setup shown in Figure 7. The setup consists of an XYZ-motorized stage holding a depth gauge (Mitutoyo 543-682B, São Paulo, Brazil) and the FBG force sensor. We positioned steel plates (150 mm × 150 mm × 6.7 mm) containing fabricated semi-spherical corrosion pits so that they could be scanned under the FBG sensor and depth gauge. We started by scanning a fabricated, semi-spherical pit that was 6.0 mm in diameter and 0.5 mm deep under four sensors that were built with different magnet geometries (sensors A, B, C, and D). The results can be observed in Figure 8a, where the changes in the Bragg wavelength Δλ for the four sensors are plotted versus the position on a line passing over the center of the pit. In this measurement, the sensor standoff distance was 0.5 mm from the plate. As observed, all of the sensors were able to detect the 0.5 mm-deep pit and, as expected, the sensor with the larger magnet generated the larger change in the reflected wavelength. Notice that, in Figure 8b, normalizing the wavelength changes and the depth profile measured with the depth gauge produces a very good match, indicating that the wavelength change obtained by all sensors accurately describes the depth profile of the corrosion pit. To use Figure 6b to properly determine the depth of a corrosion pit, we used the FEM model to study the effect of the corrosion pit border on the force (wavelength change) measured by the FBG sensor. This was accomplished by modeling semi-spherical pits in the steel plate, as shown in Figure 9a. It is expected that, if the pit diameter is less than the diameter of the sensor magnet, the measurement will be influenced by the pit border; therefore, the decrease in the magnetic force will be less evident when the sensor is over the pit.
This expectation was confirmed by simulations of the force on the magnet as a function of the corrosion pit diameter. Figure 9b shows how the force on sensors A and B changes as a function of the pit diameter when the sensors are placed over the center of a corrosion pit 0.5 mm deep. The standoff distance of both sensors in relation to the plate was 0.5 mm. The pit border effect was evaluated as the increase in the magnetic force relative to the force that would be produced if the pit had a much larger (effectively infinite) border, as observed in Figure 6b. For sensor A, the effect of the border is less than 10% when the pit diameter is approximately twice the diameter of the magnet (solid line), and sensor B shows similar behavior (dashed line). Figure 9c shows the effect of the pit border on sensor A when the standoff distance is increased to 1.0 mm. The effect becomes less evident at larger standoff distances, as the change in the force is less than 10% when the diameter of the pit is approximately 30% larger than the diameter of the sensor (solid line). However, when the depth is increased to 1.0 mm (dashed line) but the pit size remains the same, the border effect increases to 70%. The change in the force returns to less than 10% when the pit diameter is again approximately twice the diameter of the sensor magnet. Thus, once the change in wavelength is measured, to use Figure 6b to assess the pit depth, one must be sure that the pit diameter is at least twice the diameter of the FBG magnet. Figure 10 shows the changes in wavelength (circles) obtained when scanning sensor A over fabricated, semi-spherical pits with a 6.0 mm diameter and four different depths: 0.17 mm, 0.3 mm, 0.5 mm, and 0.7 mm. The standoff distance was 0.8 mm. The depth profiles measured by a depth gauge are shown as well (solid lines). A larger and deeper semi-spherical fabricated pit, 30 mm in diameter and 2.7 mm deep, was also measured using sensor A with the standoff distance increased to 1.2 mm. The resulting measurement is shown in Figure 11: the change in wavelength (circles) was related to the pit depth (solid line) by using Figure 6b, and the value obtained for the depth was 2.6 mm. Notice that, at this standoff distance, we are using the full dynamic range of sensor A. Testing with a Real Corrosion Sample The smaller sensor B was used to detect a natural corrosion pit found in the bottom plate of a fuel storage tank, as shown in Figure 12b. The natural pit is 0.4 mm deep and approximately 5.0 mm in diameter. Scanning the pit under the FBG sensor at a 0.5 mm standoff distance, we obtained the image that can be observed in Figure 12a. Additionally, an extended corrosion area found on the same plate, shown in Figure 12d, was also detected by sensor B at a 0.3 mm standoff distance; the image obtained can be observed in Figure 12c. The maximum depth was 0.7 mm, corresponding to a change in Δλ from 0.23 nm to 0.04 nm.
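To make the depth-assessment step concrete, the sketch below inverts a monotone wavelength-versus-distance calibration curve of the kind shown in Figure 6b and subtracts the standoff to estimate pit depth. The calibration table is hypothetical, standing in for the FEM or dynamometer calibration.

```python
# Sketch of the depth-inference step: invert a monotone wavelength-vs-distance
# calibration curve (as in Figure 6b) to turn a measured Bragg shift into a
# magnet-to-surface distance, hence a pit depth. The table below is
# hypothetical; a real one comes from the FEM model or a calibration run.
import numpy as np

dist_mm = np.array([0.1, 0.3, 0.5, 0.8, 1.2, 1.6, 2.0])   # standoff [mm]
shift_nm = np.array([2.2, 1.4, 0.9, 0.5, 0.25, 0.12, 0.05])  # shift [nm]

def distance_from_shift(measured_shift_nm):
    """Invert the calibration curve; np.interp needs increasing x, so flip."""
    return np.interp(measured_shift_nm, shift_nm[::-1], dist_mm[::-1])

standoff = 0.8                              # sensor standoff [mm]
d_over_pit = distance_from_shift(0.31)      # shift measured over the pit [nm]
print(f"estimated pit depth: {d_over_pit - standoff:.2f} mm")
# Valid only if the pit diameter is at least ~2x the magnet diameter;
# otherwise the border effect biases the force upward (Figure 9).
```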
4,135.6
2013-08-29T00:00:00.000
[ "Physics" ]
Increasing best practice data sharing at PLOS Pathogens Introduction PLOS Pathogens is a trusted venue for impactful research that readers everywhere rely on to view, download, and share. The journal empowers authors to decide how they want to share their work by removing barriers, taking action on feedback from our community, and providing transparency every step of the way. As with all PLOS journals, PLOS Pathogens strives to be a conduit for good open science practices. However, we know that there are barriers to the adoption of open science practices [1], and our approach to increasing adoption should be driven by evidence and community need. Current data sharing at PLOS Pathogens Since 2014, all PLOS journals have mandated the sharing of research data, and PLOS Pathogens has seen a steady increase in the use of repositories, from 16% of research articles depositing data in a repository in 2014 to 29% in 2020. PLOS Pathogens authors share their data in a number of different places: discipline-specific repositories (e.g., Sequence Read Archive and Gene Expression Omnibus), generalist repositories (e.g., Dryad and Figshare), code repositories (e.g., GitHub), and within the Supporting information accompanying an article. In fact, in 2020, 75% of research articles published in PLOS Pathogens shared at least some of the data underlying their research in the Supporting information. Sharing data via a repository, whether discipline-specific or generalist, has benefits for both authors and readers. Authors are able to demonstrate their compliance with the increasing number of funder and institutional policies that require data sharing [3,4]. The dataset can be cited independently of the article and tracked using its assigned DOI, demonstrating the impact and value of the research to funders. And sharing data in a repository has been linked to a 25% increase in citations to the article [5]. Readers benefit as they are able to find data more easily because it is indexed in search engines and aggregators. Licensing information is clear, so the potential for reuse is immediately obvious. Finally, repositories offer the possibility of machine-readable datasets, allowing computational methods to access and reuse data. Depositing in a repository is the ideal way to meet the requirements of the findable, accessible, interoperable, reusable (FAIR) principles for research data [2]. The dataset becomes a research output in itself; depositing it in a repository makes it findable, citable, and available for the long term (in accordance with the repository's preservation guarantees). It is clear that sharing data via a repository has tangible benefits for the community, whether they are authors or readers, and that PLOS Pathogens could, and should, do more to promote sharing of data in this way. Integration To enable more PLOS Pathogens researchers to share their data via a repository, we are experimenting with solutions that will make data deposition easier and more efficient. We have integrated the journal's submission system with the Dryad repository (https://datadryad.org), which will be available for 1 year initially. Dryad is a generalist repository that will accept any data, with the exception of identifiable human data, as part of the manuscript submission process.
Dryad will curate the dataset to ensure that it meets the minimum requirements, and PLOS Pathogens will cover the costs of this service for an unlimited number of authors for the initial 1-year period. These actions aim to further remove barriers to submission and therefore promote robust participation. This integration is part of a Wellcome Trust-funded PLOS open science initiative to explore how PLOS can improve data sharing and engagement with the data that have been shared [6]. Evidence-based solutions This approach was chosen because research conducted by PLOS, involving mostly PLOS authors, suggests that busy researchers are unlikely to significantly alter their behavior when it comes to data sharing [7] and are more likely to participate in public data sharing if the barriers to entry (including time, multiple processes, and cost) are reduced or removed. Our view is that increased data sharing is more likely to occur if good practices are embedded into existing author workflows, such as journal manuscript submission. By introducing a small, simple, and optional extra step to the submission process, we aim to promote good open science practice in a way that respects author choice and does not increase workload. PLOS Pathogens is well placed to trial Dryad because the research data authors commonly put into Supporting information are mostly not suited to a discipline-specific repository but could be of value to other researchers who may wish to reproduce or utilize their results. Not all the outputs shared via Supporting information are suitable for a repository (e.g., tables supporting statistical analysis presented in the article), and so Supporting information will remain an option for authors wanting to share other materials. Success measures The Dryad integration with PLOS Pathogens will initially be available for 1 year from October 2021 to all authors starting a new submission. During this time, we will be monitoring uptake of the integration and conducting research with the community to further understand their attitudes toward data repositories and their experience of the integration. What we do after the first year will depend on what the community tells us during our consultations and on whether there has been a measurable increase in data sharing via a repository in the journal. We hope that this integration will empower PLOS Pathogens authors to practice data sharing in a way that benefits them and their community.
1,269.8
2021-11-01T00:00:00.000
[ "Environmental Science", "Computer Science" ]
Multiplicity of Weak Positive Solutions for a Fractional p&q-Laplacian Problem with Singular Nonlinearity where Ω ⊂ R^N (N ≥ 3) is an open bounded domain with smooth boundary ∂Ω, N > 2s (0 < s < 1), 1 ≤ q < p < N/s, 0 < γ < 1 < β < p*_s − 1, and λ is a positive parameter. The weight function a : Ω → R belongs to L^{p*_s/(p*_s + γ − 1)}(Ω) with a(x) > 0 for almost every x ∈ Ω, b : Ω → R is bounded with b(x) > 0 for almost every x ∈ Ω, and p*_s = pN/(N − ps) denotes the critical Sobolev exponent. (−Δ)^s_r, with r ∈ {p, q}, is the fractional r-Laplacian operator, defined for any u ∈ C_c^∞(R^N) by

(−Δ)^s_r u(x) = 2 lim_{ε→0+} ∫_{R^N ∖ B_ε(x)} |u(x) − u(y)|^{r−2} (u(x) − u(y)) / |x − y|^{N+rs} dy, x ∈ R^N.

In recent years, fractional Laplacian problems have been extensively investigated; for more details, we refer the reader to [11][12][13][14][15]. There are many different definitions of weak solutions for the fractional Laplacian equation (3). In [16], Fang says that u ∈ H^s_0(Ω) is a weak solution of (3) with λ = 0 and a ≡ 1 if the corresponding integral identity holds. By means of the method of sub- and supersolutions, the author gives sufficient conditions for the existence and uniqueness of a positive solution. Very recently, great attention has been devoted to the study of fractional p-Laplacian problems; see, for instance, [17][18][19][20]. However, in the literature there are only a few papers [21][22][23] dealing with fractional p&q problems. Motivated by the works [1,23,24], in this paper we investigate the existence and multiplicity of solutions for the fractional p&q-Laplacian problem (1) and extend the main results of Wang and Zhang [1]. This article is organized as follows. In Section 2, we give some notation and preliminaries. Section 3 is devoted to proving that problem (1) has at least two positive solutions for λ sufficiently small. Proof. By using the definition of N_λ(Ω) and (16), we obtain a lower bound which implies that J_λ is coercive and bounded from below on N_λ(Ω). The proof is complete. ☐ Passing to the limit as n → ∞, we obtain the limiting value of J_λ. In the following, we will show that u_λ ∈ N^+_λ(Ω). It suffices to prove that the minimizing sequence converges strongly in W^{s,p}_0(Ω), which follows from the expression for J_λ. The proof is complete. ☐ Main Result Similar to the proof of Lemma 3.7 in [1], we can easily obtain the following lemma. Conclusions In this paper, the existence and multiplicity of positive solutions for a class of fractional p&q-Laplacian problems with singular nonlinearity have been investigated. It is worth pointing out that few studies have been devoted to this issue. By means of the variational method, the Nehari manifold method and some analysis techniques, sufficient conditions for the existence and multiplicity of positive solutions to this problem have been presented in Theorem 13. Our results generalize the main conclusions of Wang and Zhang in [1]. Data Availability No data were used to support this study. Conflicts of Interest The authors declare that there are no conflicts of interest.
749
2020-01-30T00:00:00.000
[ "Mathematics" ]
Protective effects of a Modified Vaccinia Ankara-based vaccine candidate against Crimean-Congo Haemorrhagic Fever virus require both cellular and humoral responses. Crimean-Congo Haemorrhagic Fever (CCHF) is a severe tick-borne disease, endemic in many countries in Africa, the Middle East, Eastern Europe and Asia. There is no approved vaccine currently available against CCHF. The most promising candidate, which has previously been shown to confer protection in the small animal model, is a modified Vaccinia Ankara virus vector expressing the CCHF viral glycoprotein (MVA-GP). It has been shown that MVA-GP induces both humoral and cellular immunogenicity. In the present study, sera and T-lymphocytes were passively and adoptively transferred into recipient mice prior to challenge with CCHF virus. Results demonstrated that mediators from both arms of the immune system were required to confer protective effects against lethal challenge. Introduction Crimean-Congo haemorrhagic fever (CCHF) is an acute tick-borne zoonotic disease. The causative agent, CCHF virus (CCHFv), has the most extensive geographical distribution of the medically important tick-borne viral diseases [1], covering much of Asia, the Middle East, Africa and expanding areas of south-eastern Europe [2]. The continued spread of the Hyalomma tick vector through climate change and modern farming practices has resulted in the virus becoming established in territories in which it was not previously endemic; its introduction to Turkey, Greece and Spain is testament to this [3]. Recognised antiviral compounds (e.g. ribavirin) and vaccines have not proved to be effective against CCHFv in controlled trials [4,5]. The only currently available vaccine is that produced in Bulgaria, which is made in suckling mouse brain, inactivated by chloroform, heated at 58°C and adsorbed on aluminium hydroxide [6]. Although it has been shown to elicit immunity [7], there is currently no evidence directly demonstrating efficacy, and due to its crude preparation it is unlikely ever to gain widespread international regulatory approval. Recent vaccine approaches for CCHF include a DNA vaccine expressing the entire open reading frame of the M segment, which includes the envelope glycoproteins of CCHFv and has been shown to elicit antibody responses [8]. Similarly, another CCHF vaccine candidate, based on transgenic plants expressing the CCHFv glycoprotein, has been shown to induce antibodies [9]. However, neither of these vaccines has been tested in an animal model. Work on CCHFv was hampered by the lack of a suitable animal model until 2010, when mice deficient in STAT-1 [10] or the type-I interferon receptor (A129, IFN-α/βR -/-) [11] were demonstrated to be susceptible to infection. The STAT-1 knockout mice exhibit selective signalling defects in their response to all three types of IFNs (type I, IFN-α and -β; type II, IFN-γ; and type III, IFN-λ), which leads to complete abolition of the intracellular IFN response [12]; therefore, the A129 mice offer the more robust and intact immune system of the two models. A129 mice show no overt anomalies but are unable to control certain viral infections, despite otherwise normal immune responses [13,14]. Consequently, they provide a useful model for investigating the adaptive immune response and performing active protection studies under stringent, frequently lethal, conditions [15].
To date, only two CCHF vaccine candidates have been reported to confer protection against lethal challenge in the A129 mouse model. One used a cell-culture-based vaccine, which required the growth of live virus followed by an inactivation procedure (including treatment with formaldehyde) and showed partial protection [16]. Due to the high biological containment needed to handle live CCHFv, this approach is unlikely to be applicable to large-scale vaccine manufacture and suffers from the same limitations as the Bulgarian vaccine approach in being unlikely to gain international regulatory approval. The second vaccine showed complete protection against CCHFv using a modified vaccinia Ankara (MVA) virus vector expressing the CCHFv glycoprotein (MVA-GP) [17]. Therefore, it has been demonstrated that, despite lacking the type-I interferon receptor, A129 mice maintain sufficient adaptive immunity to allow protection from a lethal challenge dose of CCHFv. The A129 mouse strain has also been used to decipher the protective role of antibodies induced by a novel vaccine candidate against another arbovirus, Chikungunya virus [18]. Therefore, its application to the study of vaccination responses is well documented. Since the MVA-GP vaccine induces both antibodies and T cells against the CCHFv antigen, the relative contribution of each to protective immunity is unknown. Therefore, this study was designed to ascertain whether immune sera, CD3 + T-lymphocytes, or both were required to exert protective effects. Materials and methods Virus The CCHFv challenge stock (strain IbAr10200) was initially prepared from suckling mouse brain homogenate and then passaged twice in SW13 cells (European Collection of Cell Cultures, UK). Viral titre was determined by focus-forming unit (ffu) assay in Vero cells. Ethics statement All procedures were undertaken according to the United Kingdom Animals (Scientific Procedures) Act 1986. These studies were approved by the ethical review process of Public Health England, Porton Down, UK and the Home Office, UK via an Establishment Licence (PEL PCD 70/1707) and project licences (30/2476 and 30/2697). Animals Female A129 (IFN-α/βR -/-) mice aged 5-8 weeks were sourced from an approved supplier (B&K Universal, UK). Mice were housed in negative-pressure flexible isolators to provide protection from opportunistic infections in a Containment Level 4 facility. Food and water were available ad libitum. All efforts were made to minimise animal suffering; manipulations were minimised and endpoints were limited to a moderate severity rating. For culling at Containment Level 4, mice were anesthetised with isofluorane gas and then cervical dislocation was performed. Immunisation of mice Mice were injected into the caudal aspect of the proximal hind limb musculature with 10^7 plaque-forming units (pfu) per animal of MVA-GP (n = 45 mice) or MVA-1974 (n = 18 mice) diluted in endotoxin-free PBS. A total volume of 100 μl was delivered equally across two sites. Animals received a booster vaccination 14 days later. Control animals received 10^7 pfu of non-recombinant MVA 1974 or an equivalent volume of saline. Twelve days after the booster vaccination, a tail bleed was collected from the animals. On day 13 post-booster vaccination, a random selection of animals from the MVA-GP (n = 36) and MVA-1974 (n = 9) groups was euthanised, with blood collected into serum separation tubes (Becton Dickinson, UK) and spleens collected into RPMI 1640 media (Sigma, UK) from all euthanised animals.
Passive transfer with immune sera Blood from vaccinated animals was left to fully clot before centrifuging at 1300 RCF for 10 minutes. Sera collected the day before euthanasia were used in ELISA assays against the CCHFv Gn protein, as previously described [17], in order to confirm successful vaccination of the animals. Blood collected at the terminal bleed was used for passive transfer into recipient animals: sera from the individual animals were pooled, and 200 μl was delivered via the intraperitoneal (i.p.) route. The passive transfer procedure was conducted within 6 hours of the blood being collected from the donor animals. Adoptive transfer with immune CD3 + T lymphocytes Spleens from immunised animals had any excess fat removed before being pooled according to the immunisation regime and dissociated in a C-tube using a GentleMACS dissociator (Miltenyi Biotec, UK). Splenocytes were filtered through a 70 μm filter and washed with RPMI 1640 medium containing 2% foetal bovine serum before being treated with 0.83% ammonium chloride solution for 5 minutes to lyse erythrocytes. Finally, they were washed with sterile PBS (Invitrogen, UK). For all procedures, cells were kept on wet ice and reagents were pre-cooled. CD3 + T cell enrichment was performed by negative selection using a commercially available procedure (STEMCELL Technologies, UK). Briefly, splenocytes were resuspended in RoboSep buffer at a concentration of 10^8 cells/ml before the addition of rat serum (to reduce non-specific binding) and a CD3 + antibody cocktail. Tubes were mixed for 10 minutes before RapidSpheres were added and incubated for 3 minutes. The tubes were then placed on a magnet and the unbound fraction, containing the enriched CD3 + T cells, was removed. Cells were counted and diluted with sterile PBS to give a concentration of 5 × 10^7 cells/ml. For adoptive transfer studies, 200 μl of cell suspension, equating to 1 × 10^7 cells, was delivered intraperitoneally. The adoptive transfer procedure was conducted within 6 hours of the spleens being collected from the donor animals. To confirm CD3 + enrichment, cells before and after the selection process were assessed by flow cytometry after staining with a CD3-FITC antibody (Becton Dickinson, UK), with an anti-rat IgG2-FITC isotype antibody (Becton Dickinson, UK) used to control for non-specific staining. The data were collected on an FC500 flow cytometer (Beckman Coulter) and analysed with CXP analysis version 2.1 software (Applied Cytometry Systems). Challenge with CCHFv One day after the administration of sera, CD3 + T-lymphocytes, or both, mice (n = 9 per group) received 200 ffu of CCHF virus strain IbAr10200 intradermally in the midline of the lumbar region in a volume of 100 μl divided equally across two sites. 50 μl is the maximum recommended volume for intradermal inoculation of mice [19], and confirmation of intradermal delivery was provided by visible bleb formation under the skin. Animals immunised with MVA-1974 and MVA-GP (n = 9 per group) were challenged simultaneously. Post-challenge, animals were weighed and body temperature was measured daily via a subcutaneously located temperature chip. In addition, they were observed for clinical signs of disease twice daily (arching, ruffled fur, lethargy and immobility). Criteria for euthanasia on welfare grounds consisted of 20% weight loss or observation of two abnormal clinical signs. At 4 days post-challenge, randomly selected animals were euthanised and samples of blood, spleen and liver were collected for viral load studies.
Spleen and liver samples were also collected for histopathological examination. Viral load determination Whole blood (100 μl) was collected into RNAprotect Animal Blood tubes (Qiagen) and stored at -80°C. Tubes were thawed, inverted and left for a further 2 hours at room temperature to ensure efficient cell lysis. Samples were treated with Red Blood Cell Lysis Solution (Miltenyi Biotec) before purification of total RNA using an RNeasy Mini kit (Qiagen). For viral load analysis, spleen and liver samples were collected into RNALater (Qiagen) and stored at -80°C. Thawed tissue was transferred to RLT buffer (Qiagen), homogenised by passing through a 70 μm sieve and then treated using an RNeasy Mini kit (Qiagen) for extraction of total RNA. The CCHFv S segment was detected by RT-PCR on the ABi 7500 RT-PCR platform as previously described [20], with cycling conditions adjusted to those described in the QuantiFast probe assay: 50°C for 20 min and 96°C for 5 min, followed by 45 cycles of 95°C for 15 sec and 60°C for 30 sec (with quantification analysis of fluorescence performed at the end of each 60°C step), and a final cooling step of 40°C for 30 sec. A synthetic S segment of known concentration was used to quantify S segment copy number in blood and tissue samples. All reactions were run in triplicate. To normalise the CCHFv expression data, the hypoxanthine guanine phosphoribosyl transferase (HPRT) housekeeping gene was used. A one-step RT-PCR with singleplex detection was performed targeting an 89 bp product in the mouse HPRT gene (NCBI Reference Sequence NM_013556) using the QuantiFast probe assay (Qiagen) and the ABi 7500 RT-PCR platform. CT values for CCHFv and HPRT were each inverted by subtracting the CT value from 45 (the total number of cycles), where CT is the number of cycles needed to reach the fluorescence threshold value. The mean value for CCHFv was then divided by the mean value for the HPRT reference gene for each sample. Histological analysis Samples of spleen and liver were placed in 10% neutral buffered formalin for 7 days and processed routinely to paraffin wax. Sections were cut at 3-5 μm, stained with haematoxylin and eosin (H&E) and examined microscopically. Lesions referable to infection with CCHF virus were scored subjectively using the following scale: normal, minimal, mild, moderate and marked. For immunohistochemistry (IHC), formalin-fixed, paraffin-embedded sections of spleen and liver, cut at 3-5 μm, were mounted on positively charged X-tra Adhesive slides (Leica Biosystems, UK), deparaffinised and rehydrated. Immunohistochemical staining was achieved using a BOND-MAX Immunostainer (Leica Microsystems, UK) and a Novocastra Bond Intense R (Leica Biosystems) detection kit. A heat-induced epitope retrieval cycle with buffer ER1 (Leica Biosystems) was performed for 20 minutes. Slides were incubated with rabbit serum (4%) (Abcam, Cambridge, UK) for 20 minutes before avidin/biotin blocking (15 minutes each) (Abcam). Polyclonal antibody raised in sheep immunised against recombinant CCHFv nucleoprotein (kindly provided by Dr John Barr, University of Leeds, UK) was incubated with the tissue for 30 minutes, followed by incubation with a biotinylated rabbit anti-sheep polyclonal antibody (Abcam) at a dilution of 1:500 for 10 minutes, resulting in a brown stain. Haematoxylin was used as the counterstain. Positive and negative control slides were included. Immunolabelled slides were evaluated using light microscopy.
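The CT-inversion and housekeeping-gene normalization described in the viral load methods above reduce to a few lines of arithmetic; the Python sketch below illustrates the calculation. The function name and the triplicate CT values are hypothetical, purely for illustration.

```python
# Minimal sketch of the viral-load normalization described above: CT values
# are inverted against the 45-cycle run length, and the CCHFv mean is divided
# by the HPRT housekeeping-gene mean. Triplicate CTs below are illustrative.
from statistics import mean

TOTAL_CYCLES = 45

def normalized_viral_load(cchfv_cts, hprt_cts):
    """Return the (45 - CT) ratio of CCHFv to HPRT for one sample."""
    cchfv = mean(TOTAL_CYCLES - ct for ct in cchfv_cts)
    hprt = mean(TOTAL_CYCLES - ct for ct in hprt_cts)
    return cchfv / hprt

# Hypothetical triplicates for one tissue sample
print(normalized_viral_load([28.1, 28.4, 28.2], [22.0, 21.8, 22.1]))
```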
Preparation of sera and enriched CD3 + T cells On the day before immunised animals were scheduled to be culled, blood samples from individual animals were collected and analysed for antigen-specific binding by ELISA. Samples from immunised mice demonstrated binding to the CCHF Gn protein at varying levels, with some animals demonstrating responses (n = 12) whilst others showed no specific binding (n = 24) (Fig 1). Upon culling of animals, sera were pooled from all similarly immunised animals to produce a population-based pool prior to being used for passive transfer. During preparation of splenocytes from immunised animals, samples were labelled with fluorescently-conjugated antibodies specific to lymphocyte markers and analysed by flow cytometry before and after enrichment. The percentages of CD3 + T-lymphocytes in the splenocytes of MVA-1974 and MVA-GP immunised animals were 25.5% and 32%, respectively. After enrichment of CD3 + cells in the splenocytes of MVA-GP immunised animals, the proportion increased to 72.4%. The viability of all cell preparations was >95%. Survival Mice given MVA-1974 all met humane clinical endpoints at day 3 post-challenge, whereas those which received the MVA-GP vaccine survived until the scheduled end of the study, 12 days post-challenge (Fig 2). Transfer of CD3 + T cells or sera from MVA-GP immunised animals failed to confer any significant effect on the survival of MVA-1974 immunised animals (P = 1.000 and P = 0.197, respectively, Log-Rank survival analysis), with mean times to death of 3 and 3.33 days, respectively. However, transfer of both CD3 + T cells and sera from MVA-GP immunised animals did result in a significant increase in survival time (P = 0.005, Log-Rank survival analysis), with the mean time to death being 4.75 days post-challenge. Clinical signs Weight changes in animals receiving CD3 + T cells or sera from MVA-GP immunised donors followed a similar downward trend to that observed in animals immunised with MVA-1974 (Fig 3A). The group of mice which received both CD3 + T cells and sera from MVA-GP immunised animals exhibited the same kinetics of weight loss as the animals given either CD3 + T cells or sera alone, but the effects were delayed by 1-2 days. Animals protected from the CCHF virus challenge after vaccination with MVA-GP did not lose weight. Average temperature increases of 1-2°C were recorded in all groups apart from those immunised with MVA-GP (Fig 3B). Upon reaching humane clinical endpoints, several animals had a substantial fall in temperature. The group which received MVA-GP showed stable temperatures throughout the course of the study. Numerical values were assigned to the recorded clinical signs to allow the data to be represented graphically. All groups showed clinical signs of illness apart from those immunised with MVA-GP (Fig 3C). Signs first started appearing 3 days post-challenge. However, in the animals which received both CD3 + T cells and sera from MVA-GP immunised animals, there was a delay before similar levels of signs appeared compared with the groups that received only CD3 + T cells or sera. Virology Samples of blood, spleen and liver were collected from 3 animals from each group 3 days post-challenge. The levels of CCHFv RNA were compared with those of the HPRT gene in order to standardise the ratio of viral RNA to a cellular housekeeping gene.
Results demonstrated that, in the blood, spleen and liver, there were lower levels of viral RNA in the MVA-GP immunised animals than in the groups which received sera, CD3 + T cells, or both (Fig 4). In the blood and spleen, there were indications that viral RNA levels were lower in the group which received both humoral and cellular mediators; however, for the liver the responses were less distinct. Due to the small number of animals culled at this timepoint, statistical analysis was not carried out. Histology To provide a snapshot of responses in the spleen and liver of challenged mice, tissues collected from 3 animals per group 3 days post-challenge were analysed for histological changes and for the presence of viral antigen by immunohistochemistry. Results showed that the MVA-GP immunised animals had neither evidence of lesions associated with CCHFv infection nor the presence of viral antigen, whereas those immunised with empty MVA, or receiving sera, CD3 + T cells, or both from immunised animals, had both lesions detected and viral antigen present (Table 1 and Fig 5). Of the six samples in the group that received both CD3 + T cells and sera, lesion severity was scored mild in five and moderate in one of the liver samples, indicating that the lesions were less severe in this group at this timepoint. Discussion Due to the lack of available vaccines against CCHF, there is an urgent requirement for the development of a modern immunisation approach that can meet international regulatory approval. To date, the MVA-GP vaccine candidate developed against CCHF is the only vaccine that has demonstrated efficacy against lethal CCHFv infection and is attractive for international licensure [11]. It is therefore desirable to obtain more data about this candidate and to develop a better understanding of its effects. In this study, MVA-GP vaccinated animals were used as a positive control group, and again demonstrated the 100% efficacy of the vaccine approach. However, because previous work demonstrated that a prime-boost vaccination with MVA-GP generates both humoral and cellular immunity [11], further work was required to elucidate how each arm of the immune system contributed to the protective effects. In the present study, it was observed that animals which received both sera and CD3 + T cells from immunised mice exhibited a significant increase in time to death when compared with animals receiving either sera or CD3 + T cells alone. The finding that both arms of immunity are needed to exert a protective effect has been demonstrated with other viral pathogens. Using a highly pathogenic and cytolytic mouse pathogen, Ectromelia virus, it was demonstrated that antibodies and CD8 + T cells were complementary and essential for survival against infection in the natural host [21]. It is envisaged that, to protect against CCHF virus, the T cells eliminate intracellular virus while the humoral response targets extracellular virus. During vaccine development for another viral haemorrhagic fever virus, Ebola virus, it has also been suggested that vaccines need to induce protective humoral as well as protective cellular responses to efficiently clear both free virus and virus-infected cells [22]. In this study, although the transfer of both T cells and sera extended the time to death, all animals still met humane clinical endpoints. It is possible that the immune mediators were transferred at concentrations too low to provide complete protection.
Alternatively, there may have been insufficient continuous antigenic stimulus, or a breakdown of the transferred cells and antibodies may have reduced their protective effects. Because only a limited volume of cells and antibodies was transferred into the recipient mice, the infection may have overwhelmed the level of adaptive immunity transferred. To ascertain whether antibodies on their own played a protective role, sera were passively transferred to recipient mice. The protective role of humoral immunity has been demonstrated with other viruses in the Bunyaviridae family, including Rift Valley Fever virus (RVFv) [23,24], and in Hantavirus models [25,26]. In our work, antibodies were delivered via the i.p. route; this is an effective route for transfer, and other studies have shown that relocation from the peritoneal cavity to the blood occurs within 2 hours [27]. Nevertheless, in our study the transferred antibodies failed to protect mice from virus challenge. In contrast, passive transfer of antibodies generated by a Ross River virus vaccine to naive mice, also via the i.p. route, was able to transfer protection [15]. In that case, 150 μl of sera was transferred i.p. to A129 mice, on the assumption that the serum would be diluted approximately 1:10 in the mouse blood volume [15]. Despite our study delivering a larger volume of 200 μl, protection was still not conferred, suggesting that antibodies alone are unable to protect. Whilst the dogma is that the general aim of vaccine application is the production of neutralising antibodies [6,9], this may not always hold true. During convalescence from CCHF in humans, the levels of neutralising antibody activity are relatively low [28,29], and the importance of neutralising antibodies in protection against disease is unknown. Additionally, it has been shown that for CCHFv there is no strict correlation between in vitro neutralisation and in vivo protection [30]. With a cell-based vaccine against CCHFv that conferred protection, neutralising antibodies were measured and deemed essential for the increased protection of mice [13]; however, the role of other immune responses, including T cell responses, was not assessed. Because this cell-based vaccine is similar to the existing CCHF vaccine used in Bulgaria, which generates T-cell and antibody responses in vaccinated humans [7], it is plausible that both arms of the immune system contribute to the protective effects observed. We have previously shown that not all MVA-GP immunised animals generate antibody responses, yet complete protection against lethal CCHFv challenge was observed [17]. Similarly, in this study not all animals showed an antibody response to recombinant CCHF Gn protein, yet the MVA-GP immunised group demonstrated complete protection. These observations support the view that antibody alone is unlikely to be the mediator of protection for the MVA-GP vaccine. To investigate the role of cellular immunity, CD3 + T cells from immunised animals were adoptively transferred into recipient mice. The i.p. route was chosen over the standard i.v. route because i.p. injections are more convenient to administer than i.v. injections and allow the delivery of more cells than can be tolerated via the i.v. route [31]. Additionally, working with A129 mice is technically demanding, as they need to be housed in infection-free flexible film isolators, making access to the i.v. route difficult and causing extra distress to the mice [32]. Moreover, there is good evidence that immune cells infused via the i.p.
route follow a similar course of distribution to those delivered via the i.v. route [33], i.e. migrating to the spleen [31]. Furthermore, both i.p. and i.v. transfer have an equilibrated homing time of 24 hours [34]. In the present study, CD3 + T cells (comprising a crude mixture of both CD4 + and CD8 + cells) failed to confer protective effects on their own. This is in contrast to passive transfer studies using material derived from other vaccine studies enriched in CD8 + T cells, which have transferred protection [22,35]. For example, using the i.p. route to transfer cytotoxic T-lymphocytes from mice vaccinated with a Venezuelan equine encephalitis virus replicon encoding the Ebola virus nucleoprotein, mice were protected from a lethal Ebola virus challenge even though challenge was performed 4 hours post-cell transfer [22]. Similarly, CD8 + T cells delivered i.p. have also been demonstrated to confer protection from West Nile virus [35]. Our results are the first to provide evidence that both the humoral and cellular arms of the immune system are required to exert a protective effect against CCHFv infection after vaccination with an MVA vaccine expressing the CCHFv glycoprotein.
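As a closing note on the statistics, the survival comparisons reported in the Results (log-rank tests on days to humane endpoint) can be sketched as follows. This assumes the Python lifelines package, which the authors do not name, and the durations below are illustrative stand-ins shaped like the reported outcomes, not the study data.

```python
# Hedged sketch of the log-rank survival comparison described in the Results.
# The lifelines package is an assumption (the software used is not stated),
# and the durations/events below are illustrative, not the actual study data.
from lifelines.statistics import logrank_test

# Days to humane endpoint; event = 1 means the endpoint was reached,
# event = 0 would mark mice surviving to the scheduled end of the study.
mva_1974 = dict(durations=[3] * 9, events=[1] * 9)           # all died day 3
cells_plus_sera = dict(durations=[4, 4, 5, 5, 5, 5, 5, 5, 5], events=[1] * 9)

result = logrank_test(
    mva_1974["durations"], cells_plus_sera["durations"],
    event_observed_A=mva_1974["events"],
    event_observed_B=cells_plus_sera["events"],
)
print(result.p_value)  # compare with the reported P = 0.005 for this contrast
```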
5,560.8
2016-06-07T00:00:00.000
[ "Biology", "Medicine" ]
Designing Songs for Teaching and Learning English: A Literature Review This paper presents the theoretical background covering theories of material design, which are useful for understanding what makes material effective and the steps to produce it, and theories of learning, which include foci on the language, the learners, and the learning process. The paper also addresses the relationship among factors that contribute to the teaching and learning process when songs are used as English language teaching material. The authors review the benefits of songs and music at the cognitive, linguistic, and pedagogical levels through the material design model suggested by Hutchinson and Waters (1987). This paper contributes to the understanding of designing songs as a tool for teaching and learning English. A communicative need requires three conditions: something to communicate, someone to communicate with, and some interest in the outcome of the communication. The long-term need of learners is to be able to communicate confidently not only in the classroom but also in real situations outside the classroom. Thus, teachers should teach learners how to learn effectively outside the classroom. The language classroom, to Hall, has three components: the teacher, the learners, and the materials. The learners must be the active component in the learning process because the learners themselves know their own needs and interests best. Hence, material writers should pay close attention to ways of helping learners develop their inventiveness and participation in the learning process. Hutchinson and Waters (1987) argue that the most important step for materials writers to take before they start writing is to consider three basic questions under three main headings: language descriptions, theories of learning, and needs analysis. Hutchinson shows the relationship of the three in the figure below: Figure 1. Factors affecting English for specific purposes course design (Hutchinson & Waters, 1987, p. 22) According to Hutchinson, language description is the way in which the language system is broken down so that people can study it for learning. Hutchinson concludes that there are three levels present in all communication: a structural level, a functional level, and a discourse level. Hutchinson believes in the difference between learners' performance and learners' competence, which contributes to the success of learners in studying a language. Learning theories provide the theoretical basis for methodology by helping us understand how people learn. Hutchinson emphasizes three learning theories: the behaviorist approach, the cognitive approach, and affective criteria. He thinks that a material writer should combine all three learning theories in designing material. Needs analysis, in Hutchinson's opinion, is very important because it helps material writers produce effective materials that serve learners better. Hutchinson states that needs analysis is a complex process, involving both target situation needs, concerned with language use, and learning needs, concerned with the way learners learn to do what they do with language. Hutchinson presents two models of material design consisting of four main elements: input, content focus, language focus, and task. The relationship of these elements is shown in the two figures below.
Figure 2: A material design model (Hutchinson & Waters, 1987, p. 109) Figure 3: An expanded material design model (Hutchinson & Waters, 1987, p. 118) In Hutchinson's opinion, Figure 2 can be considered the nucleus of a material model. Depending on the purposes, the input can be chosen from any piece of communication data. It can provide learners with opportunities to use their information-processing skills and their existing knowledge of both the language and the subject matter. The content and language are drawn from the input: content focus is a means of conveying information and feelings about something, whereas language focus, consisting of sentence structures, functions, or text construction, provides learners with opportunities to take the language to pieces, study how it works, and practice putting it back together again. These two elements are selected according to what learners will need to perform the task, which is seen as the primary focus of the unit. Task, to Hutchinson, is the ultimate purpose of language learning. It can lead learners to use the content and language knowledge they have built up through the unit effectively. In order to refine the material, Hutchinson suggests adding some other elements, which can be seen in Figure 3. The starter plays a number of important roles: it creates a context of knowledge for the comprehension of the input; it arouses the learners' interest in the topic; it activates the learners' minds and gets them thinking. The project is an expanded part that gives learners the chance to apply the knowledge gained to their own situation. It means that after the unit, learners can apply what they have studied in similar situations. Dumitrescu (2000) thinks that when designing or choosing suitable material for learners, teachers should pay attention to three basic aspects of learners' backgrounds: linguistic background, knowledge background, and cultural background. This author believes that linguistic background influences classroom management, the selection of tasks, and the focus of micro-skill instruction; knowledge background determines the need for specificity of information in the selected materials; and cultural background affects trainee-instructor interaction. He also adds two additional factors that influence the final decision on what materials to incorporate into specific language tasks: applicability and adaptability. Applicability refers to the relationship between the content of the material and the learners' field of employment. Adaptability refers to the ease of task design and the ease of text manipulation. Hutchinson (1987, pp. 124-125) also sets out ten stages for writing material. THEORIES OF SECOND LANGUAGE LEARNING There is a need to pay attention to theories of second language learning, which help material designers as well as teachers better understand the factors contributing to the success of a teaching and learning process. Two theories are reviewed.
The first concerns three essential elements in any successful language learning experience, suggested by Nunan (1999); the second concerns three internal factors affecting second language learning, raised by Dulay, Burt, and Krashen (1982). Three essential elements in a successful language learning experience According to Nunan (1999), there are three essential elements in any successful language learning experience: language, learners, and the learning process. Teachers play a crucial supporting role in language learning contexts, but they are not themselves essential to the learning process. Focus on language knowledge Celce and Olshtain (2000) suggest three main resources of language that interact in ways that manifest themselves through discourse: grammar, vocabulary, and phonology. Richards, Platt, and Platt (2010) regard grammar as the first issue to be considered in language. It is commonly believed that the essence of language lies in grammar, which is defined as 'a description of the structure of a language and the way in which linguistic units such as words and phrases are combined to produce sentences in the language. It usually takes into account the meanings and functions these sentences have in the overall system of the language. It may or may not include the description of the sounds of a language.' Nunan (1999, p. 99) emphasizes the relationship between grammar and discourse: 'Grammar and discourse are tied together in a fundamentally hierarchical relationship with lower-order grammatical choices being driven by higher-order discoursal ones.' There are three dimensions of grammar to be considered: form, meaning, and use. Hence, it is suggested that learners must be encouraged to explore the relationship of the three interlinked dimensions via tasks designed to help them explore the functioning of grammar in context and to assist them in using their developing grammatical competence in the creation of coherent discourse. The second issue in language is vocabulary. Vocabulary, as part of the language system, is interrelated with grammar. Proponents of the comprehension-based approach hold that an extensive vocabulary can enable learners to outperform their competence, i.e. learners can obtain meaning from spoken and written texts more efficiently. The third issue in language is pronunciation, which plays an essential role in helping learners improve their listening skills. Pronunciation and listening are believed to be in a complementary relationship. Learners should be taught two features of phonology: segmental phonology, which has to do with the individual sounds of the language, and suprasegmental phonology, which has to do with stress, rhythm, and intonation patterns in the language. The suprasegmental aspect of the sound system plays an important role in communication: faulty stress, rhythm, and intonation patterns can prevent learners from developing their listening skills effectively. It is clear that vocabulary, grammar, and phonology are interrelated. Celce and Olshtain (2000) believe that 'in most instances of communication, all of the language resources are deployed simultaneously to create discourse. When language inaccuracies occur, the intended message may be comprehended because the contextual support is clear and unambiguous.' Hence, learners should be encouraged to use all their language resources to produce and interpret situated and purposeful discourse both within and outside the classroom.
Nunan draws some pedagogical implications from the interrelation of the three language resources as follows:
- Learners are exposed to authentic samples of language so that the grammatical features one is trying to teach are encountered in a range of different linguistic and experiential contexts.
- It is not assumed that once learners have been drilled in a particular form they have acquired it, and drilling, if it is used, is seen only as a first step toward eventual mastery.
- There are opportunities for recycling of language forms, and learners are engaged in tasks designed to make transparent the links between form and function.
- Learners are given opportunities to develop their own understandings of the grammatical principles of English by progressively structuring and restructuring the language through inductive learning experiences that encourage them to explore the functioning of grammar in context.
- Over time, learners encounter target language items in an increasingly diverse and complex range of linguistic and experiential environments. (Nunan, 1999, p. 142)
Focus on learners Learners play an important role in the learning process. Hence, there is a need to pay close attention to learners, which involves three main issues: learner needs, learner roles and contributions, and learner choices. Richards, Platt, J., and Platt, H. (2010, pp. 242-243) define needs analysis (needs assessment) as 'the process of determining the needs for which a learner or group of learners requires a language and arranging the needs according to priorities.' According to Brinkley (as cited in Nunan, 1999, p. 149), needs can be analyzed into two groups: 'objective' needs and 'subjective' needs. 'Objective' needs are those that can be diagnosed by teachers on the basis of the analysis of personal data about learners, along with information about their language proficiency and patterns of language use. 'Subjective' needs, which often include wants, desires, expectations, or other psychological manifestations of a lack, cannot be diagnosed easily or, in many cases, even stated by learners themselves. Nunan (1999) also suggests a distinction between 'content' needs and 'process' needs. Content needs include the selection and sequencing of topics, grammar, functions, notions, and vocabulary, traditionally the domain of syllabus design; process needs refer to the selection and sequencing of learning tasks and experiences, traditionally seen as the domain of methodology. There is another useful distinction, between initial needs and ongoing needs. Initial needs analysis is carried out before a course begins; ongoing needs analysis refers to the often relatively informal analysis carried out by teachers once a course has begun. Nunan believes that although the course objectives, materials, teacher, room, and time of day may be the same, no two classes are ever the same, which can be attributed to the different role expectations that the participants (the teacher and learners) have of themselves and of others in the group. Tony Wright (as cited in Nunan, 1999, p. 156) identifies two aspects of role: the first relates to social and interpersonal factors; the second relates to the learning tasks, particularly the teacher's and learners' expectations about the nature of the learning tasks and the ways they are dealt with in the classroom. These two factors interact constantly to create a dynamic and unstable environment that can either facilitate the learning process or seriously impair it.
The role expectations of participants may depend on individual personality factors. Learner roles as active constructors of their own knowledge should be regarded as important, since this can create a better learning environment in which learners and the teacher cooperate effectively in the learning process. Tony Wright suggests that learners' expectations about the nature of learning tasks, and the way in which individuals and groups deal with learning tasks, are an important aspect of the dynamics of the classroom. The difference between an effective and an ineffective language user is that effective learners make appropriate choices when it comes to the means through which they learn language. Thus, learning to make informed choices is an important skill for all learners to develop (Nunan, 1999, p. 164). There is a negotiation between the teacher and learners, i.e. the views of the learners and the pedagogical agenda of the teacher are satisfied through a process of give-and-take. According to Nunan, negotiation is a continuous process, which can be carried out by incorporating a series of steps into the educational process; Nunan suggests nine steps for moving learners along the negotiation continuum.

Kirschner, Sweller, and Clark (2006) claim there are five attributes of an effective learner: self-directed, inquisitive/curious, self-aware/honest with oneself, risk-taking/not afraid of making mistakes, and open-minded. Of all five attributes, self-direction is the most important to the learning process. They believe that if learners can envision how a certain skill or class will help them in the long run, they can motivate themselves to take more pride in their day-to-day work. Asking questions and being inquisitive is also essential to the learning process, because the process involved in formulating a question often helps crystallize a new concept in the learner's mind. They also stress the importance of self-awareness: if learners are confused about something, they need to admit that they do not have the answers and go look for help, which can help them become better at whatever they are trying to accomplish. Learners need to take mistakes in stride as part of the learning process, because if they are afraid of making mistakes, they will not engage their minds fully in training courses and thus will not learn as much in the long run. Last but not least, good learners need to be open-minded: learners will enhance their ability to learn if they are willing to look at a problem from a different perspective, even one they may not agree with. They conclude that the five attributes mentioned above can greatly help learners open their minds and gain more knowledge.

Learning process

One of the most important factors in the learning process is learning strategies, the mental and communicative procedures learners use in order to learn and use language. Strategies are very important to the learning process because, if learners are conscious of the processes underlying the learning they are involved in, the learning will be more effective. Learners who are taught the strategies underlying their learning are believed to be more highly motivated and more self-confident in learning. Rebecca Oxford (as cited in Nunan, 1999, p. 172) identifies 12 key features of language learning strategies; such strategies:
1. contribute to the main goal, communicative competence
2. allow learners to become more self-directed
3. expand the role of teachers
4. are problem-oriented
5. are specific actions taken by the learner
6. involve many actions taken by the learner, not just cognitive ones
7. support learning both directly and indirectly
8. are not always observable
9. are often conscious
10. can be taught
11. are flexible
12. are influenced by a variety of factors

Nunan (1999) thinks that 'underlying every task that one introduces into the classroom is a learning strategy of one kind or another.' A typology of commonly used learning strategies can be set out as follows:

COGNITIVE
- Classifying: putting things that are similar together in groups. Example: study a list of names and classify them into male and female.
- Predicting: predicting what is to come in the learning process. Example: look at the unit title and objectives and predict what will be learned.
- Inducing: looking for patterns and regularities. Example: study a conversation and discover the rule for forming the simple past tense.
- Taking notes: writing down the important information in a text in your own words.
- Concept mapping: showing the main ideas in a text in the form of a map.
- Inferencing: using what you know to learn something new.
- Discriminating: distinguishing between the main idea and supporting information.
- Diagramming: using information from a text to label a diagram.

INTERPERSONAL
- Cooperating: sharing ideas and learning with other students. Example: work in small groups to read a text and complete a table.
- Role-playing: pretending to be somebody else and using the language for the situation you are in.

LINGUISTIC
- Practicing: doing controlled exercises to improve knowledge and skills. Example: listen to a conversation, and practice it with a partner.
- Using context: using the surrounding context to guess the meaning of unknown words, phrases, and concepts.
- Summarizing: picking out and presenting the major points in a text in summary form.
- Selective listening: listening for key information without trying to understand every word. Example: listen to a conversation and identify the number of speakers.
- Skimming: reading quickly to get a general idea of a text. Example: decide if a text is a newspaper article, a letter, or an advertisement.

AFFECTIVE
- Personalizing: learners share their own opinions, feelings, and ideas about a subject. Example: read a letter from a friend in need and give advice.
- Self-evaluating: thinking about how well you did on a learning task, and rating yourself on a scale.
- Reflecting: thinking about ways you learn best.

CREATIVE
- Brainstorming: thinking of as many new words and ideas as you can. Example: work in a group and think of as many occupations as you can.

Table 1: A typology of learning strategies (Nunan, 1999, pp. 83-84)

It can be concluded that learners have an important role in the learning process, i.e. the success of a learning process lies largely in learners' efforts to gain knowledge. Hence, as suggested by Nunan (1999), Celce and Olshtain (2000), and Hutchinson (1994), it is necessary to design tasks and final projects that encourage learners to be independent in learning. Teachers should create a better learning environment, in which learners have many choices to activate their active roles. A good negotiation between the teacher and learners must be attended to so that the learning process can be successful. Besides these three aspects of second language learning, it is also essential to look at the three internal factors which operate when learners learn a second language, as suggested by Dulay, Burt, and Krashen (1982).
Three internal factors affecting second language learning

Dulay, Burt, and Krashen (1982) believe that learners' motives, needs, attitudes, and emotional states may filter what learners hear, which influences the success of the learning process. Three internal factors operate when learners learn a second language: two subconscious processors named the 'filter' and the 'organizer', and one conscious processor called the 'monitor'.

The filter is that part of the internal processing system that subconsciously screens incoming language. According to Krashen, the filter determines: (1) which target language models the learner will select; (2) which parts of the language will be attended to first; (3) when language acquisition efforts should cease; and (4) how fast a learner can acquire the language. There are three main parts of the filter: learners' motives, attitudes, and emotional states. Motivation in language acquisition can be thought of as the need that the learner feels to learn the second language; three kinds of motivation affect language acquisition: integrative motivation, instrumental motivation, and social group identification. Gardner and Lambert (as cited in Dulay et al., 1982, p. 47) define integrative motivation as 'the desire to achieve proficiency in a new language in order to participate in the life of the community that speaks the language', instrumental motivation as 'the desire to achieve proficiency in a new language for utilitarian reasons, such as getting a job', and the social group identification motive as 'the desire to achieve proficiency in a language or language variety spoken by a social group with which the learner identifies.' In combination with attitudes and motivation, the learner's emotional states affect what the learner admits for further processing. It is believed that the less anxious learners are, the better language acquisition proceeds.

The organizer is that part of the internal processing system responsible for the learner's gradual organization of the new language system. Krashen thinks that, in examining the language that learners produce, it is essential to see the functioning of the organizer reflected in three pervasive phenomena: (1) the transitional constructions that learners use before a structure is finally acquired; (2) the errors that systematically occur in learner speech; and (3) the common order in which mature structures are learned.

The monitor is the part of the learner's internal system that appears to be responsible for conscious linguistic processing. Learners can use the linguistic knowledge they gain through monitoring to consciously formulate sentences and to correct or edit their speech and writing. Krashen believes that whenever conscious linguistic processing takes place, the learner is said to be using the monitor; he also thinks that when a learner performs a drill that requires conscious attention to linguistic form, or memorizes a dialogue, conscious processing is taking place and the monitor is being used. The degree to which the monitor is used depends on: (1) the learner's age; (2) the amount of formal instruction the learner has experienced; (3) the nature and focus required by the verbal tasks being performed; and (4) the individual personality of the learner. Learners who are insecure, self-conscious, and afraid to make errors tend to use the monitor more than others.
In brief, the three internal processing factors mentioned above play an essential role in the successful acquisition of a second language.

Discussions of previous works on teaching English via songs

Many authors from different parts of the world have discussed teaching English via songs, and the many useful aspects found in songs can make English teaching methodology more effective. Murphey (1992) emphasizes the importance of music and songs in language learning for several reasons. First, songs are believed to strongly activate the repetition mechanism of the language acquisition device, because brains have a natural propensity to repeat what is heard in the environment in order to make sense of it. Second, songs are found everywhere in the world; thus, music and songs can easily become part of people's lives. Third, songs are very relaxing: they provide variety and fun, and encourage harmony within oneself and within a group; through songs, listeners can also learn many things about different cultures, religions, patriotisms, etc. Last but not least, songs are short, self-contained texts that are easy for teachers to handle in a lesson.

Garza (1995), an assistant professor and coordinator in the Department of Slavic Languages at the University of Texas at Austin, says that "Songs, like poetry, are one of the most powerful combinations in helping the learner commit limited phrases and word combinations to memory, to be put into active service at a later time in communication. Rock music videos can also be most effective in teaching pronunciation and intonation. Songs often contain the elements of repetition, rhyme and rhythm that facilitate quick memorization and easy imitation of the original text material. It is no wonder, then, that in song a "foreign accent" pronunciation is much more easily masked or eliminated than in normal conversational speech." "Via songs, we can teach students particular sounds like /th/, word final /r/, schwa, etc. or grammatical items like nouns, adjectives, plurals, etc."; "The lyrics might provide the EFL instructor with excellent material to present and practice modal construction in English." According to Garza, learners will learn from songs how to pronounce words exactly and will acquire a natural intonation. Besides, learners have an opportunity to experience particular sounds that do not exist in their first language. However, the author of this thesis does not think that choosing rock music as teaching material is a good idea: because rock music is usually strong and loud, learners may find it hard to listen and distinguish different sounds.

Diamond and Minicz (1994), ESL consultants for the Adult Learning Resource Center in Des Plaines, Illinois, also regard music as a useful resource in teaching English, and these authors put an emphasis on country songs rather than rock ones. They think that country music can help learners practice pronunciation, stress, and intonation, and can help teachers introduce new grammar points. More than that, Diamond also emphasizes another aspect: "Music brings language alive, songs provide chunks of language complete with shadings of meaning and emotion; songs release tension and allow students to enjoyably hear and repeat language again and again." Diamond and Minicz agree that studying via songs helps learners overcome cultural barriers.
Learners can study the language in real situations of the kind they must deal with when they communicate with native speakers, which is hard to experience in textbooks. Teaching English via songs helps learners learn language structures, language skills, and many interesting points lying beneath the songs. Learners can gain more experience and feel more confident in everyday communication. Songs can be easily remembered because of their melodies; hence, learners can study and remember many things from songs better than from textbooks. Diamond emphasizes the advantages of country songs. However, if used too frequently in teaching English, country songs can bring boredom to learners and difficulties in conveying cultural values to teachers. Besides, country songs now and then contain many old words that are not used in modern life. Thus, choosing suitable songs to teach requires the teacher's careful consideration.

"English through opera", written by Beliavsky (2001), an assistant professor of ESL at the Institute of ESL at St. John's University, Queens, New York, presents another attractive way of teaching English. "Opera is drama in which the text is set to music and staged. The texts of operas are sung, with singing and stage action nearly always given instrumental accompaniment. Many operas also feature instrumental interludes (called intermezzi) and dance scenes, even extended ballets that interrupt the action. Opera began as an entertainment at the courts of the Italian aristocracy, with outdoor terraces and even enclosed tennis courts being adapted for performances. It had its origins in the last years of the 16th century, and eventually this new form of entertainment caught on with the public. Opera as a popular entertainment attained its zenith in the 19th and early 20th centuries, after which the disruptive effects of two world wars and far-reaching developments in music itself left opera in a state of fairly arrested development. Throughout its history opera has exerted great influence on other forms of music-and vice versa." (Encarta Encyclopedia, n.d.)

From what is known about opera, teaching English through this kind of music would seem very difficult. In spite of its complexity, Beliavsky still succeeds in applying opera to teaching English to foreign learners. Beliavsky reasons that "ESL students want a real life; they want real challenge. They want to hear, see, taste and touch the richness of what our world has to offer." It is true that learners need challenges, because challenges bring them confidence, a truly fundamental ability. For that reason, in teaching, teachers should give learners more challenging exercises that force them to think through the answers; of course, the challenge must remain within their reach. Using opera in teaching English is a good idea because it gives learners more knowledge of the richness of the musical world. Nevertheless, as the author of this thesis mentions above, though opera has many benefits, it is difficult to use effectively in teaching: it requires of teachers a profound understanding of opera, which takes a lot of time to acquire. At the least, teachers can take this idea as evidence of the usefulness of music, as well as of its richness, in teaching English.

As mentioned above, it is believed that remembering what has been learned is essential in second language acquisition. Music, with its strong points, can help learners do that. This opinion is very clear and logical.
Adkins (1997) concludes: "The music provides a strong emotional association with the words, the music is enjoyable and they are motivated to learn the song." To Adkins, using music in teaching is, without doubt, a good choice. With such exposure, learners can clearly observe what is expressed in the songs through the singers' performance, clothes, colors, etc. This gives learners more opportunities to notice the differences among cultures.

Levine, Baxter, and McNulty (1987) consider the cultural difficulties learners face when they study a foreign language. To these authors, these are barriers causing misunderstanding on contact with a new language: "Understanding the new culture and learning to communicate comfortably in it are as important goals as learning the rules of the language. Much of culture is hidden from sight; it cannot be seen. Culture is like an iceberg, most of which is deep within the ocean just as much of culture is deep within people. Some cultural differences can be seen right away; these are like the tip of the iceberg. Learning to communicate well across cultures means becoming aware of the hidden parts of culture." Hence, with the help of music videos, besides all the skills that can be practised, learners can learn more about cultures. Learners can notice the similarities and the differences in culture via exposure rather than via speech only. It is known that the meanings of non-verbal communication differ from culture to culture and that non-verbal communication is essential, accounting for approximately three quarters of communication. Thus, during the teaching process, teachers need to give learners some knowledge about non-verbal language and its meanings in conversation. Teaching how to use a language effectively is a real challenge because language is like an iceberg: what can be seen is only the tip. Hence, teachers should enrich their lectures with 'the part of the iceberg that is hard to see', i.e. culture.

Milano (1994) also suggests a four-step teaching technique that can help teachers teach English via music successfully. First, there is a need to explain briefly the background of the story, the context, and the characters of the song learners are going to listen to. Second, the teacher plays the recording and lets learners listen to the song, then has them arrange the phrases in the order in which they are heard. Third, after learners are familiar with the melody, they are required to write their own stanza to substitute some information in the song while maintaining the rhyme of the music. Finally, the teacher and learners can sing the song together.

The importance of music and songs in language learning can also be found in Murphey (1992, pp. 6-8), who believes that songs are easily remembered and become part of learners. Murphey gives ten advantages of music and songs in language learning: (1) To sing with vocalizations is significantly easier than speech; it seems easier to sing language than to speak it. (2) Music and songs are extremely important in the development of language in young children: 'I often call songs 'adolescent motherese'. 'Motherese' is the highly affective and musical language that adults use with infants. As children grow up, they get less and less of this. At adolescence, they appear to be adult and go through many changes which seem to reduce the amount of affective 'motherese' speech they receive.
Songs may to a certain extent replace this, filling a need that not only adolescents have, but which all of us have throughout our lives.' (4) Music and songs may occupy ever more of the world around us. (5) Songs may work on our short- and long-term memory. (6) Songs may strongly activate the repetition mechanism of the language acquisition device. (7) Songs in general use simple, conversational language, with a lot of repetition, which is just what many language teachers look for in sample texts. (8) Songs can be appropriated by listeners for their own purposes; songs happen whenever and wherever one hears them and they are, consciously or subconsciously, about the people in one's own life. (9) Songs are relaxing; they provide variety and fun, and encourage harmony within oneself and within a group. (10) For language teachers, songs are short, which makes them easy to handle in a lesson.

In summary, through the opinions of different authors, many useful aspects of using music in teaching and learning English can be clearly seen. Not only is music a means of helping learners improve English skills, but it is also a means of recreation. Music can give learners relaxation; it can also help learners be more confident in daily communication and in contacting and understanding different cultures. The usefulness of songs in teaching and learning English is considerable. With so many useful aspects, English songs should be used as materials in teaching and learning English.

SUGGESTIONS

Choosing good songs to be designed as teaching and learning materials, which can help improve all four skills, is a challenge that requires much time and patience from teachers. Murphey (1992, pp. 8-9) notes that different teachers have different ideas about teaching and learning English using musical materials; he lists 20 common concerns of the teacher about using music and songs in language teaching and learning, beginning with: 1. Administrators/ teachers/ students do not take music and song seriously.
- Songs can be used as input to help improve integrated skills, i.e. material designed from English songs should be exploited to improve English skills subconsciously; singing in the classroom itself should therefore be considered inessential.
- There are two important aspects to be mentioned: first, learners should achieve language skills and language knowledge with material designed from songs; second, material should be designed carefully.

CONCLUSION

Many aspects of English songs have been examined. The authors initially assumed that English songs would benefit learners' language skills. Not only do songs help learners overcome tiredness, but they also encourage them to improve the four language skills. Songs can be used as a good tool to teach English; teaching and studying English via songs can be one of the most effective approaches. Besides, songs assumed to be effective for teaching listening skill can be used to improve other skills as well. Firstly, the opinions on effective materials were reviewed. Then, models of designing English for specific purposes (ESP) materials suggested by Hutchinson (1994) were illustrated with explanations. The two models of Hutchinson form the basic background for an English song to be designed as a teaching and learning material for ELT classes.
Gibberellic Acid Production by Different Fermentation Systems Using Citric Pulp as Substrate/Support

Gibberellic acid (GA3) is an important phytohormone, a member of the gibberellin family, which acts as a promoter and regulator of plant growth. This study aimed to evaluate GA3 production by Fusarium moniliforme LPB03 and Gibberella fujikuroi LPB06 using different fermentation techniques, solid state fermentation (SSF), submerged fermentation (SmF), and semisolid state fermentation (SSSF), and different types of bioreactors. In all techniques, citric pulp (CP), a subproduct obtained from the extraction of orange juice, was employed as the substrate/support. GA3 production by SSF reached 7.60 g kg−1 and 7.34 g kg−1 in Erlenmeyer flasks and column bioreactors, respectively. For SmF, the highest concentration of GA3 obtained was 236.00 mg L−1 in Erlenmeyer flasks, 273.00 mg L−1 in a 10 L stirred tank reactor (STR), and 203.00 mg L−1 in a 1.5 L bubble column reactor (BCR). SSSF was conducted with a CP suspension; in this case, GA3 concentration reached 331.00 mg L−1 in Erlenmeyer flasks and 208.00 mg L−1 in a BCR. The choice of fermentation technique is undoubtedly linked to the characteristics and productivity of each process. The methods studied are inexpensive and were found to produce good proportions of GA3, making them suitable for several applications.

Introduction

Gibberellins (GAs) are a family of diterpenoid acids, an important group of phytohormones that exert different effects on the growth and development of plants, such as germination, cell elongation, expansion of leaves, and development of flowers [1][2][3]. Similar to auxins, they stimulate the activity of transference, generating greater development of xylem and phloem in ligneous plants [4][5][6]. These properties make gibberellins a valuable tool in agriculture to increase crop yields [7,8]. Among the 136 GAs isolated, gibberellic acid (GA3) has received the most attention. The use of GA3 has been extensively studied in different crop plants, and the results vary depending on the plant species, form of application, and concentration of this hormone [9][10][11][12][13][14]. GAs are found in plants, algae, fungi, and bacteria. However, due to the high concentrations found in fungi, industrial production of GAs is performed by submerged fermentation of the ascomycetous fungus G. fujikuroi; production by plant extraction is not viable because of the low concentrations of GAs, which also contributes to waste generation [15]. While the chemistry, biosynthesis, mode of action, and relationships between structure and activity of GAs have already been extensively investigated [7,16,17], little is known about the production of GAs by fermentation [18]. Several studies have evaluated ways to decrease GA3 production costs through different techniques, such as screening and genetic manipulation of microorganisms, optimization of culture conditions and nutrients, development of new fermentative processes, minimization of extraction costs, and use of cheaper substrates such as agroindustrial byproducts and wastes [19]. These supports include straws, husks, bagasses, and brans, which have high fiber content and allow working at high moisture content [20,21]. Some wastes have already been used for GA3 production, such as wheat bran [22] and rice flour [8,23]. Citric pulp (CP) is a subproduct obtained through the treatment of the liquid and solid wastes remaining from the extraction of orange juice.
These wastes include peel, seeds, and orange pulp, which constitute 50% of the fruit weight [25]. The estimate of global orange juice production (at 65 degrees Brix) for 2016/17 is two million metric tons [26]. For this purpose, it is necessary to use around 22 million metric tons of oranges; thus, 11 million tons of citric pulp are generated. In Brazil alone, the estimate is 12.9 million tons of oranges for processing [26] and around 6.5 million tons of citric pulp generated. These citric wastes are rich in carbohydrates and other nutrients and are a viable substrate for solid state fermentation (SSF) and for other fermentation techniques such as submerged fermentation (SmF) and semisolid state fermentation (SSSF), after a physical and/or chemical pretreatment. This study aimed to evaluate the production of GA3 using CP as the substrate/support for SSF, SmF, and SSSF, using different types of bioreactors.

Strain Maintenance. The strains Fusarium moniliforme LPB03 and Gibberella fujikuroi LPB06 were conserved in assay tubes previously prepared as potato dextrose agar (PDA) slants and incubated for six days at 28-30 °C. Strains were then maintained at 4 °C, for up to three months, and periodically renewed.

Substrates. The composition of CP is shown in Table 1. The pH of CP (5.76) is close to that normally used for the GA3 production process. The amount of sugar supports the use of CP as a substrate for fermentation. In addition, the carbon : nitrogen (C : N) ratio is high, which favors the production of GA3 [23]. Regarding ions, phosphate and sulfate are the most prevalent in media compositions for producing GA3 [8,18], and nitrates are also used in synthetic media. CP was initially dried, ground in a records mill, and classified in order to obtain particle sizes of less than 5 mm. Solid CP was used in SSF. For SmF, an aqueous extract of CP (AECP) was prepared using a 1 : 10 ratio of dry CP to water (w/v). The suspension was heated in a boiling water bath for 30 min and then filtered in order to remove the suspended solids, thus obtaining the AECP.

Inoculum Preparation. Inoculum of G. fujikuroi LPB06 was grown in 250 mL Erlenmeyer flasks containing 100 mL of 5% (w/v) AECP, at 28 °C, in a rotary shaker at 120 rpm, for four days. For F. moniliforme LPB03, 10% (w/v) AECP was supplemented with sucrose (30 g L−1). The strain was then transferred and grown in a rotary shaker at 120 rpm for four days, at 28 °C [27].

SSF in Erlenmeyer Flasks (Aeration by Diffusion). CP was impregnated with a nutritive solution containing 1.5 g L−1 urea and 1.5 g L−1 MgSO4·7H2O, in order to produce an initial moisture of 75%. Initial pH was 5.5-5.8, which corresponds to the natural pH of CP. The support was inoculated with 10% (v/w) of the mycelial suspension. After homogenization, the inoculated support was transferred to Erlenmeyer flasks (250 mL). These conditions had been previously optimized (data not shown). A kinetic study was performed over seven days, at 29 °C, in triplicate. Analysis of GA3 concentration was performed every 24 h.

SSF in Column Bioreactors (Forced Aeration). SSF was conducted in 0.25 L column bioreactors (4 cm diameter and 20 cm length), containing 30 g of dry CP, to study the influence of forced aeration on GA3 production. The initial moisture of CP was adjusted to 70% (v/w) with a nutritive solution composed of 1.5 g L−1 urea and 1.5 g L−1 MgSO4·7H2O.
The inoculated substrate was transferred to column bioreactors, which were then placed into a water bath with temperature control at 29 °C. Saturated air was pumped continually through the columns in order to control substrate temperature and moisture. The airflow was controlled at 30 mL min−1. The microorganism's respiratory metabolism was evaluated by determining the O2 consumption and CO2 production (Figure 1) [24]. Fermentation was carried out for seven days. Analysis was performed every 24 h.

GA3 Production in Submerged Fermentation (SmF)

SmF in Erlenmeyer Flasks. GA3 production by SmF was carried out in 250 mL Erlenmeyer flasks containing 50 mL of medium composed of 10% (w/v) AECP and 0.5 g L−1 of MgSO4·7H2O. The inoculum rate was 10% (v/v) of mycelial suspension, previously grown in medium composed of 5% (w/v) AECP. The study was conducted at 29 °C and 120 rpm in a rotary shaker, over 240 h. Samples were withdrawn every 24 h for analysis of GA3 concentration.

SmF in Stirred Tank Reactor (STR). Batch fermentation was conducted in a 10 L STR (New Brunswick Scientific, Bioflo 110) with a 6 L working volume of medium composed of 10% (w/v) AECP (18 g L−1 of total sugars) and 0.5 g L−1 of MgSO4·7H2O. The medium was inoculated with mycelial suspension of G. fujikuroi at a rate of 10% (v/v). GA3 production was performed at 29 °C, with an initial pH of 5.0, agitation of 500 rpm, and an aeration rate of 1 L min−1. Fermentation was carried out over 240 h, and samples were withdrawn every 24 h for analysis.

SmF in Bubble Column Reactor (BCR). GA3 production was scaled up in a bubble column reactor (BCR), under pneumatic agitation. The BCR is a cylindrical tube of borosilicate glass with a total volume of 1.5 L and a working volume of 1 L (diameter of 80 mm and height of 300 mm) (Figure 2). Sterile air is injected through the inlet at the bottom of the column, passing through a porous plate and forming bubbles. The top has three connections: one lateral connection for air outlet, one lateral connection for inoculum or culture medium feeding, and a central connection for sampling during fermentation. SmF was conducted for 216 h, with production medium composed of 10% (w/v) AECP and 0.5 g L−1 of MgSO4. The temperature was maintained at 29 °C, with a 10% (v/v) inoculum rate, an initial pH adjusted to 5.0, and an aeration rate of 1 L min−1.

Production of GA3 by Semisolid State Fermentation (SSSF)

SSSF in Erlenmeyer Flasks. A suspension with 5% (w/v) of CP solids was used for GA3 production. 100 mL of the medium, supplemented with 20 g L−1 of sucrose and 0.6 g L−1 of urea, was distributed into 250 mL Erlenmeyer flasks. Flasks were inoculated with a mycelial suspension of G. fujikuroi at a rate of 10% (v/v). SSSF was performed at 29 °C and 120 rpm in an orbital shaker. Samples were withdrawn every 24 h for GA3 analysis.

SSSF in BCR. GA3 production was performed in a 1.5 L BCR containing 1 L of culture medium (5% (w/v) CP suspension supplemented with 20 g L−1 of sucrose and 0.6 g L−1 of urea). Assays were conducted at 29 °C with a 10% (v/v) inoculum rate, an initial pH of 5.0, and an aeration rate of 1 L min−1, for 216 h.

Analytical Procedure. After SSF, GA3 was extracted with phosphate buffer (pH 8.0) and filtered through membranes. After SmF and SSSF, the broth was filtered to remove biomass and CP particles. GA3 was quantified by spectrophotometry at 254 nm, according to Holbrook et al. [28].
Biomass was determined by the ergosterol method [29]. The respiratory metabolism of the microorganism in SSF was evaluated by determining the O2 consumption and CO2 production, as an indirect method for biomass evaluation. The data acquisition system was composed of software (Fersol 2) and sensors [30]. These sensors acquired online data for the fermentation process parameters (O2, CO2, temperature, and humidity).

GA3 Production by SSF Using F. moniliforme LPB03. GA3 production was carried out by SSF with the strain F. moniliforme LPB03 in two different bioreactors, Erlenmeyer flasks (aeration by diffusion) and column bioreactors (forced aeration) (Figure 3). The highest productivity of GA3 (0.06 g kg−1 h−1 of dry CP), with a production of 7.34 g kg−1, was observed in column bioreactors after 120 h. In Erlenmeyer flasks, the production was 7.60 g kg−1 of dry CP in 144 h, which represents a productivity of 0.05 g kg−1 h−1. This means that production with forced aeration led to a gain of 13.72% compared to the results obtained in Erlenmeyer flasks. This increase may have occurred due to differences in bioreactor configuration and the influence of aeration and light. Such a positive effect of aeration on GA3 production was demonstrated by Machado et al. [31], who achieved 0.49 g kg−1 of dry substrate in Erlenmeyer flasks and 0.93 g kg−1 of dry substrate in aerated columns, corresponding to an increase of 87%. According to Taiz and Zeiger [32], the production of gibberellins is also induced by light. Column bioreactors are more exposed to light; this may be one reason for the better GA3 synthesis. Forced aeration through a solid fixed bed is also responsible for heat dissipation, since heat is generated by the exothermic reactions of the fungal metabolism. When heat is not dissipated, high temperatures are attained, and these could inhibit growth. The configuration of the column bioreactor (forced aeration and direct exposure to sunlight) likely favored the production of biomass, which was higher than that observed in Erlenmeyer flasks (aeration by diffusion without illumination). The growth profile observed in column-type bioreactors was significantly accelerated compared to Erlenmeyer flasks: the exponential growth phase with forced aeration occurred from 24 to 96 h, whereas with aeration by diffusion it was observed between 24 and 120 h.

The respiratory metabolism of the fungus is closely linked to the growth of the microorganism [33] (Figure 4). After 24 h of fermentation, a more pronounced cellular respiration could be seen to begin, due to the fact that the organism was already adapted to the environment. The highest rates of respiration (O2 consumption and CO2 production) are thought to be related to the exponential growth phase, which was expected. After 48 h of fermentation, there was a drop in O2 consumption and CO2 production, which is probably linked to the beginning of the stationary phase. After 70 h, the respiration of the fungus remained almost constant until the end of fermentation, serving fungal maintenance and the synthesis of the secondary metabolite, GA3.

GA3 Production by SmF Using G. fujikuroi LPB06. GA3 production by G. fujikuroi LPB06 was performed by SmF in Erlenmeyer flasks, with AECP. The strain had been previously screened and showed the best adaptation to this fermentation system (data not shown).
The synthesis of GA3 started after 48 h (Figure 5), reaching the highest concentration (236.00 mg L−1, or 2.73 g kg−1 of dry CP) at 216 h, with a productivity of 1.09 mg L−1 h−1. However, the highest productivity, 1.52 mg L−1 h−1, was obtained at 120 h of fermentation. GA3 concentration did not increase continuously during fermentation; this behavior was also observed during GA3 production by SSF. The scale-up of GA3 production by SmF in a 6 L (working volume) STR (Figure 5) provided higher GA3 concentration and productivity, 273.00 mg L−1 (3.17 g kg−1 of dry CP) and 2.8 mg L−1 h−1, respectively, after 96 h of fermentation. In the BCR, using the same aeration rate that was used in the STR, 1 L min−1, GA3 production started after 96 h of fermentation, and the highest GA3 concentration reached 203.00 mg L−1 (0.94 mg L−1 h−1) in 216 h.

GA3 production reached a higher concentration and productivity in the STR than in Erlenmeyer flasks and the BCR. The increase of GA3 production and productivity in the STR is related to the influence of forced aeration and mechanical agitation, which promote better homogeneity and better mass and oxygen transfer during fermentation. According to Escamilla et al. [23] and Lale and Gadre [34], GA3 production is influenced by a high concentration of dissolved oxygen in the medium. The influence of agitation and aeration rate on GA3 production in STRs has been observed in previous work, such as that of Durán-Páramo et al. [35]. Shukla et al. [8], on the other hand, used a 1.80 L (working volume) STR and higher agitation speed and aeration (700 rpm and 1 vvm), which reached 1.00 g L−1 (5.90 mg L−1 h−1) of GA3, using a medium composed of 80.00 g L−1 of glucose as the carbon source. In the present study, 273.00 mg L−1 was obtained (2.80 mg L−1 h−1) using a culture medium composed of an agroindustrial subproduct (AECP) without the addition of a carbon source, which represents an advantage in terms of process costs.

Despite the aeration rate of 1 vvm, GA3 production was lower in the BCR than in the STR. This bioreactor did not provide very good homogeneity and mass transfer, probably due to the larger size of the fungal pellets (data not shown), which led to mass transfer limitations and lowered nutrient and oxygen levels, especially in the center region of the pellets [36]. The influence of pellet size on GA3 production was described by Escamilla et al. [23]. Chavez-Parga et al. [37] used a 3.5 L airlift bioreactor for GA3 production, with an aeration rate of 1.6 vvm (5.6 L min−1); in that case, GA3 production reached 100.00 mg L−1 (0.30 mg L−1 h−1). In the airlift bioreactor, the agitation of the broth is pneumatic, similar to the BCR; however, the airlift bioreactor contains an internal tube, which organizes and directs the airflow in the system. The GA3 concentration and productivity obtained in the present study were around two and three times higher, respectively, than those obtained by Chavez-Parga et al.

GA3 Production by SSSF Using G. fujikuroi LPB06. GA3 production was also carried out by SSSF in Erlenmeyer flasks using suspended CP solids (Figure 6), promoting a maximum GA3 concentration of 331.00 mg L−1 (7.70 g kg−1 dry CP) after 240 h of fermentation, or a productivity of 1.38 mg L−1 h−1. In terms of dry substrate, GA3 production by SSSF was similar to the production obtained by SSF.
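To make the yield and aeration comparisons above easier to follow, the short sketch below re-derives the reported productivities and normalises the aeration rates as vvm. The input figures are taken from the text; the helper function and variable names are ours, not the study's.

```python
# Re-derivation of the productivities and vvm values reported in the text.

def productivity(amount, hours):
    """Product formed per unit time (mass- or volume-based, depending on input)."""
    return amount / hours

# SSF (per kg of dry citric pulp)
ssf_flask  = productivity(7.60, 144)   # ~0.053 g kg-1 h-1 (reported as 0.05)
ssf_column = productivity(7.34, 120)   # ~0.061 g kg-1 h-1 (reported as 0.06)

# SmF (per litre of medium)
smf_flask = productivity(236.0, 216)   # ~1.09 mg L-1 h-1
smf_str   = productivity(273.0, 96)    # ~2.84 mg L-1 h-1 (reported as 2.8)
smf_bcr   = productivity(203.0, 216)   # ~0.94 mg L-1 h-1

# vvm = air flow (L min-1) / working volume (L): the same absolute flow of
# 1 L min-1 corresponds to very different vvm values in the two reactors.
vvm_str = 1.0 / 6.0   # ~0.17 vvm in the 6 L working-volume STR
vvm_bcr = 1.0 / 1.0   # 1 vvm in the 1 L working-volume BCR

print(f"SSF column productivity: {ssf_column:.3f} g kg-1 h-1")
print(f"STR: {vvm_str:.2f} vvm, BCR: {vvm_bcr:.2f} vvm")
```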
The process was also performed in a BCR, where GA3 production reached 208.00 mg L−1 (productivity of 0.96 mg L−1 h−1) after 216 h of fermentation; this is equivalent to the results obtained in the BCR by SmF. SSSF presented some advantages compared to SmF because CP is present directly in the medium: the substrate does not need chemical or enzymatic pretreatment before fermentation, and CP particles provide nutrients as well as support to the filamentous fungus during fermentation. The adhesion of the fungus to CP particles as a support is evidenced in Figure 7.

Figure 7: Microscopy (200x) of a CP particle after fermentation, used to evaluate the adhesion of the fungus to the CP particle as a support.

Comparison between Different Fermentation Techniques for GA3 Production. GA3 production was performed by SSF, SmF, and SSSF, using different bioreactors, with CP as the substrate/support (Table 2). The highest GA3 production and productivity were obtained using SSF (7.60 g kg−1 of dry CP). In terms of GA3 production per kg of dry CP, SSSF presented results comparable to SSF (7.60 g kg−1 of dry CP). The production of biocompounds by SSF and SmF has been studied comparatively, and certain bioactive compounds have been found to be produced in higher quantities in SSF, whereas other compounds have been extracted using SmF [38]. Higher production of biomolecules, such as enzymes [39,40] and organic acids [41], is generally obtained through SSF fermentation systems. However, this fermentation technique and SSF bioreactors are not easily scaled up. SSF commonly promotes higher concentrations of biomolecules, in comparison to SmF or SSSF, due to the higher concentrations of substrate under limited free water conditions. Moreover, SSF reproduces a natural environment for filamentous fungi growth [42][43][44]. These fermentation systems present advantages, such as the use of solid agroindustrial wastes/subproducts as substrates in their natural form, which contributes to lower wastewater production [43,44]. In this way, CP proved to be a very good substrate for GA3 production in different bioreactors and with different fermentation techniques, which provides a very positive outlook for this process.

Conclusions

The results of this study demonstrate that SSF provides the highest GA3 production and productivity by F. moniliforme. Furthermore, this technique uses a low amount of water, which consequently lowers the cost and waste generation associated with the process. In the studied bioreactor systems, GA3, which is a high-value molecule, was produced in good proportions. Medium to high concentrations of the product can be achieved with simple processes employing CP as a substrate. CP is a subproduct of the orange juice industry, which is generated in abundance. Therefore, the costs of GA3 production could be reduced, making its application viable for different important agricultural cultivars.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.
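As a companion to the Analytical Procedure above, the following sketch shows how a spectrophotometric calibration at 254 nm can be turned into GA3 concentrations. The calibration points, absorbance value, and function names below are invented for illustration; they are not the study's data, which follow the method of Holbrook et al. [28].

```python
# Hypothetical GA3 quantification from absorbance at 254 nm.
import numpy as np

# Invented calibration points: GA3 standards (mg L-1) vs. absorbance at 254 nm
standards_conc = np.array([0.0, 50.0, 100.0, 200.0, 400.0])
standards_abs  = np.array([0.00, 0.11, 0.22, 0.45, 0.90])

# Linear least-squares fit (Beer-Lambert behaviour assumed in this range)
slope, intercept = np.polyfit(standards_conc, standards_abs, 1)

def ga3_concentration(absorbance):
    """Convert a sample absorbance into a GA3 concentration (mg L-1)."""
    return (absorbance - intercept) / slope

# With the invented calibration above, an absorbance of 0.53 maps to roughly
# 235 mg L-1, i.e. the order of magnitude of the SmF flask results.
print(f"Sample at A254 = 0.53 -> ~{ga3_concentration(0.53):.0f} mg L-1")
```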
CONSERVATIVE CARE COST ANALYSIS OF PATIENTS OF LONG BONE FRACTURE IN EMERGENCY ROOM. A REVIEW TO SEE THE SUFFICIENCY OF HEALTH INSURANCE COSTS

Article history: Submitted: June 26, 2020; Received in revised form: July 01, 2020; Accepted: August 18, 2020.

Introduction: Long bones are bones that include the humerus, radius, ulna, femur, tibia, and fibula. Aside from the fibula, the main function of long bones is supporting weight and facilitating movement. Therefore, a fracture in a long bone will cause limited movement or even its loss. Long bone fracture is still a global problem because the number of events is quite large. This is in line with the increase in socioeconomic status and the incidence of traffic accidents, which is one of the causes of fractures.

Method: This is a prospective study to determine the magnitude of the cost of conservative treatment in cases of long bone fractures at RSUD dr. Soetomo. The study design used was a prospective cohort. The sample size used in this study was determined by consecutive sampling, i.e. patients who met the inclusion criteria in the period May to August 2017.

Results: From the observations over four months, from May 2017 to August 2017, at the Emergency Room of Soetomo General Hospital, we found 77 patients with long bone fractures that were casted. Of these patients, 38 were placed in a slab and 39 were placed in a circular cast. Among the 39, 17 patients were excluded according to the exclusion criteria and 22 patients were included as the study sample according to the inclusion criteria. Using a paired sample T test with a value of α = 0.05, a significance of 0.025 was obtained; since 0.025 < 0.05 (α), the difference is significant.

Discussion: From the results, there was a difference between the BPJS rate for placing a circular cast and the real cost of placing a circular cast on long bone fractures. In addition, we found that the average BPJS rate is greater than the real cost, which means we can conclude that the BPJS reimbursement can cover the cost of conservative therapy in cases of long bone fractures.

Conclusion: From the economic aspect, a longer stay means higher costs that must be paid by the patient (the payer) and received by the hospital. This only applies to real tariffs, whereas under INA-CBG's tariffs a long or short length of stay does not affect the cost. © 2020 Medical and Health Science Journal. DOI: 10.33086/mhsj.v4i1.1415

INTRODUCTION

Long bones are a main framework for movement, and fracture of a long bone can limit movement. The occurrence of long bone fracture is still a global problem because the number of events is quite large. This is in line with the increase in socioeconomic status and the incidence of traffic accidents, which is one of the causes of fractures [1]. Long bone fractures mainly result from significant trauma. Data show that the incidence of long bone fractures is 21 fractures per 1000 people per year in the United States, higher in men than in women; similar data have also been reported in Norway and the United Kingdom [2]. Approximately 2109 patients sought emergency care at the Department of Orthopedics and Traumatology at Soetomo General Hospital in 2016; of these, about 360 (17%) were patients with long bone fractures treated with conservative therapy [3]. In every case of long bone fracture, it is necessary to choose the right therapy. In general, therapy for long bone fractures is divided into conservative (non-operative) and operative therapy. The costs of conservative and operative therapy are quite different.
James Heckman, in 1997, showed that the cost of therapy in tibia fracture cases is 65% lower with conservative than with operative therapy [4]. Recently, the health financing model in Indonesia has been mainly centered on BPJS (Social Security Organizing Agency) Health and health insurance. Funding by BPJS Health based on the INA-CBGs (Indonesia Case Base Groups) package makes it possible to carry out conservative treatment of long bone fractures comprehensively. In BPJS, the products covered are regulated by the national formulary [5]. Therefore, to estimate the adequacy and efficiency of the costs of conservative therapy for long bone fractures, this study focuses on the costs covered by the INA-CBG package and the actual costs of conservative therapy for patients with long bone fractures at dr. Soetomo General Hospital [3].

CASE(S)

Of the 22 patients who met the inclusion criteria, 14 (65%) were male and 8 (35%) were female. From these patients we obtained a number of cases with diagnoses of fractures in the upper limb, such as closed antebrachial fracture, closed distal radius fracture, and Galeazzi fracture, and closed fractures in the lower extremities, such as closed tibia fracture, calcaneus fracture, and metatarsal fracture. Most cases were fractures in the antebrachial section. In antebrachial fractures, we perform closed reduction and immobilization using a long arm cast; in distal radius fractures, immobilization is likewise done by placing a long arm cast. In the lower extremities, most fractures occur in the tibia; these can be managed conservatively by closed reduction and immobilization with a long leg cast, whereas for metatarsal fractures conservative management was performed with a below-knee cast.

From the data, it was found that the highest incidence of closed fractures occurred in the first and second decades of life. Children undergo dramatic changes in motor development, so they tend to fall, for example from furniture or stairs; meanwhile, in the second decade of life, the trauma mechanism mainly occurs during outdoor activities. This is broadly in line with the study conducted by Hedstrom EM et al. in 2010 in Sweden [6]. From Table 2, the incidence of closed fractures in men is twice that in women; this is caused by trauma, especially trauma due to traffic accidents, workplace accidents, and activities/sports. Based on Table 3, it was found that the incidence of fractures is more common in the upper limb than in the lower limb.

DISCUSSION

Using a paired sample T test with a value of α = 0.05, a significance of 0.025 was obtained. Because the significance value of 0.025 < 0.05 (α), it can be concluded that H0 is rejected, which means there is a difference between the BPJS rate and the real cost of placing a circular cast on long bone fractures. In addition, the average BPJS rate is greater than the real cost, which means that the BPJS reimbursement can cover the cost of conservative therapy in cases of long bone fractures. This happens because patients who are treated conservatively do not require a long treatment time. According to Cleverley (1997), one way that costs under a package payment system (Case Base Groups) can be reduced is by reducing patient length of stay. Meanwhile, Sudra (2009) reported that, from the medical aspect, the longer a patient stays in hospital, the more the quality of medical performance declines, because the patient must be treated longer (longer recovery).
Conversely, the quality of medical performance is good when patients are treated over a shorter period [7]. From the economic aspect, a longer length of stay means higher costs that must be paid by the patient (the payer) and received by the hospital. This only applies to the real tariff, whereas under INA-CBG's tariff a long or short length of stay does not affect the cost [8]. The results of this study are broadly in line with a study conducted at Kalisat Hospital, Jember, from January to June 2015, in which there was a statistically significant difference between the real cost rate and the INA-CBG package rates in the payment of diabetes mellitus patient claims; in that case, a lower real cost rate was obtained [9]. However, different results were obtained in a study conducted at RSUD dr. Achmad Mochtar, Bukittinggi, from January to December 2016. In that study, a negative rate difference was found, indicating that hospital rates were greater than INA-CBG payments; the negative rate difference was found in surgical cases involving operative medical measures [10].

The weaknesses of this study include the short study period, only 4 months (May to August 2017), so that the number of cases obtained is relatively low. Further study is therefore needed, especially to assess patient satisfaction and operator satisfaction. In addition, this study received funding from sponsors in accordance with the agreed cooperation.

CONCLUSION

Conservative therapy with a circular cast for closed long bone fractures is still one of the main choices today, especially in children. The current health cost financing model mainly uses BPJS based on the INA-CBGs package. The rate of the INA-CBG package is greater than the real cost rate for patients with closed fractures treated conservatively in the Emergency Room of Dr. Soetomo General Hospital. Further research is needed to examine conservative therapy in the treatment of closed fractures more deeply, especially in terms of patient satisfaction as well as from the operator's or doctor's point of view.
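For readers who want to reproduce the kind of statistical comparison reported above, the sketch below runs a paired-samples t-test in Python. The per-patient cost figures are invented placeholders, since the study's raw data are not reproduced in the text; only the test procedure and the α = 0.05 decision rule follow the paper.

```python
# Illustrative paired-samples t-test of the kind reported in the study
# (alpha = 0.05, significance 0.025). The figures below are placeholders.
from scipy import stats

# Hypothetical per-patient costs (thousand IDR): INA-CBG tariff vs. real cost
ina_cbg_tariff = [950, 870, 1020, 990, 910, 1005, 880, 940]
real_cost      = [720, 830, 900, 760, 850, 790, 810, 770]

t_stat, p_value = stats.ttest_rel(ina_cbg_tariff, real_cost)

alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < alpha:
    print("Reject H0: the INA-CBG tariff and the real cost differ significantly.")
else:
    print("Fail to reject H0: no significant difference detected.")
```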
Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings

Among the myriad of technical approaches and abstract guidelines proposed on the topic of AI bias, there has been an urgent call to translate the principle of fairness into the operational AI reality with the involvement of social sciences specialists to analyse the context of specific types of bias, since there is no generalizable solution. This article offers an interdisciplinary contribution to the topic of AI and societal bias, in particular against the poor, providing a conceptual framework of the issue and a tailor-made model from which meaningful data are obtained using Natural Language Processing word vectors in the pretrained Google Word2Vec, Twitter GloVe, and Wikipedia GloVe word embeddings. The results of the study offer the first set of data that evidences the existence of bias against the poor, and they suggest that Google Word2Vec shows a higher degree of bias when the terms are related to beliefs, whereas bias is higher in Twitter GloVe when the terms express behaviour. This article contributes to the body of work on bias, both from an AI and a social sciences perspective, by providing evidence of a transversal aggravating factor for historical types of discrimination. The evidence of bias against the poor also has important consequences in terms of human development, since it often leads to discrimination, which constitutes an obstacle to the effectiveness of poverty reduction policies.

Introduction

It is widely documented that Artificial Intelligence (AI) reproduces and often amplifies biases against historically disempowered groups (Bolukbasi et al. 2016; Garga et al. 2018; Manzini et al. 2019; Nadeem et al. 2020). This constitutes a risk of exacerbating those biases offline and eventually increasing discrimination (Vinuesa et al. 2020). AI systems are not ethically neutral but, more and more, we are all dependent on AI for our decisions (Fry 2018). In the information society, AI is at the core of high-risk services, such as healthcare (Watson et al. 2019; Zetterholm et al. 2021; Vallès-Peris and Domènech 2021), financial services (Kostka 2019; Townson 2020; Lee and Floridi 2020; Aggarwal 2020; Anshari et al. 2021), justice and security (Poitras 2014; Hauge et al. 2016; Merler et al. 2019; Green et al. 2019), and even the military (de Vynck 2021). AI is also an integral part of marketing, predicting users' interests through big data that contain each person's personal digital profile, in what has been called "surveillance capitalism" (Zuboff 2019). While the number of algorithmic systems performing in an ethically questionable manner continues to grow (Tsamados et al. 2021a), governmental efforts to regulate AI have gained momentum (Smith et al. 2016; SCMP Research 2020; European Commission 2021). At a regional level, the European Union is considered to have an ethically superior regulatory framework in terms of citizens' rights (Allison and Schmidt 2019; Gill 2020; Imbrie et al. 2020; Roberts et al. 2021), which has a positive impact at a global level (Bradford 2020). At the core of the EU AI framework is the principle of "diversity, non-discrimination and fairness", including the "avoidance of unfair bias", especially in the case of historically discriminated groups (HLE-GAI 2019).
However, the legal framework is not sufficient, considering that the ethical principles contained in the law are described as too abstract to implement in practice, often leading to counterproductive practices, such as ethics shopping, ethics blue-washing, ethics lobbying, ethics dumping or ethics shirking (Floridi 2019a). There is a growing agreement on the urgent need to know how to translate this general ethical framework into operational AI development (Floridi 2019b; Vakkuri et al. 2020; Morley et al. 2021a, b). In this context of "moral panic" (Ess 2020), there has been a proliferation of AI ethics guidelines [more than 173 documents in existence in 2021 (Algorithm 2021)], a panoply of strategy proposals to detect and correct bias in the data of AI NLP systems (Bolukbasi et al. 2016; Garga et al. 2018; Manzini et al. 2019; Nadeem et al. 2020; Zhao et al. 2021), incipient attempts to train algorithms to detect bias (Sap et al. 2020; Jiang et al. 2021) and algorithmic mathematical constructs which try to achieve partial approximations to fairness (Dwork et al. 2011; Hardt et al. 2016; Kroll et al. 2017; Green and Hu 2018; Card and Smith 2020). However, to translate the principle of AI fairness (HLE-GAI 2019; European Commission 2021) into an operational reality, an in-depth analysis is required, far from the existing turmoil of quick-fix solutions. Bias within AI systems is only the tip of the iceberg, since AI reproduces the prejudices of the societies where it is trained (West et al. 2019; Vinuesa et al. 2020) in an unsupervised manner (Radford et al. 2019; Talmor et al. 2021), whether within the data (Rudinger et al. 2018; Chiappa et al. 2020), the algorithms (Mittelstadt et al. 2016; Tsamados et al. 2021b) or even as a result of development procedures (Floridi 2019a; Vakkuri et al. 2020). Therefore, trying to solve the ethical problems of AI only through a technical approach is clearly insufficient, since it has only a superficial impact on fundamental inequalities (Zajko 2021). Blodgett et al. (2020) analysed 146 papers studying bias in NLP systems (published prior to May 2020) and concluded that these papers do not provide an actual conceptualisation of bias outside NLP systems. Card and Smith (2020) suggest that the literature on fairness within ML depends mostly on assumptions. A growing number of voices highlight the need for involvement from the social sciences perspective (Green and Hu 2018; Kusner and Loftus 2020; Zajko 2021), since bias needs to be discussed in the "onlife", to use Floridi's (2015) term. In fact, the aim to debias AI systems is based on the illusion that there is a neutral, value-free environment, when debiasing is really meant to align systems with the dominant scientific, social and political values (Green 2020). When we analyse the nature of bias, it becomes evident that we cannot draw a hard line between what is sufficient and insufficient proof of it, since it is based on our beliefs and is a characteristic of human cognition (Allport 1954; Reicher 2007; Pettigrew 2020; Paolini et al. 2021). In fact, the reason why human beings are not perceived solely on the basis of their individual characteristics is that we do not have enough time to understand every single detail of every person. Therefore, we put information into categories and generalise based on previous experience. Overgeneralised and erroneous beliefs lead to prejudices.
When prejudices have a social category, they are described as stereotypes and, when they are transmitted through the linguistic process, we know them as bias, generating a self-perpetuating cycle in which prejudices are socially shared and maintained (Maass 1999; Beukeboom and Burgers 2019). Where bias is the linguistic expression of shared social prejudices within a specific culture, discrimination has been defined as an action of exclusion as a result of prejudice (Allport 1954). But seeing the tip of the iceberg (bias in AI systems) also tells us that there is an iceberg. Bias in AI acts as a mirror, showing the prejudices that go unnoticed offline and helping us to evidence an unnoticed discriminatory phenomenon (Hoffmann 2019). While algorithms reproduce inherent tensions at a technical level (Hacker 2018), these data can be used as a warning of a stigma, which can then be studied from a social sciences perspective since it has a history behind it (Zajko 2021). This is precisely what this paper offers: evidence of bias against the poor in social networks, a neglected type of discrimination in both the AI bias and the social sciences literature, named "aporophobia" by the philosopher Adela Cortina (2017). The bias against the poor, which often leads to discriminatory behaviour, has dramatic repercussions since it hinders the effective implementation of poverty reduction policies (Arneson 1997; Applebaum 2001; Everatt 2009; Nunn and Biressi 2009), hampering the work towards the first Sustainable Development Goal of the United Nations (no poverty). It also has a clear impact on historically discriminated groups (Alesina and Glaeser 2013) and it is closely related to gender discrimination in capitalist development (Folbre 2021). Sadly, it has been underestimated as a transversal type of discrimination, since there is a tendency within the anti-discrimination discourse towards single-axis thinking (Crenshaw 1991). However, stereotypes exist within a network of beliefs (Freeman and Ambady 2011), where there is a dynamic interaction among them (Ridgeway and Smith-Lovin 1999) and an aggravating effect for what Hoffmann defines as the "multi-oppressed" (2019). Eubanks (2018) identifies algorithms that discriminate against the poor and O'Neil (2016) describes how some predatory AI systems target people in need. However, there is no evidence about bias against the poor in the existing literature. This study aims to fill that gap by offering a first approach to the identification and measurement of bias against the poor in the publicly available Google News Word2Vec, Wikipedia GloVe and Twitter GloVe pre-trained word embeddings, providing a study at scale and in context (Joseph and Morgan 2020). This article offers an interdisciplinary contribution to the topic of AI and societal bias, in particular against the poor, and it is organised in five parts. First, it provides an analysis of the roots of discrimination against the poor. Then, we present the materials and methods being used, such as the rationale behind the target terms and attributes that are being searched, the pre-trained word embeddings that have been analysed and the methodology to identify and measure bias against the poor using Natural Language Processing (NLP). The key results are then analysed to discuss the main implications and conclude.
The roots and consequences of bias against the poor
Redistributive justice is at the very foundation of welfare states, where the principle of equal opportunity is considered to be the main political answer to reduce poverty and an attempt to promote social mobility. But the rhetoric of equal opportunity has also been associated with the blaming of the poor, who are considered responsible for not climbing up the social ladder (Young 1964; Anderson 1999; Sandel 2020). However, meritocracy, understood as a system where you prosper by working hard, is more a collective entelechy than a reality: only 7% of the United States population in the bottom 20% of incomes reach the top 20% in their lifetime (Chetty et al. 2014), and some European countries, such as Germany, have lower social mobility than the US (OECD 2018). In fact, the principle of equal opportunity, per se, can be considered an ideal, since every individual is inevitably exposed to different environments from the moment of birth (Fishkin 2014). This shared belief, though, assigns the responsibility to avoid poverty to each individual, promoting a competition among citizens seeking to work their way up and obtain social recognition (Fraser and Honneth 2003; Mounk 2017), especially in the US, where citizens overestimate the real possibilities of climbing up the ladder, as opposed to Europeans, who tend to underestimate their possibilities of social mobility (Alesina et al. 2018). In the meritocratic logic, where technocratic governments are mainly oriented towards the market, the rich are considered to be the winners, associated with being hard-working and smart, while the poor are considered to deserve their fate (Mounk 2017; Sandel 2020). The disempowerment and resentment of the poor are aggravated by the increasing inequality in the US since the 1980s (Piketty et al. 2018), which has been boosted by the COVID-19 crisis, according to Gini coefficient estimates. The bias against the poor, therefore, is aggravated by the blamefulness associated with this condition and leads to discrimination. This has an impact at a macro-international level, where developing countries are considered to be responsible for their poverty, instead of working towards fairer deals in areas such as international commerce and financial markets (Sampedro 1972; Tortosa 2001; Yapa 2002; Lamo de Espinosa 2004; Reis et al. 2005). At a meso-national level, discrimination towards the poor constitutes a hindrance to the effective implementation of poverty reduction policies (Arneson 1997; Applebaum 2001; Everatt 2009; Nunn and Biressi 2009), where policy-makers are forced to justify which poor are victims of bad luck, and therefore deserving of support, and which are not ("luck egalitarianism") (Anderson 1999). Finally, at a micro-personal level, the stigma towards the poor generates a self-depreciation which contributes to a self-fulfilling prophecy of failure to climb up the ladder (Honneth 1996; Habermas 1990; Taylor 1931). Nevertheless, bias against the poor reflects a morally narrow view of social merit, limited to economic and professional credentialism. It is only when the focus is on salary and consumption that badly paid jobs lack social recognition. During the COVID-19 crisis, precariously paid workers, such as delivery and hospital staff, enjoyed an increased social recognition, which is essential to overcome the feelings of shame among the stigmatised and the beliefs of deservingness on the side of the stigmatisers
(Goffman 1963; Hegel 1991; Honneth 1996). By offering preliminary evidence about the bias against the poor, this study only scratches the surface of a global and transversal type of social exclusion that potentially affects the 700 million people (10% of the total world population) who currently live in extreme poverty, according to the United Nations (evidence suggests that global poverty could increase by 8% as a result of COVID-19), and is not limited to developing countries (in 2019, 92.4 million people in the EU-27 were at risk of poverty or social exclusion, 21.1% of the EU-27 population, according to Eurostat).

Target terms and attributes
Bias cannot be treated in a generalizable manner, but in context (Zajko 2021), for which a framework is required, from the social sciences perspective, to obtain and analyse meaningful data that can be offered by AI. With that purpose, this paper offers a model to identify and interpret bias based on Cortina's work on aporophobia (rejection towards the poor) (2017) and Allport's categorization of the degrees of "negative action" associated with prejudices (1954). Cortina uses a list of 17 expressions associated with rejection towards the poor. In our study, we have used 262 synonyms, antonyms and related terms to Cortina's expressions to understand how these are related to the concepts of "rich" and "poor". We investigate whether or not a set of favourable attributes is closer to the target term "rich" (positive bias towards the rich) and whether or not a set of unfavourable attributes is more closely related to the target term "poor" (bias against the poor). This preliminary approach to measuring bias against the poor has some limitations due to the polysemy of the terms "rich" and "poor". The term "poor" carries a negative sentiment in English which is not limited to socio-economic topics, and the opposite happens with the term "rich". One can talk, for example, about poor results or poor language, which surely has no direct relation to poverty, described as the lack of freedom to carry out a meaningful life with dignity (Sen 2001). Used as adjectives, the terms "rich" and "poor" can be associated with positive and negative attributes for reasons that might have no direct connection to bias against poor people. Therefore, the obtained results need to be considered with caution. Further studies using a larger list of key terms related to poverty which are not polysemous (such as "unemployed" or "homeless") should be carried out to contrast the results. However, one should also carefully analyse why such a negative sentiment is associated with the adjective "poor" while there is a positive connotation of the adjective "rich", as is the case with other existing types of bias, in terms of race, for example (where implicit positive connotations are associated with the term "white" as opposed to negative implicit connotations with the term "black", as shown in the Harvard Implicit Association Test) (Xu et al. 2014). Further studies should also analyse the origin of the negative connotations associated with the term "poor". Following Allport's categorization of the "negative action" resulting from prejudices, the favourable and unfavourable attributes whose association with the target terms "rich" and "poor" is measured are grouped into a first category expressing "belief" (28 favourable and 23 unfavourable words) and 5 categories expressing different degrees of favourable (93 words) or unfavourable attitudes (119 words).
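Before moving on, a concrete illustration of this grouping may help. The sketch below organises a handful of the attributes quoted later in the paper under Allport's scale. This is our illustration, not the authors' released code: the category names are assumptions following Allport's classic degrees of negative action plus the "belief" category described above, and the word lists are tiny samples, not the full 262-term inventory.

import json  # only used to pretty-print the structure below

# Two target terms whose relative distances to each attribute are measured.
TARGET_TERMS = ("rich", "poor")

# Sample unfavourable attributes grouped by (assumed) Allport category.
UNFAVOURABLE_ATTRIBUTES = {
    "belief": ["substandard", "mediocre", "inferior"],
    "antilocution": ["contempt", "disregard"],
    "avoidance": ["aversion", "rejection"],
    "discrimination": ["insult", "hate speech"],
    "physical_attack": ["hate act"],
}

# Sample favourable attributes (the study uses 93 favourable words in total).
FAVOURABLE_ATTRIBUTES = {
    "belief": ["goodwill", "sympathy"],
    "communication": ["politeness", "cordiality", "friendliness"],
}

print(json.dumps(UNFAVOURABLE_ATTRIBUTES, indent=2))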
The different categories defined by Allport are not sealed compartments, but a conceptual way to organize the favourable and unfavourable expressions that are part of the study and can potentially express bias against, or in favour of, the poor and the rich.

Word coding/embeddings
We have measured the semantic distance between the 262 favourable and unfavourable attributes related to Cortina's expressions and the key terms "rich" and "poor" using vector word representations, which are the state-of-the-art technique in natural language processing. More specifically, we have observed the semantic relationships between the vector word representations in word embeddings (key terms and attributes) in a simple and intuitive way using the cosine distance. In our model, we have proposed the use of three categories of words, which we have called favourable, neutral and unfavourable attributes, to measure the semantic distance to the key terms "rich" and "poor" in order to detect and measure bias. The concept of embedding was born as dense vector representations of words or sentences, with the ability to map syntactic and semantic relations in a vector space, which is core to Natural Language Processing (NLP) applications (Almeida and Xexéo 2019; Camacho-Collados and Pilehvar 2020). Word embeddings are classically classified into two types: count-based embeddings, whose representation is derived from word counts and word frequencies, and predict-based embeddings, which are derived from word context (words neighbouring a core word). The latter are the basis of the cutting-edge Neural Language Model approach (Adamuthe 2020). The most used embeddings are the predict-based family (Gutiérrez and Keith 2019). For our work, we have used Word2Vec (Mikolov et al. 2013a), FastText (Bojanowski et al. 2016) and GloVe (Pennington et al. 2014), which are unsupervised approaches based on the hypothesis that words which occur in the same contexts tend to have similar meanings. Using this approach for our work, we are able to measure the distance between words/vectors within a context, since the embedding contains the context information of the data used to build it. The technique we present in this paper could be compared, in a certain way, with a text mining analysis based on an exploratory study where word counting and word clouds could be proposed for a semantic analysis, with the word with the highest frequency considered the most relevant. However, for a study involving corpora of millions of words, such a task would become too complex to reach relevant conclusions in terms of identifying bias. Besides, we have chosen to perform a vectorial study of the numerical representations of the embedding context because it offers better explainability, which is required for all approaches based on machine learning models.

Pre-trained embeddings
We have detected and measured bias against the poor in pre-trained word embeddings, which are trained on large datasets and constitute an appropriate and available option to measure the distance between the target terms and attributes of the study. In future studies, we aim at training our own embedding, which will allow us to ensure the quality of the data involved and to have more control over the amount of context being compared, providing the possibility, for example, to look for bias against the poor not only using term associations, but also sentence associations, which would contribute to solving the polysemy caveat of the terms "rich" and "poor" identified in this study.
We have obtained results from three different embeddings (Google News Word2Vec, Wikipedia GloVe and Twitter GloVe). We have then compared the results obtained, reaching conclusions about the common trends among the three datasets as regards bias against the poor and also about the specificities of this phenomenon in each embedding.

• Google News Word2Vec pre-trained embedding
The Google News 300 word embedding is a pre-trained model of word representation as vectors, using 300 features or coordinates in a 300-dimensional system. This model was trained on a Google News database (about 100 billion words). A representation of more than 3 million words and phrases was obtained. The base algorithm used for the creation of this embedding was proposed by Mikolov et al. (2013). The size of the resulting model is 1.3 GB.

• Wikipedia GloVe pre-trained embedding
The Wikipedia GloVe word embedding is a pre-trained word representation model, using the GloVe technique based on the global co-occurrence matrix between words. The training corpus is a dataset of Wikipedia publications. The Wikipedia corpus contains about 2 billion words of text from about 4.4 million Wikipedia pages consolidated up to 2014. Additionally, it contains the Gigaword 5 dataset, a comprehensive collection of news text data that has been acquired over several years by the Linguistic Data Consortium (LDC) and contains 4 billion words. The resulting word representation model contains 6 billion tokens and 400 thousand vocabulary words, and was trained with all words uncased. There are four versions of the trained embeddings with different vector dimensions: 50, 100, 200 and 300. The size of the resulting model is 822 MB.

• Twitter GloVe pre-trained embedding
The Twitter GloVe word embedding is a pre-trained word representation model using the GloVe technique based on the global co-occurrence matrix between words. The training corpus is a dataset of tweets extracted from the Twitter social network. For the construction of the model, 2 billion tweets written in English were taken. The resulting model contains 27 billion tokens and 1.2 million vocabulary words, and was trained with all words uncased. For this word representation model, there are 25-, 50-, 100- and 200-dimensional versions. The size of the resulting model is 1.42 GB.

Methods
The following diagram (Fig. 1) illustrates the proposed solution to detect and measure bias against the poor using the key terms "rich" and "poor", 262 "favourable" and "unfavourable" attributes, and vector word representations to measure semantic proximity using the cosine distance in pre-trained word embeddings (Google News Word2Vec, Wikipedia GloVe and Twitter GloVe). We have also tested the model using "neutral" attributes. We are fully aware of the limitations attached also to the use of some of these attributes, in particular those that work both as nouns and adjectives. For this reason, a rich array of expressions was chosen.

Semantic analysis of words based on vector distances
The basis of this work is semantic analysis based on distance. To get reliable information about the relationship between words, we have decided to use the cosine distance, since this numeric metric preserves the relative direction of two vectors inside the vector space (in our case, the meaning direction between words).

Cosine distance between words
The cosine of the angle indicates directly proportional similarity between two word vectors.
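For orientation, a minimal sketch of how these three models can be loaded and queried with gensim (our illustration, not the authors' code): the gensim-data identifiers below are the publicly distributed models matching the descriptions above, with the 200-dimensional Twitter variant picked as an assumption, and the downloads run to hundreds of megabytes or more.

import gensim.downloader as api

# Publicly distributed pre-trained models matching the descriptions above.
MODELS = {
    "google_news_word2vec": "word2vec-google-news-300",
    "wikipedia_glove": "glove-wiki-gigaword-300",
    "twitter_glove": "glove-twitter-200",
}

# Each entry loads as a gensim KeyedVectors object.
embeddings = {name: api.load(key) for name, key in MODELS.items()}

# Cosine similarity of one candidate attribute to the two target terms.
for name, kv in embeddings.items():
    print(name,
          round(kv.similarity("mediocre", "poor"), 3),   # cosine in [-1, 1]
          round(kv.similarity("mediocre", "rich"), 3))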
As the metric increases, it indicates that there is greater similarity between the words. Mathematically, the similarity between vectors is defined as the cosine of the angle between them, so the closer the angle between the vectors is to zero, the more similar they are. The cosine of the angle is defined in Eq. (1):

\cos(\theta) = \frac{\vec{u} \cdot \vec{v}}{\lVert \vec{u} \rVert \, \lVert \vec{v} \rVert}    (1)

Thus, the cosine of the angle is defined as the dot product divided by the product of the norms.

Calculation of the dot product between words
The similarity metric based on the dot product between the word vectors is directly proportional to the scalar value resulting from the operation. However, this metric increases not only with the cosine of the angle between the vectors, but also with the length of the vectors, so it is necessary to take into account that the metric may be biased by the length of the word vectors. The dot product is defined as in Eq. (2):

\vec{u} \cdot \vec{v} = \sum_{i=1}^{n} u_i v_i    (2)

Semantic relations between target and attribute words based on cosine distance
262 records were built to capture the semantic relationships between the two target terms "rich" and "poor", taken literally, and the attribute words used as reference points to measure semantic similarity. It should be taken into account that the value obtained is a number between -1 and 1, since the cosine of an angle belongs to this interval. To carry out our study, we have applied the arc cosine function, presented in Eq. (3), to recover the original value of the angle in its natural magnitude, radians:

d(\vec{u}, \vec{v}) = \arccos\left( \frac{\vec{u} \cdot \vec{v}}{\lVert \vec{u} \rVert \, \lVert \vec{v} \rVert} \right)    (3)

Identifying logical relationships (analogies) in the same context (embedding)
A word embedding model can be evaluated on the basis of its performance in solving analogy questions. This task was first introduced by Mikolov et al. (2013) and consists of performing additive operations between word vectors. The following relation summarises the so-called "analogy relation" that exists between vector operations:

\vec{w}_{1} - \vec{w}_{2} \approx \vec{w}_{3} - \vec{w}_{4}

Based on the above, one can seek to predict the vector of one of the words by rearranging the equation:

\vec{w}_{2} \approx \vec{w}_{1} - \vec{w}_{3} + \vec{w}_{4}

The result of this equation is the vector of word2. In practice, cosine similarity is used to determine whether the closest word vector corresponds to the correct answer to the analogy. As a result, we can provide evidence of whether a word embedding model is able to maintain the semantic and syntactic relationships between words.

Results and discussion
The proximity was calculated between the different attributes and the target terms "poor" and "rich". In Table 1, a relative value of 1 indicates that the attribute is closer to "poor" than to "rich" in terms of cosine. Alternatively, relative distances can be calculated in radians, and then the results need to be read the other way round: the longer the distance, the weaker the association between the attributes and the categories of rich and poor. The main advantage of using radians is that we can calculate "distances of distances" (DD), evaluating the difference between how a certain attribute is associated with "poor" as compared to "rich", which allows a quantitative expression of the net bias effect, which we have named the "aporophobia bias indicator" (ABI). The ABI, therefore, constitutes an intrinsic and preliminary way to evaluate bias against the poor in pre-trained models for given attributes. We have named this model AWEAT (Aporophobia Word Embedding Association Test), since it is inspired by the WEAT (Word Embedding Association Test) by Caliskan et al. (2017).
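The pieces above assemble into a short routine. The sketch below is our reading of the definitions, not the authors' published code; in particular, the ABI sign convention (angular distance to "rich" minus angular distance to "poor", so that a positive value marks an attribute sitting closer to "poor") is an assumption consistent with the text.

import numpy as np

def cos_sim(u, v):
    # Eq. (1): dot product divided by the product of the norms.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def angular_distance(u, v):
    # Eq. (3): the arc cosine turns the similarity into an angle in radians;
    # clipping guards against floating-point drift outside [-1, 1].
    return np.arccos(np.clip(cos_sim(u, v), -1.0, 1.0))

def abi(kv, attribute, poor="poor", rich="rich"):
    # "Distance of distances": how much closer the attribute sits to "poor"
    # than to "rich", in radians.
    return angular_distance(kv[attribute], kv[rich]) - \
           angular_distance(kv[attribute], kv[poor])

# Analogy check from the same section: the vector closest to
# vec(word1) - vec(word3) + vec(word4) should answer the analogy question.
# gensim exposes this directly, e.g.:
# kv.most_similar(positive=["king", "woman"], negative=["man"], topn=1)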
The AWEAT allows us to order and classify the different attributes from higher to lower ABI for a given pre-trained embedding (Google News Word2Vec, Wikipedia GloVe or Twitter GloVe) and to find out which negative attributes imply higher bias, since they are more closely related to the term "poor" as opposed to the term "rich". If we consider that the lowest ABIs are around -0.14 and that the highest are around 0.5, we can split this interval into quartiles (following the standards of the Human Development Index). The cut-off points are: less than 0.02 for low bias, from 0.02 to 0.18 for medium bias, from 0.18 to 0.34 for high bias, and above 0.34 for very high bias against the poor. This classification is based on the current selection of attributes; should the attributes change, the classification should change accordingly. This ordering and classification bring meaningful information to the research, since attributes such as "antipathy", "hate speech" and "hate act" would be classified as low bias (in the sense of the level of association of these attributes with "poor" as compared to "rich" in the Google News Word2vec pre-trained embedding), whereas at the other extreme, attributes such as "mediocre", "dreadful" and "substandard" would be classified as very high bias. Therefore, we should distinguish here between the association (distance) and the gravity (seriousness) of a construct. In this analysis, we are not handling any evidence about the gravity of these attributes. Instead, our focus is on their degree of association (distance) with the poor in the characterisation of bias. For instance, as much as "substandard" seems to present the highest association with the term "poor", as shown in Table 1, it seems to be a relatively inconsequential attribute if compared to "hate acts" or "insults" in terms of their gravity. It is also interesting to analyse some of the attributes that were originally used by Cortina (2017) to see how they compare to each other in terms of ABI. Although Cortina used them quite indistinctly in her discussion, it is possible to see from Fig. 2 that some attributes, such as 'disgust', 'disregard' and 'fear', appear to be more closely associated with the term "poor" (meaning that there is a lower relative distance of that attribute in relation to the term "poor" than in relation to the term "rich") than others, such as 'antipathy' and 'aversion'. Our study, however, includes a wider range of negative expressions (other than those mentioned by Cortina) and this unveils a more complex reality. First, the range of attributes that are closely related to the term "poor" is much richer and more intense than the one originally used by Cortina. Figure 3 illustrates in blue the attributes used by Cortina and in black a sample of other attributes included in the study, following Allport's categorization of prejudices according to the degree of associated action (Table 2 in the Appendix). As a result of broadening the semantic scope and the number of attributes, we find that attributes that can be included under the categories of "beliefs" or "communication", such as "substandard", "mediocre" or "indifference", according to Allport (1954), have clearly higher ABIs (Table 2).
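Expressed as a small helper (using the cut-offs above, and keeping in mind that the bands are tied to the current selection of attributes, as the text notes):

def classify_abi(abi_value):
    # Quartile bands over the observed ABI range (roughly -0.14 to 0.5).
    if abi_value < 0.02:
        return "low bias"
    elif abi_value < 0.18:
        return "medium bias"
    elif abi_value <= 0.34:
        return "high bias"
    else:
        return "very high bias"

# e.g. classify_abi(abi(embeddings["google_news_word2vec"], "mediocre"))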
In contrast, attributes that have a stronger degree of action, such as "insult", "hate speech" or "hate act", which are associated with Allport's categories of "discrimination" and "physical attack", are more equidistant to the key terms "rich" and "poor" and therefore less closely associated with the poor.

[Table 1: Proximities and distances between unfavourable attributes and the key terms "poor" and "rich", and the ABI, in the Google News Word2vec pre-trained embedding. Columns: negative attribute; proximity to "poor" (cosine); proximity to "rich" (cosine). Source: authors' creation.]

When analysing the results for the favourable attributes (Table 3), two features are immediately evident from a first inspection. First, results for favourable attributes are not necessarily symmetric to unfavourable attributes (as expected, since the terms themselves are not completely symmetric). Second, some favourable attributes are more closely related to the term "poor" than to the term "rich", characterising elements that prima facie could be understood as positive bias towards the poor.

[Fig. 2: ABIs (difference in distance between how an attribute is associated with the term "poor" as compared to the term "rich") for unfavourable attributes used by Cortina (2017) in the Google News Word2vec pre-trained embedding. Source: authors' creation. OBS: these words have been used by Cortina (2017) and identified by Comim, Borsi and Valerio (2019).]

[Fig. 3: ABIs for unfavourable attributes in the Google News Word2vec pre-trained embedding; unfavourable attributes used by Cortina (2017) are shown in blue. Source: authors' creation.]

However, a close inspection reveals that the attributes "sympathy", "politeness", "pleasing", "goodwill", "cordiality" and "friendliness" are all compatible with a certain sense of subservience that can be expected from the poor, reinforcing a certain stereotype of inferiority. We can also verify that some words are relatively neutral towards the rich and the poor. On the other hand, the closer distances found between favourable attributes and "rich" reveal hedonist attributes related to attractiveness, pleasure, taste, etc., all part of the elements of 'distinction', as famously portrayed by Bourdieu (2010). This phenomenon could be evidence of plutofilia, or overestimation of the rich, which, according to Allport, is a previous step to aporophobia, since "one must first overestimate the things one loves before one can underestimate their contraries" (1954: 25). It is important to remark, however, that the Google News Word2vec pre-trained embedding is not the only informational basis that has been used for this assessment. Two additional embeddings, trained on different databases, are an integral part of the study, namely Twitter GloVe and Wikipedia GloVe. The coincidences between the three analysed embeddings provide robustness to the AWEAT model. Figures 4, 5 and 6 display the key results.

[Table 3: Proximities and distances between favourable attributes and the key terms "poor" and "rich", and the ABI, in the Google News Word2vec pre-trained embedding. Columns: favourable attribute; proximity to "poor" (cosine); proximity to "rich" (cosine). Source: authors' creation.]

In Fig. 4, positive results indicate that the ABI in Google News is larger than the ABI in the Twitter GloVe pre-trained embedding. On the other hand, negative results uncover those attributes whose ABIs are higher in Twitter.
In fact, by taking the difference between the ABIs in the different embeddings, we are calculating a comparative ABI (CABI), resulting from the use of different informational bases, and we are able to see which embedding includes higher bias for specific attributes. In Fig. 4, the evidence shows that for attributes related to Allport's category of "belief" (see Table 2 in the Appendix), such as "substandard", "mediocre" or "inferior", the CABIs are positive; that is, the bias against the poor is relatively higher in the Google News Word2Vec than in the Twitter GloVe pre-trained embedding. This finding was unexpected in the study, since most sources in Google News are journalists and professionals (Bolukbasi et al. 2016), as compared to Twitter. Although more evidence is needed, these preliminary results could suggest that news could show higher bias against the poor for the attributes that express beliefs. On the other hand, negative CABIs suggest that bias against the poor is higher in Twitter GloVe, as compared to Google News Word2Vec, when the attributes correspond to Allport's (1954) categories of "discrimination" or "physical attack" (see Table 2 in the Appendix), that is, for attributes such as "hate speech", "aversion", "rejection", "insult" and "contempt". We find a similar trend, although not as consistent, when comparing the ABIs of unfavourable attributes between the Google News Word2Vec and the Wikipedia GloVe pre-trained embeddings (Fig. 5), suggesting that there is a higher degree of bias against the poor in Google News for attributes that express beliefs. When comparing the Twitter GloVe and Wikipedia GloVe pre-trained embeddings (Fig. 6), bias expressed as actions under the categories of "discrimination" and even "physical attack" (Table 2 in the Appendix) appears to be higher in Twitter, whereas bias expressed as beliefs is higher in Wikipedia or equidistant in the two pre-trained embeddings. Finally, following Nadeem et al. (2020), we have calculated the distance between the key terms "rich" and "poor" and neutral attributes, using the names of plants, animals and planets, among other terms, to test the robustness of the AWEAT model. Although all terms show a bias (that is, they appear slightly closer to either "rich" or "poor"), only 4 "neutral" terms out of 166 show an ABI level in the order of the first decimal place. This proves, on the one hand, that we live in a market economy and therefore all terms have an economic association either with "rich" or "poor". On the other hand, since this association is much lower than for the "favourable" and "unfavourable" attributes used in the study, the test with "neutral" words validates the AWEAT model for evaluating bias against the poor in pre-trained embeddings by measuring the distances between "favourable" and "unfavourable" attributes associated with the poor as compared to the rich.
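Since the comparative indicator is just a difference of ABIs, it reduces to a one-line helper (reusing the abi function sketched earlier; the sign reading follows the text, with a positive value meaning higher bias in the first embedding):

def cabi(kv_first, kv_second, attribute):
    # Positive: bias against the poor is relatively higher in kv_first.
    return abi(kv_first, attribute) - abi(kv_second, attribute)

# e.g. cabi(embeddings["google_news_word2vec"], embeddings["twitter_glove"], "mediocre")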
Conclusion
This study offers a preliminary, disruptive contribution to the body of work on bias with the first set of empirical data evidencing the existence of bias against the poor within the three pre-trained word embeddings included in the study, namely Google Word2Vec, Twitter GloVe and Wikipedia GloVe. As a result, this paper empirically illustrates a transversal type of bias that has gone unnoticed, since it is an expression of fundamental shared values in welfare states: the belief in equal opportunity and in the individual responsibility to climb up the ladder. However, when this bias inevitably leads to discriminatory acts, it has serious consequences for the achievement of the first Sustainable Development Goal of the United Nations (no poverty). The article also provides evidence that there is a consistently higher degree of bias in Google News Word2Vec, as compared to the other two embeddings, when the attribute terms express beliefs, and a higher level of bias against the poor in Twitter GloVe when the terms express behaviour. These preliminary results could suggest that some news media express a higher level of bias against the poor than individuals in terms of expressed beliefs, whereas individuals show a higher level of bias expressed as behaviour (discrimination or physical attack), for the terms included in the study. AI systems act as a warning flag for inconspicuous prejudices expressed as bias, but they also contribute to spreading biased opinions that can eventually lead to discriminatory behaviours. Further studies should be carried out with a wider sample of target terms to mitigate the distorting effect of the polysemy of the selected terms "rich" and "poor". It should also be analysed why, even when not referring to socio-economic topics, "poor" has a negative connotation as compared to "rich". In addition, further studies could also include a wider list of attributes and pre-trained embeddings to obtain evidence on the impact of the bias against the poor on communities that are historically disempowered as a result of other factors, such as gender, race, nationality or religion, to name some examples. A comparative study of the bias against the poor in the Global North and the Global South would also be recommended, exploring the correlation of bias against the poor with poverty and inequality levels as well as cultural factors. A deeper analysis is also required to compare biases across different social network communication channels. Although it is not possible to make the world a better place only through algorithms, they can contribute to diagnosing and monitoring bias and discriminatory behaviours such as hate speech. This study, therefore, constitutes a first step towards taking action to mitigate pre-existing prejudices that can result in discriminatory actions. In addition, this work constitutes evidence of the need to oversee AI technologies, and of the opportunity that human-in-the-loop decision-making, agreement on pro-ethical development and the involvement of social science experts in analysing the roots of bias offer to turn AI tools not merely into autonomous reproducers (and often aggravators) of social inequalities, but into enablers of sustainable development.

Funding
Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. The research leading to these results received partial funding support from the Aristos Campus Mundus Project, promoted by the Universities of Ramon Llull, Deusto and Comillas, with the aim of fostering academic excellence. The authors have no relevant financial or non-financial interests to declare that are relevant to the content of this article.

Data availability
The datasets analysed during the current study are included in this published article. Supplementary information files generated for the study are available from the corresponding author on reasonable request.

Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
9,165.8
2022-06-28T00:00:00.000
[ "Computer Science", "Sociology", "Political Science" ]
Effective Removal of Pb(II) Ions by Electrospun PAN/Sago Lignin-Based Activated Carbon Nanofibers

Heavy metal pollution, such as lead, can cause contamination of water resources and harm human life. Many techniques have been explored and utilized to overcome this problem, with adsorption technology being the most common strategy for water treatment. In this study, carbon nanofibers, namely polyacrylonitrile (PAN)/sago lignin (SL) carbon nanofibers (PAN/SL CNF) and PAN/SL activated carbon nanofibers (PAN/SL ACNF), with diameters of approximately 300 nm, were produced by electrospinning blends of polyacrylonitrile and sago lignin followed by thermal and acid treatments, and were used as adsorbents for the removal of Pb(II) ions from aqueous solutions. The incorporation of biodegradable and renewable SL in the PAN/SL blend fibers produces CNF with a smaller diameter than PAN alone but preserves the structure of the CNF. The adsorption of Pb(II) ions on PAN/SL ACNF was three times higher than that on PAN/SL CNF. The enhanced removal was due to the nitric acid treatment, which resulted in the formation of surface oxygenated functional groups that promoted Pb(II) ion adsorption. The best-suited adsorption conditions, which gave the highest percentage removal of 67% with an adsorption capacity of 524 mg/g, were 40 mg of adsorbent dosage, 125 ppm of Pb(II) solution, pH 5, and a contact time of 240 min. The adsorption data fitted the Langmuir isotherm and the pseudo-second-order kinetic models, indicating that the adsorption is monolayer and is governed by the availability of the adsorption sites. With an adsorption capacity of 588 mg/g, determined via the Langmuir isotherm model, the study demonstrated the potential of PAN/SL ACNFs as an adsorbent for the removal of Pb(II) ions from aqueous solution.

Introduction
Heavy metals can contribute to environmental problems, which could have an impact on humans due to their toxicity. Heavy metal pollution has implications for aquatic life and natural water bodies, and the metals can possibly become trapped in the soil through bioaccumulation [1]. Their high solubility in the aquatic environment allows them to be absorbed by living organisms, and large concentrations of heavy metals might accumulate in the human body once they enter our food chain, causing severe health disorders [2]. Lead has been utilized as an industrial raw material for battery manufacturing, printing, pigments, fuels, photographic documents, and dyeing [3]. Lead is the metal considered to be of the highest environmental concern [4,5]. A few methods have been developed for the removal of lead from wastewater, such as electrochemical treatment, ion exchange, coagulation, reverse osmosis, and adsorption.

Electrospun PAN/SL Nanofibers
Electrospun PAN/SL fibers were prepared by blending PAN and sago lignin (SL), with the ratio of SL to PAN varied from 10% to 50%, using DMF as a solvent. Figure 2 shows the FTIR spectra of SL, PAN and PAN/SL nanofibers at different PAN:SL ratios. In the IR spectra of PAN nanofibers, the vibrations of the aliphatic -CH groups (-CH, -CH2, and -CH3) are observed at around 2900 cm−1, 1450 cm−1, 1350 cm−1, and 1220 cm−1, respectively. The peak at 2200 cm−1 can be assigned to the nitrile group, C≡N [26]. The C=O carbonyl peak around 1640 cm−1 is due to the C=O group from the trace of DMF solvent that did not evaporate during the drying process [27]. For sago lignin, the broad peak at 3387 cm−1 is assigned to the -OH stretching of either alcoholic or phenolic hydroxyl groups of sago lignin. The peak at 1617.76 cm−1 represents the C=C stretching of the benzene skeleton [28]. The peak at 1460 cm−1 is attributed to C-H bending vibration in methyl groups, while the peak at 1069 cm−1 is assigned to C-O deformation in secondary and primary alcohols or aliphatic ethers [29]. The PAN/SL nanofibers show peaks from both sago lignin and PAN. The aliphatic -CH groups of PAN appear at 2930 cm−1 and the absorption peaks between 2200 and 2100 cm−1 are assigned to the C≡N nitrile group. At around 1100 cm−1, a peak appears which can be assigned to C-O deformation of the benzene ring in the lignin structure, while the peaks between 780 and 800 cm−1 are contributed by the bending of the C-H groups of lignin's benzene ring. Due to the lack of chain structures and/or molecular entanglements, lignin alone could not be electrospun into nanofibers [30,31]. Therefore, lignin was blended with a high molecular weight polymer, PAN, in order for the lignin to be electrospun. The ratio of SL was varied from 10 wt% to 50 wt%. Figure 3 shows SEM micrographs of PAN/SL nanofibers electrospun at different PAN/SL ratios: 50:50, 60:40, 70:30, 80:20 and 90:10. The average diameter of PAN/SL at a ratio of 50:50 was 154 ± 24 nm; at 60:40 it was 366 ± 13 nm; at 70:30, 374 ± 38 nm; at 80:20, 402 ± 16 nm; and at 90:10, 451 ± 21 nm. From the SEM micrographs, the electrospun PAN/SL nanofibers of 50:50 ratio show bead formation and breakage of fibers. For the 60:40 ratio of PAN/SL, the surface of the fibers appeared uneven with the occasional appearance of beads, while the 70:30, 80:20 and 90:10 ratios had smooth and uniform fiber morphology. However, the 70:30 ratio of the PAN/SL blend solution was chosen as the optimum concentration because its average fiber diameter was smaller compared to the 80:20 and 90:10 ratios.
The bead-free nanofibers also pack more tightly during the electrospinning process compared to the 50:50 and 60:40 ratios, and this would give the fibers a smaller pore size [25]. From here onwards, PAN/SL nanofibers of the 70:30 ratio will be denoted simply as PAN/SL nanofibers.

Spectroscopic Study of the Nanofibers after Thermal Treatment (Stabilization and Carbonization Process)
Figure 4 shows the FT-IR spectra of PAN and PAN/SL nanofibers after thermal stabilization and carbonization. For PAN and PAN/SL nanofibers after stabilization, the peak of the C≡N nitrile group of PAN shifted to a higher wavenumber, from 2200-2100 cm−1 to 2400 cm−1, indicating a conversion of C≡N into C=N as a result of cyclization and cross-linking of PAN into a ladder-like structure during stabilization.
The absorption peak around 1570 cm−1 belonged to a mix of C=N, C=C, and N-H groups, and the appearance of C=C was due to dehydrogenation [32]. After carbonization, the C≡N nitrile group disappeared because, at the higher carbonization temperature, nitrogen-containing groups were eliminated as gases such as NH3, HCN, and N2. The peak between 1680 cm−1 and 1530 cm−1 was assigned to the stretching and bending of C=C groups, while the absorption peak around 1100 cm−1 was due to C-H deformations [33].

Morphology Study of PAN Nanofibers and PAN/SL Nanofibers after Stabilization and Carbonization Process
PAN nanofibers and PAN/SL nanofibers underwent stabilization and carbonization and will be denoted as PAN CNFs and PAN/SL CNFs. In Figure 5a,b, the appearance of the nanofibers remained intact and no melting of fibers was observed after stabilization. The stabilization process prior to carbonization is an important process that influences the morphologies and the mechanical properties of the final carbon nanofibers [34]. For PAN nanofibers, after stabilization (Figure 5a), the average diameter of the fibers was 377 ± 6 nm, and after further thermal treatment at a higher temperature (carbonization, Figure 5b), the average diameter decreased significantly to 368 ± 8 nm. Overall, the average diameter of the electrospun PAN fibers was reduced by almost 1.5 times after carbonization. The reduction of the average diameter after these two processes (stabilization and carbonization) might be due to the liberation of gases such as CO2, CO, NH3 and HCN and the densification of carbon atoms in the polymer chains during thermal treatment [35,36]. Figure 5c,d illustrates the SEM micrographs for PAN/SL nanofibers after stabilization and carbonization, respectively. The average fiber diameter of the PAN/SL nanofibers was reduced to 371 ± 5 nm after stabilization and further reduced to 326 ± 5 nm after carbonization. The average fiber diameter of the PAN/SL CNFs was smaller than that of the PAN CNFs. This is because blending PAN with sago lignin decreases the viscosity of the polymer solution, due to the smaller molecular weight of lignin; this, together with the removal of hydrogen and oxygen atoms from the SL structure during carbonization [30], may contribute to the significant reduction of the fiber diameter of the PAN/SL CNFs compared to the PAN CNFs. Figure 5e represents the SEM image of the PAN/SL activated CNF (ACNF). The average fiber diameter of the ACNF is 323 nm.
The SEM images reveal that after the carbonization and activation processes, the basic nanofibrous structure of PAN/SL remained intact but the fiber diameter shrank significantly.

Carbon Yield
The determination of carbon yield was conducted using TGA under an inert atmosphere, as previously reported by Thunga et al. [37]. The carbon yields of PAN CNFs and PAN/SL CNFs were obtained from the residual wt% at 1000 °C in the TGA thermogram and were 55.3% and 66.51%, respectively. After blending with sago lignin, the carbon yield increased by 11.21%, which indicates that the aromatic structure of lignin contributed to the increase in carbon yield. Thunga et al. [37] reported a similar result, in which the carbon yield increased after PLA was blended with butyrated lignin. At temperatures above 500 °C, amorphous carbon is formed and most of the benzene rings of lignin are transformed into aromatic structures, increasing the densification of carbon atoms, which leads to the increase in carbon yield [35,38]. PAN/SL and PAN (as nanofibers, after stabilization and after carbonization) were further characterized using XRD. The results and discussion of the XRD characterization are included in the Supplementary Materials.
Spectroscopic Study of PAN/SL ACNFs
After the carbonization process, the PAN/SL CNFs were subjected to an activation step to produce activated PAN/SL CNFs (PAN/SL ACNFs). This step modified the surface of the PAN/SL CNFs using concentrated nitric acid. Figure 6 shows the IR spectra of (a) PAN/SL CNFs and (b) PAN/SL ACNFs. A hydroxyl band appeared at 3200 cm−1 for the PAN/SL ACNFs and was attributed to the presence of more -OH groups, probably due to phenolic groups, after modification with HNO3 [39]. The peaks at 1559 cm−1 and 1547 cm−1 observed for the PAN/SL ACNFs and CNFs can be assigned to highly conjugated carbonyls (quinone groups) and C=C stretching vibrations of aromatic rings [40,41]. This peak demonstrates that the C=C carbon backbone of the CNFs was not affected even after modification with nitric acid. A new peak observed for the PAN/SL ACNFs at 1374 cm−1 belongs to symmetric NO2 stretching vibrations, which indicates that NO2 groups were successfully introduced on the surface of the carbon fibers upon HNO3 treatment [42]. The peaks at 1069 cm−1 and 1103 cm−1 were assigned to C-O deformations, observed for both the PAN/SL ACNFs and CNFs.
XPS Analysis of PAN/SL ACNFs

The activation of the PAN/SL CNFs with nitric acid was further characterized using XPS analysis to study the chemical structure and elemental composition of the surface [26]. The C1s, O1s, and N1s spectra are shown in Figure 8. The C1s binding energy (BE) of PAN/SL ACNFs is composed of three components assigned to carbon atoms of the polymer chain. The C1s BE located at 283.6 eV was assigned to C-C and C=C bonds of a graphitic structure [46]. This band is not typical of carbon in a graphitic structure, as it is broadened toward lower BE, which might be due to a significant contribution of C-H bonding in the lignin [46]. The peaks at 285.6 and 288.8 eV are due to C-O-C (carbon in phenolic, alcohol, or ether groups) and O-C=O, respectively, suggesting the presence of a variety of oxygenated functional groups in the ACNFs [39]. The XPS O1s spectrum of PAN/SL ACNFs consists of three BE bands. The band at 529.7 eV was assigned to C=O groups, while the band at 532.9 eV was due to oxygen in phenolic groups, C-O-H, which is common in the lignin structure [46]. The BE band at 536.2 eV was caused by adsorbed water and/or oxygen [39]. On the other hand, the XPS N1s spectrum consists of only one band, at 397.1 eV, assigned to pyridinic N, which results from the nitric acid treatment [47].
Figure 9 shows the percentage removal of Pb(II) ions for PAN/SL CNFs and PAN/SL ACNFs. Although the PAN/SL CNFs have a higher surface area, the adsorption of Pb(II) ions on PAN/SL ACNFs (63%) is three times higher than that on PAN/SL CNFs (19%). The high percentage removal of Pb(II) ions by PAN/SL ACNFs is due to the presence of the oxygenated surface groups generated after the activation of PAN/SL with HNO3, as evidenced by the FTIR and XPS analyses. The attachment of surface oxygen groups decreases the hydrophobicity of the adsorbent surface, thus improving the interaction with Pb(II) ions and promoting Pb(II) ion adsorption on the PAN/SL ACNFs [44].

Effect of Contact Time

Since the time taken for the adsorption process to reach equilibrium is of considerable significance in developing economical heavy-metal adsorbents, the effect of contact time on the percentage removal of Pb(II) ions by PAN/SL ACNFs was investigated. As depicted in Figure 10, the percentage removal of Pb(II) ions increased rapidly up to 120 min; the adsorption rate then decreased gradually until equilibrium was reached at 240 min, where 76% of the Pb(II) ions had been removed. The rapid adsorption at the early stage was due to the availability of a large number of vacant adsorption sites, which allowed the adsorbate to accumulate quickly on the carbon nanofiber surface. The gradual slowdown of the adsorption at the later stage was due to repulsive forces between the adsorbed Pb(II) ions and the Pb(II) ions in solution as the surface became saturated and reached equilibrium [48].
Effect of Adsorbent Dosage

The effect of adsorbent dosage on the percentage removal of Pb(II) ions and on the adsorption capacity, q, is illustrated in Figure 11. The percentage removal of Pb(II) ions increased from 63% to 95% with increasing adsorbent dosage. As the adsorbent dosage increases, the number of adsorption sites also increases, leading to a higher percentage removal of Pb(II) ions. The adsorption capacity, q, however, decreased with increasing adsorbent dosage. The decrease is due to the increasing number of unoccupied adsorption sites, as the ratio of Pb(II) ions to adsorption sites decreases with increasing adsorbent dosage [49]. Based on both the percentage removal and the q values, the best adsorbent dosage for the adsorption study is 40 mg.

Figure 11. Effect of adsorbent dosage on the percentage removal and adsorption capacity, q. Condition: [Pb(II)] = 100 mg/L, pH = 5, contact time = 120 min.
Figure 12 shows the effect of various initial Pb(II) concentrations on the percentage removal by PAN/SL ACNFs. As the concentration of Pb(II) ions increases, the percentage removal decreases while the adsorption capacity increases. With a fixed mass of adsorbent, the ratio of adsorption sites to Pb(II) ions changes as the initial Pb(II) concentration increases. At a low Pb(II) concentration, the surface area and the availability of adsorption sites were relatively high, and Pb(II) ions were easily adsorbed and removed from the solution. At a higher Pb(II) concentration, the total available adsorption sites were limited, resulting in a decrease in the percentage removal of Pb(II) ions. At a higher Pb(II) concentration, however, a greater driving force is available to overcome the mass-transfer resistance of Pb(II) ions between the aqueous and solid phases, which leads to an increase in adsorption capacity. Above 125 mg/L, the change in adsorption capacity was relatively small; thus, 125 mg/L was chosen for the other optimization parameters.

Effect of pH

The adsorption of Pb(II) ions on PAN/SL ACNFs at different pH values is affected by the surface properties of the adsorbent and by the speciation of the metal ions. The pHpzc of the adsorbent, as determined using the pH drift method, was 3.8, which indicates that the surface of the adsorbent is positively charged at pH below 3.8 and negatively charged at pH above 3.8. Pb(II) ions are known to precipitate as Pb(OH)2 at pH above 7. The effect of the initial solution pH, studied over a pH range of 1 to 7, on the removal of Pb(II) ions is shown in Figure 13. The percentage removal of Pb(II) ions was low under very acidic conditions (pH 1-3) due to electrostatic repulsion between the Pb(II) ions and the positively charged surface of the adsorbent. In addition, competition for the adsorption sites between the abundantly available H+ ions and the Pb(II) ions can also reduce the percentage removal. The removal of Pb(II) is more significant in the pH range of 4 to 7, with the highest removal of 67% observed at pH 5-6. The enhancement of the percentage removal was due to the electrostatic attraction between the Pb(II) ions and the negatively charged surface of the adsorbent.
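To make the pH drift determination of pHpzc concrete, the sketch below locates the crossing of the final-vs-initial pH curve with the identity line by linear interpolation; the drift data are invented for illustration (the procedure itself is detailed in the Methods).

```python
import numpy as np

# Hypothetical pH drift data: initial pH vs. final pH after 24 h with the adsorbent
pH_init = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
pH_final = np.array([1.2, 2.4, 3.4, 3.9, 4.6, 5.4, 6.3])

# pHpzc = crossing of the curve with the pH_final = pH_init line,
# i.e. the sign change of (pH_final - pH_init), located by linear interpolation
diff = pH_final - pH_init
idx = np.where(np.sign(diff[:-1]) != np.sign(diff[1:]))[0][0]
x0, x1, d0, d1 = pH_init[idx], pH_init[idx + 1], diff[idx], diff[idx + 1]
pHpzc = x0 + (x1 - x0) * (-d0) / (d1 - d0)
print(f"pHpzc ~ {pHpzc:.1f}")   # -> 3.8 for these invented values
```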
Adsorption Kinetics and Isotherm

The adsorption mechanism of Pb(II) was investigated by fitting the experimental adsorption data to several kinetic models, namely the pseudo-first-order, pseudo-second-order, and intra-particle diffusion models. The linearized forms of the pseudo-first-order and pseudo-second-order models are shown in Equations (1) and (2), respectively:

ln(qe − qt) = ln qe − k1 t (1)

t/qt = 1/(k2 qe^2) + t/qe (2)

where k1 and k2 are the rate constants, and qt and qe are the amounts of Pb(II) adsorbed at time t and at equilibrium, respectively. Table 2 summarizes the adsorption capacities qe, the rate constants, and the correlation coefficient (R^2) values at different initial concentrations of the Pb(II) solution. The R^2 value of the pseudo-second-order model is closer to unity. Moreover, the experimental qe is comparable to the calculated qe of the pseudo-second-order model. These results show that the kinetics of Pb(II) adsorption onto PAN/SL ACNFs are well described by the pseudo-second-order model, suggesting that the adsorption process is governed by the availability of the adsorption sites rather than by the concentration of the Pb(II) ions [50]. As shown in Table 2, the second-order rate constant, k2, increased with increasing Pb(II) concentration up to 125 mg/L, before decreasing with further increases in the initial Pb(II) concentration. The initial increase of k2 may be due to the increased driving force for mass transfer at higher Pb(II) concentrations. At still higher Pb(II) concentrations, however, although qe increased, k2 decreased. This decrease is likely due to competitive adsorption among the highly concentrated Pb(II) ions for the limited adsorption sites and to the diffusion of metal ions from the surface to the interior adsorption sites of the adsorbent. The contribution of diffusion to the adsorption mechanism was analyzed using the intra-particle diffusion model, whose linearized form is expressed in Equation (3):

qt = kid t^(1/2) + C (3)

where kid is the intra-particle diffusion rate constant (mg/(g·min^(1/2))) and C is the intercept, which is related to the boundary layer thickness. Figure 14 illustrates the intra-particle diffusion multilinear plot for the adsorption of Pb(II) ions onto PAN/SL ACNFs at different initial Pb(II) concentrations, suggesting a multistep adsorption process. The first step is due to the adsorption of Pb(II) on the external surface of the adsorbent via boundary layer diffusion [51]. The second step is the gradual adsorption stage, which can be related to the diffusion of the Pb(II) ions into the pores of the adsorbent, where intra-particle diffusion is the rate-controlling step [52]. The intra-particle diffusion rate, kid (Table 3), at Pb(II) concentrations above 125 ppm is much higher than that below 125 ppm. This observation corresponds to the greater driving force for mass transfer at the higher Pb(II) concentrations, which in turn leads to an increase in the qe value.
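As a hedged illustration of how Equations (1) and (2) are applied, the sketch below fits both linearized models to a hypothetical uptake curve and compares the R^2 values; the data points are invented for demonstration and this is not the authors' actual analysis.

```python
import numpy as np

# Hypothetical uptake data: time (min) and adsorbed amount qt (mg/g)
t = np.array([10, 30, 60, 120, 180, 240], dtype=float)
qt = np.array([150, 290, 390, 460, 485, 495], dtype=float)
qe_exp = 500.0  # assumed experimental equilibrium capacity

def r2(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Pseudo-first-order, Eq. (1): ln(qe - qt) = ln qe - k1*t
mask = qt < qe_exp
y1 = np.log(qe_exp - qt[mask])
s1, i1 = np.polyfit(t[mask], y1, 1)
k1, qe_1 = -s1, np.exp(i1)
print(f"PFO: k1={k1:.4f} 1/min, qe={qe_1:.0f} mg/g, R2={r2(y1, s1*t[mask]+i1):.4f}")

# Pseudo-second-order, Eq. (2): t/qt = 1/(k2*qe^2) + t/qe
y2 = t / qt
s2, i2 = np.polyfit(t, y2, 1)
qe_2 = 1.0 / s2
k2 = 1.0 / (i2 * qe_2 ** 2)
print(f"PSO: k2={k2:.5f} g/(mg*min), qe={qe_2:.0f} mg/g, R2={r2(y2, s2*t+i2):.4f}")
```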
The adsorption isotherm is essential for an accurate prediction of the adsorption parameters and for the quantitative analysis of the adsorption behavior of various adsorbent systems and conditions [53]. In this study, the Langmuir and Freundlich isotherm models were applied to investigate the interaction between the adsorbate and the adsorbent. The linearized forms of the Langmuir and Freundlich models are shown in Equations (4) and (5), respectively:

Ce/qe = 1/(KL qm) + Ce/qm (4)

log qe = log KF + (1/n) log Ce (5)

where Ce (mg/L) and qe (mg/g) are the concentration of metal ions and the amount of metal ions adsorbed at equilibrium, respectively, while qm (mg/g) is the maximum adsorption capacity. KL (L/mg) is the Langmuir constant related to the affinity of the binding sites, whereas n and KF are Freundlich constants related to the adsorption intensity and the adsorption capacity, respectively. As shown in Table 4, the R^2 values for both the Langmuir and Freundlich isotherm models are near unity, indicating that the adsorption of Pb(II) ions on PAN/SL ACNFs fits both adsorption models. However, the calculated qe values of the pseudo-second-order kinetics at Pb(II) concentrations of 150 and 175 ppm (Table 2) are the same as the calculated qm value of the Langmuir isotherm. This similarity indicates that monolayer adsorption of Pb(II) ions took place on the surface of the adsorbent. Hence, the adsorption process can be said to follow the Langmuir isotherm model, with a qm of 588.24 mg/g and a KL of 0.2537 L/mg. The adsorption of Pb(II) by PAN/SL ACNFs was compared to that of other carbon-based adsorbents reported in the literature, and the results are shown in Table 5. The maximum adsorption capacity of PAN/SL ACNFs demonstrates that they have a higher efficiency for the adsorption of Pb(II) from aqueous solution than the other carbon-based adsorbents listed in the table.
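A minimal sketch of how Equations (4) and (5) yield the isotherm constants from equilibrium data; the (Ce, qe) pairs below are hypothetical and chosen only to land near the paper's order of magnitude.

```python
import numpy as np

# Hypothetical equilibrium data: Ce (mg/L) vs qe (mg/g)
Ce = np.array([5.0, 15.0, 30.0, 60.0, 90.0])
qe = np.array([330.0, 460.0, 520.0, 555.0, 570.0])

# Langmuir, Eq. (4): Ce/qe = 1/(KL*qm) + Ce/qm
sL, iL = np.polyfit(Ce, Ce / qe, 1)
qm = 1.0 / sL
KL = 1.0 / (iL * qm)
print(f"Langmuir:   qm = {qm:.1f} mg/g, KL = {KL:.3f} L/mg")

# Freundlich, Eq. (5): log qe = log KF + (1/n) log Ce
sF, iF = np.polyfit(np.log10(Ce), np.log10(qe), 1)
n = 1.0 / sF
KF = 10.0 ** iF
print(f"Freundlich: KF = {KF:.1f}, n = {n:.2f}")
```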
Preparation of Electrospun PAN and PAN/SL Nanofibers

The isolation of sago lignin from sago waste was conducted in accordance with Schwanninger and Hinterstoisser [55], with a slight modification. For the preparation of electrospun PAN fibers, PAN solutions of various concentrations (2.5, 5, 7.5, and 10% (wt/v)) were prepared by dissolving PAN in DMF as the solvent. The solutions were stirred at room temperature until homogeneous solutions were obtained. Electrospun PAN/SL fibers were prepared by blending PAN with SL, with the lignin-to-PAN ratio varied from 10% to 50%, using DMF as the solvent. The polymer blend was stirred at room temperature until the polymers were completely dissolved. For the electrospinning process, the polymer solution was loaded into a 5 mL syringe fitted with a needle of 0.8 mm inner diameter. The distance between the needle tip and the aluminum-foil collector was fixed at 10 cm. Electrospinning was conducted at a voltage of 18 kV with a flow rate of 2 mL/h. The nanofibers formed on the collector were collected and dried in an oven overnight.

Preparation of Carbon Nanofibers

PAN nanofibers and PAN/SL nanofibers underwent two steps of thermal treatment to transform them into CNFs. The electrospun nanofibers were thermally stabilized at 250 °C for 1 h in air. The samples were then carbonized in a tube furnace at 1000 °C for 1 h under nitrogen gas flow. The carbon nanofibers produced from the PAN nanofibers and the PAN/SL nanofibers are denoted as PAN CNFs and PAN/SL CNFs, respectively. The PAN/SL CNFs were then activated, to produce PAN/SL ACNFs, according to Ihsanullah et al. [56], with some minor modifications. The activation was performed to decrease the hydrophobicity of the carbon adsorbent. Three hundred milliliters of 69% nitric acid was added to 2 g of PAN/SL CNFs, and the mixture was heated to 80 °C and refluxed at 120 °C for 48 h in a fume hood. Once it had cooled to room temperature, about 200 mL of deionized water was added to the mixture. The PAN/SL ACNFs were recovered by filtration and washed with deionized water until the pH of the filtrate was neutral. The activated adsorbent was dried in an oven at 100 °C overnight. All the prepared samples were then characterized.

Characterization of PAN/SL ACNFs

The morphology of the nanofibers was imaged using scanning electron microscopy, SEM (JEOL JSM 6400, Tokyo, Japan), and the average fiber diameter was determined using ImageJ software (available at https://imagej.net/Downloads) with 200 readings per sample.
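Assuming the diameters reported earlier as mean ± uncertainty are standard errors of the mean over the 200 ImageJ readings (an assumption, since the text does not specify), a minimal post-processing sketch with invented measurements would be:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for 200 fiber-diameter readings (nm) exported from ImageJ
diameters = rng.normal(loc=377.0, scale=85.0, size=200)

mean = diameters.mean()
sem = diameters.std(ddof=1) / np.sqrt(diameters.size)  # standard error of the mean
print(f"average fiber diameter: {mean:.0f} +/- {sem:.0f} nm (n = {diameters.size})")
```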
The infrared spectra of the samples, before and after activation, were recorded in the range of 4000 to 280 cm−1 on an attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectrometer (Perkin Elmer Spectrum RXI, Waltham, MA, USA) to determine the functional groups of the samples. Thermogravimetric analysis (TGA; Mettler Toledo Model TGA/SDTA) was used to determine the thermal stability and carbon yield of the CNFs. The samples were heated from 35 °C to 1000 °C at a heating rate of 10 °C/min under a nitrogen atmosphere, and the weight loss as a function of temperature was examined. X-ray diffraction (XRD) is an ideal method for the characterization and identification of the crystalline phases of the nanofibers and CNFs. The powder was placed in a specific mould, and the diffraction patterns were generated with Cu Kα radiation. The analysis was carried out on a Shimadzu Model XRD-6000 (Tokyo, Japan) over a 2θ range of 2° to 60° at a scanning rate of 2°/min. The surface area and porosity of the prepared carbon fibers were determined by N2 adsorption measurements at 77 K on a BELsorp Mini II. The surface area of the carbon fibers was obtained using the Brunauer-Emmett-Teller (BET) method, while the porosity (pore volume and average pore diameter) was evaluated using the Barrett-Joyner-Halenda (BJH) method. X-ray photoelectron spectroscopy (XPS) was carried out on a PHI 5000 ESCALAB MKII instrument (Chanhassen, MN, USA) with an Al Kα anode. XPS was used to examine the chemical composition of the ACNFs and the attachment of oxygenated functional groups. A monochromatic X-ray beam source at 1638.4 eV (aluminum anode) was used to scan the sample surface. All binding energies were referenced to the neutral C1s peak at 284.8 eV. The narrow-scan data were used to determine probable compounds for the C, O, and N peaks [42]. The surface charge of the adsorbent was evaluated by determining the pH point of zero charge (pHpzc), the pH at which the electrical charge density of the adsorbent surface is zero [57,58]. The pHpzc was determined using the pH drift method, with sodium chloride serving as an inert electrolyte [59]. Sodium chloride (0.1 M) was placed in a series of conical flasks, and the pH of the solutions was adjusted to 1-7 using 0.1 M HCl or 0.1 M NaOH. ACNFs (40 mg) were added to each flask and agitated for 24 h at room temperature. After 24 h, the equilibrated solutions were decanted, and the final stabilized pH was measured with a pH meter. The final pH was plotted against the initial pH, and the pH at which the curve crosses the pH(initial) = pH(final) line was taken as the pHpzc [60].

Batch Adsorption of Pb(II) Ions

The adsorption of Pb(II) ions by PAN CNFs and PAN/SL CNFs was studied using the batch adsorption method. A stock solution of Pb(II) ions (1000 ppm) was prepared by dissolving 1.5980 g of Pb(NO3)2 salt in deionized water in a 1000 mL volumetric flask. Different initial concentrations of Pb(II) ion solution were prepared by diluting the stock solution. The adsorption of Pb(II) was conducted by contacting 100 mg/L Pb(II) solution with 25 mg each of PAN CNFs and PAN/SL CNFs at pH 5. The experiments were conducted in 250 mL conical flasks, and the solution mixture was magnetically stirred at 200 rpm for 240 min at room temperature. At specified time intervals, aliquots of the solution mixture were collected and analyzed using an inductively coupled plasma-optical emission spectrometer (ICP-OES, Perkin Elmer Optima 2100 DV, Shelton, CT, USA). To optimize the adsorption process, the one-factor-at-a-time method was used. The four parameters varied in the optimization experiments were the adsorbent dosage (25-100 mg), the contact time, the initial concentration of the Pb(II) solution (75-175 mg/L), and the pH of the solution (1-7). The pH of the solution was adjusted to the desired value by the addition of either 1.0 M HCl or 1.0 M NaOH solution. The percentage removal of Pb(II) and the adsorption capacity were calculated using Equations (6) and (7), respectively:

Removal (%) = (C0 − Ct)/C0 × 100 (6)

q = (C0 − Ct) × V/m (7)

where q (mg/g) is the amount of Pb(II) adsorbed on the adsorbent, C0 (mg/L) is the initial concentration of Pb(II) ions, Ct (mg/L) is the concentration of Pb(II) in solution at time t, V (L) is the volume of the Pb(II) ion solution, and m (g) is the mass of adsorbent.
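A minimal sketch of Equations (6) and (7) as they would be applied to a measured time series; the concentrations below are hypothetical ICP-OES readings, not the study's data, and the volume is an assumed value.

```python
# Percentage removal, Eq. (6), and adsorption capacity q, Eq. (7).
C0 = 100.0      # initial Pb(II) concentration, mg/L
V = 0.1         # solution volume, L (assumed)
m = 0.025       # adsorbent mass, g (25 mg)

# Hypothetical ICP-OES concentrations Ct (mg/L) at sampling times (min)
series = {30: 70.0, 120: 40.0, 240: 24.0}

for t, Ct in series.items():
    removal = (C0 - Ct) / C0 * 100.0      # Eq. (6)
    q = (C0 - Ct) * V / m                 # Eq. (7), mg/g
    print(f"t = {t:>3} min: removal = {removal:4.1f} %, q = {q:6.1f} mg/g")
```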
Conclusions

In this study, PAN/SL ACNFs were successfully prepared by incorporating sago lignin into PAN via electrospinning, a simple and versatile technique. After activation with HNO3, the PAN/SL ACNFs showed a three-fold higher percentage removal of Pb(II), owing to the presence of surface oxygen groups. The best-suited adsorption conditions, which gave the highest percentage removal of 67% with an adsorption capacity of 524 mg/g, were an adsorbent dosage of 40 mg, a 125 ppm Pb(II) solution, pH 5, and a contact time of 240 min. The pseudo-second-order kinetic model fitted the experimental data well, indicating that the adsorption process was controlled by chemisorption. The intra-particle diffusion multilinear plot suggested that two steps were involved in the adsorption of Pb(II) ions. Moreover, the adsorption process can be said to follow the Langmuir isotherm model, with a qm of 588.24 mg/g. These results imply that PAN/SL ACNFs are an effective adsorbent for the removal of Pb(II) from aqueous solution and have the potential to be used for wastewater treatment.
Comment on amt-2022-5
Anonymous Referee #2
Referee comment on "Validation of StreamLine XR Doppler LiDAR wind observations using in-situ measurements and WRF simulations"

I have some issues with the novelty of the manuscript. As pointed out by the authors in the introduction, a major selling point of the manuscript is that the validation here has a wider range of reference instruments compared to previous validation exercises. However, on the other hand, the manuscript only considers one site and a narrow range of atmospheric conditions. Therefore, I am not fully convinced that the manuscript achieves its goal of providing a more general validation.

Line 27: "Emitted laser beam" would be more precise than "reference".
Line 28: Pearson (2009) and subsequent citations therein might be a better reference here, because they relate to the instrument used in this study. (Pearson, G., Davies, F., & Collier, C. (2009). An Analysis of the Performance of the UFAM Pulsed Doppler Lidar for Observing the Boundary Layer, Journal of Atmospheric and Oceanic Technology, 26(2), 240-250).
Line 28-30: The statement is too general. I recommend listing specific advantages over a particular measurement technique.
Line 31-34: The provided examples should be supported with references to the literature.
Line 39: More importantly, the mathematical manipulations require certain assumptions on the state of the atmospheric flow.
Line 43: Please provide references for the various studies mentioned.
Line 83: The site description could be more precise. What is the topography surrounding the site? Are there surface roughness elements like plants, trees, or buildings, and what is their approximate height (important to know if the measurements might be affected by the roughness sublayer)?
Line 105: Specify which frequency that is (e.g., the laser repetition frequency or the sampling frequency of the return signal).
Sections 2.2, 2.3 and 2.4: The information provided for the instruments other than the lidar is thin. What is the accuracy/precision of those instruments? Did the tethered balloon record its position and drift? Why the random sampling time for the tethered balloon? On which side of the tower were the instruments located (tower effects)? How often were the radiosondes launched?
Line 157-159: How many pulses were averaged for one estimate of the radial velocity?
Line 213-214: Doppler lidars are also able to observe the turbulence state from the variance/standard deviation of the velocity, which can provide information on atmospheric layers, too.
Line 230: Is there a significant trend between the average wind direction difference and the wind speed?
Fig 3a: I counted around 67 data points, which is far fewer than I would expect for a three-day period. Is the difference explained by the filtering criteria of the LiDAR SNR and the height deviation of the tethered balloon? If yes, then that should be explained in the text, stating how much data was rejected. Also, the variable names and the formatting of the minus sign in the equation of the linear regression could be improved.
Figure 3: Unless I missed it earlier, this is the first time that the duration of the measurement campaign is mentioned. That information should be provided in the main text, at the beginning of the results or in the methods.
Line 240-241: Is there also a difference between the two height levels? One would also expect to see larger differences closer to the surface.
If the authors can extract stability information from the measurements, the dependency of the errors on stability could also be interesting (as it affects the horizontal homogeneity). As it is, I believe there are not enough data points for both booms and too many for a single boom.
Line 350: While the wind direction seems to agree, the wind speed is overestimated quite a lot by the model (more than a factor of two at times, it seems). Therefore, I recommend providing objective error values instead of using "satisfactorily" to describe the agreement.
Figures 9 and 10: It might improve the clarity of the figures if the wind speed and wind direction information were separated into two panels.
Figure 11: Indicating the times of sunrise and sunset would help in following the discussion.
Line 407-408: The sentence should specify that the preference of 60° over 80° is for the extraction of wind direction and wind speed (because it might be different if one were to extract other quantities from the measurements). Also, the limitations of the study should be highlighted. Only one site is considered here, and the conditions only covered what is considered a radiation-driven diurnal cycle of the atmospheric boundary layer.

Technical corrections
Line 10: Capitalize "D" in Doppler. And I recommend checking the journal guidelines for the capitalization of "Lidar".
Line 68 and 70: Consider using "horizontal" instead of "lateral". "Lateral" is in some fields used to specify a velocity component perpendicular to the streamwise direction.
Line 74: I believe the abbreviation WRF was not introduced.
Line 90: Dayan and Ednizik (1999)
Line 96: I recommend spelling out "four" here (similar instances occur throughout the text later on, e.g., line 123).
Line 239 and 246: The "s" in "m/s" should be written with a power of -1.
Line 254: As it is a continued discussion of Table 2, I believe a new paragraph is not needed here.
Line 404: Replace "Out" with "Our".
Adsorption of Metal Atoms on SiC Monolayer

The electronic, magnetic, and optical behaviors of metals (M = Ag, Al, Au, Bi, Ca, Co, Cr, Cu, Fe, Ga, K, Li, Mn, Na, Ni) adsorbed on the SiC monolayer have been calculated based on density functional theory (DFT). The binding energy results show that all the M-adsorbed SiC systems are stable. Except for the Ni-adsorbed SiC system, all the M-adsorbed SiC systems are magnetic, with magnetic moments of 1.00 µB (Ag), 1.00 µB (Al), 1.00 µB (Au), 1.01 µB (Bi), 1.95 µB (Ca), 1.00 µB (Co), 4.26 µB (Cr), 1.00 µB (Cu), 2.00 µB (Fe), 1.00 µB (Ga), 0.99 µB (K), 1.00 µB (Li), 3.00 µB (Mn), and 1.00 µB (Na), respectively. The Ag, Al, Au, Cr, Cu, Fe, Ga, Mn, and Na-adsorbed SiC systems become magnetic semiconductors, while the Bi, Ca, Co, K, and Li-adsorbed SiC systems become semimetals. The Bader charge results show that there is a charge transfer between the metal atom and the SiC monolayer. The work function of the K-adsorbed SiC system is 2.43 eV, which is 47.9% lower than that of pristine SiC, so this system can be used in electron-emitter devices. The Bi, Ca, Ga, and Mn-adsorbed SiC systems show new absorption peaks in the visible light range. These results indicate that M-adsorbed SiC systems have potential applications in the fields of spintronic devices and solar energy conversion photovoltaic devices.

Introduction

Since the successful preparation of graphene [1], there has been a surge in research into two-dimensional (2D) materials, including 2D WS2 [2,3], GaN [4-6], BN [7,8], black phosphorus [9,10], ZnO [11,12], SiC [13,14], etc. SiC is a third-generation semiconductor material with a wide band gap, high electron saturation drift rate, high breakdown field strength, high thermal conductivity, high radiation resistance, etc. It has a wide range of applications in solar cells, high-frequency high-power devices, and high-temperature electronic devices. Two-dimensional SiC has the advantages of high electron mobility, chemical stability, and high catalytic activity, and is often used to make photocatalysts [15]. Based on first-principles approaches, 2D SiC has been predicted to have a graphene-like honeycomb structure and can exist stably as a semiconductor material with a band gap of 2.52-2.87 eV [16,17]. Chabi et al. have successfully prepared SiC nanosheets with an average thickness of 2-3 nm through a catalytic carbothermal reduction method and an ultrasonic pretreatment process [18]. Two-dimensional SiC has great potential in the field of nanoelectronic devices, but there are some problems in photocatalysis. Two-dimensional SiC responds only to part of the visible spectrum [19], so it is necessary to reduce the band gap and improve the absorption efficiency of visible light. Current methods to effectively modulate the band structure include doping [20,21], stacking [22,23], adsorption [24,25], heterojunctions [26-28], etc. The adsorption of metal atoms is one of the most important means of modulating the properties of 2D materials, since the adsorption of different atoms on the surface of 2D materials can tune their optical, electrical, and magnetic properties. Nie et al. have studied the adsorption of 3d transition metals on the SnO monolayer [29]. They found that 3d transition metal adsorption induced magnetism and achieved n-type and p-type doping. Guo et al. have studied the WSSe monolayer with adsorbed Fe, Co, and Ni atoms and developed its applications in gas sensors and single-atom catalysts [30]. Cui et al.
have studied the adsorption of transition metals on the Pd2Se3 monolayer [31]. They found that the adsorption of transition metals improved the light absorption in the ultraviolet, visible, and infrared regions. Xu et al. have predicted the magnetism of the SnSe2 monolayer after the adsorption of transition metals and found that the adsorption of Ti atoms can endow the SnSe2 monolayer with perpendicular magnetic anisotropy [32]. In this paper, the electronic structures, magnetic properties, and optical properties of 15 metal atoms adsorbed on the SiC monolayer have been calculated using the first-principles approach. The influence of the M atoms on the properties of the SiC monolayer is analyzed according to the band structure, work function, and light absorption spectra, and the application prospects of M-adsorbed SiC systems in the fields of spin devices and photovoltaic devices are explored.

Computational Details

The electronic, magnetic, and optical behaviors of M-adsorbed SiC systems have been investigated with the Vienna ab initio simulation package (VASP) [33,34] using density functional theory (DFT) [35,36]. The exchange-correlation interactions are described using the Perdew-Burke-Ernzerhof (PBE) form of the generalized gradient approximation (GGA) [37], and the electron-ion interactions are treated with the projector augmented-wave (PAW) approach [38]. Dispersion corrections are included through Grimme's DFT-D3 method [39]. The plane-wave cutoff energy is 400 eV, the Monkhorst-Pack [40] grid in the Brillouin zone is 4 × 4 × 1, and the vacuum space is 20 Å. During structural relaxation, the convergence criteria for the forces and the self-consistent energy are 1 × 10−2 eV Å−1 and 1 × 10−5 eV, respectively. The optical properties are calculated according to frequency-dependent dielectric response theory, including local field effects, within the random-phase approximation (RPA) [41].

Results and Discussion

Pristine SiC has a graphene-like structure with an alternating arrangement of C and Si atoms; its lattice parameter is 3.1 Å, with a bond length of 1.78 Å. From the band structure and density of states (DOS) in Figure 1b,c, it can be seen that pristine SiC is a nonmagnetic semiconductor with a bandgap of 2.5 eV; the conduction band minimum (CBM) is mainly contributed by the hybridization of the p states of Si and C, while the valence band maximum (VBM) is mainly contributed by the 2p state of C. The VBM and CBM are not located at the same high-symmetry point, indicating that pristine SiC is an indirect bandgap semiconductor. These results are consistent with previous reports [42], indicating that our computational method is reliable. In order to study the stability of metal adsorption on the SiC system, we constructed four adsorption models for each metal, as shown in Figure 1a. The adsorption sites were located above a Si atom, above a C1 atom, above a C2 atom, and above the Si-C bond. The adsorption energy (E_ads) was used to characterize the stability of the adsorption system and can be calculated using the following formula:

E_ads = E_M-SiC − E_SiC − µ_M

where E_M-SiC is the total energy of the M-adsorbed SiC system, including the interaction energy between the metal atom and the SiC monolayer; E_SiC is the energy of the pristine SiC monolayer; and µ_M is the chemical potential of an isolated metal atom. As listed in Table 1, the E_ads values of all systems are negative, indicating that the systems are stable.
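As a minimal sketch of how the adsorption energies in Table 1 are tabulated from DFT total energies, assuming hypothetical VASP total energies in eV (this is post-processing of computed outputs, not a VASP input):

```python
# Hypothetical DFT total energies (eV); E_ads = E(M@SiC) - E(SiC) - mu(M)
E_SiC = -148.20                          # pristine SiC supercell (assumed)
mu_M = {"Li": -0.30, "K": -0.25}         # isolated-atom chemical potentials (assumed)
E_M_SiC = {"Li": -150.45, "K": -149.95}  # adsorbed systems (assumed)

for metal in E_M_SiC:
    e_ads = E_M_SiC[metal] - E_SiC - mu_M[metal]
    stable = "stable" if e_ads < 0 else "unstable"
    print(f"{metal}: E_ads = {e_ads:+.2f} eV ({stable})")
```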
Different metals have different most stable adsorption sites. The most stable adsorption site for Ag is located at S_C2, while for Co, Cr, K, Li, Mn, Na, and Ni it is at S_H. In order to investigate the effect of metal adsorption on the electronic properties of the SiC systems, we studied the band structures of the different metal-adsorbed SiC systems, as shown in Figure 2. It can be seen that, except for the Ni-adsorbed SiC system, the spin-up and spin-down bands of the other systems do not overlap, indicating that these systems all exhibit magnetism. Among them, the adsorption of Ag, Al, Au, Cr, Cu, Fe, Ga, Mn, and Na atoms on SiC results in magnetic semiconductors, with bandgaps of 0.521 eV (Ag), 0.659 eV (Al), 0.837 eV (Au), 0.199 eV (Cr), 0.705 eV (Cu), 0.734 eV (Fe), 0.640 eV (Ga), 0.494 eV (Mn), and 0.442 eV (Na), respectively. In contrast, the adsorption of Ni on SiC leads to a non-magnetic semiconductor with a bandgap of 1.754 eV. The bandgaps of all the adsorbed systems are smaller than that of the unadsorbed system. Interestingly, the Bi, Ca, Co, K, and Li-adsorbed SiC systems exhibit semimetallic characteristics, indicating that they can be used as sensitive components in magnetic materials, electrodes, or electronic devices. Furthermore, Figure 3 shows the spin-polarized charge density of the magnetic systems. While the magnetic distribution of the Li and K-adsorbed SiC systems lies mainly on the SiC monolayer, the magnetic distribution of the other systems clearly lies mainly on the adsorbed metal atom and the atoms underneath it. The magnetic moments of the M-adsorbed SiC systems are 1.00 µB (Ag), 1.00 µB (Al), 1.00 µB (Au), 1.01 µB (Bi), 1.95 µB (Ca), 1.00 µB (Co), 4.26 µB (Cr), 1.00 µB (Cu), 2.00 µB (Fe), 1.00 µB (Ga), 0.99 µB (K), 1.00 µB (Li), 3.00 µB (Mn), and 1.00 µB (Na), respectively. This indicates that the adsorption of metal atoms can modulate the band structure and magnetic properties of SiC monolayers, so the M-adsorbed SiC systems can be applied in the production of spintronic devices.
Charge transfer is an important parameter for describing the interaction between the substrate material and the adsorbed atoms. The charge density difference (CDD) clearly shows the charge transfer and distribution, and the CDD of the M-adsorbed SiC systems can be calculated using the following formula:

Δρ = ρ_M-SiC − ρ_SiC − ρ_M

where Δρ is the CDD; ρ_M-SiC is the charge density of the M-adsorbed SiC system; ρ_SiC is the charge density of the pristine SiC monolayer; and ρ_M is the charge density of an isolated metal atom. The CDD of the M-adsorbed SiC systems is shown in Figure 4. It can be seen that there is a significant charge transfer between the metal atoms and the SiC monolayer. For the Ag, Au, Cu, K, and Ni-adsorbed SiC systems, the adsorbed atom is the acceptor and the SiC monolayer is the donor. For the other M-adsorbed SiC systems, the adsorbed atom is the donor and the SiC monolayer is the acceptor. Bader charges [43-45] are used to accurately quantify the amount of charge transfer. The calculated amounts of charge transferred between the various metals and the SiC monolayer are +0.446|e| (Ag), −0.588|e| (Al), +0.319|e| (Au), −0.109|e| (Bi), −0.766|e| (Ca), −0.110|e| (Co), −0.560|e| (Cr), +0.023|e| (Cu), −0.280|e| (Fe), −0.292|e| (Ga), +1.455|e| (K), −0.867|e| (Li), −0.468|e| (Mn), −0.391|e| (Na), and +0.045|e| (Ni), respectively.

The work function is a crucial parameter for evaluating the electron emission performance of optoelectronic materials and can be calculated using the following formula:

Φ = E_vacuum − E_Fermi

where Φ, E_vacuum, and E_Fermi represent the work function, the vacuum level, and the Fermi level, respectively. We have studied the work functions of the various metal-adsorbed SiC monolayers and present the results in Figure 5. It can be seen that the work function of pristine SiC is 4.8 eV, and the work functions of the M-adsorbed SiC systems fluctuate after adsorption. Interestingly, apart from the Bi-adsorbed SiC system, the work functions of all the other M-adsorbed SiC systems are lower than that of pristine SiC, e.g., 3.58 eV (Ag), 4.23 eV (A...).
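A hedged sketch of the bookkeeping behind Φ = E_vacuum − E_Fermi; in practice both levels are read from the planar-averaged electrostatic potential and the self-consistent run, and the level values used below are invented placeholders (only the resulting 3.58 eV for Ag@SiC echoes the text).

```python
# Work function Phi = E_vacuum - E_Fermi, and its change relative to pristine SiC.
levels = {                       # system: (E_vacuum, E_Fermi) in eV, invented
    "pristine SiC": (4.50, -0.30),
    "K@SiC":        (4.10,  1.40),
    "Ag@SiC":       (4.30,  0.72),
}

phi = {name: ev - ef for name, (ev, ef) in levels.items()}
ref = phi["pristine SiC"]
for name, p in phi.items():
    print(f"{name:>12}: Phi = {p:.2f} eV ({(ref - p) / ref * 100:+.1f}% below pristine)")
```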
One of the important indicators for evaluating the performance of photoelectronic devices is light absorption. The optical properties of matter are represented by the transverse dielectric function ε(ω) [46,47]:

ε(ω) = ε1(ω) + iε2(ω)

where ε1(ω) and ε2(ω) are the real and imaginary parts of the dielectric function, and ω is the photon frequency. ε2(ω) can be obtained from the dipole transition amplitudes from the valence band (occupied states) to the conduction band (unoccupied states), while ε1(ω) can be obtained from the Kramers-Kronig relationship. In addition, the absorption coefficient α(ω) can be obtained from ε1(ω) and ε2(ω) [48]:

α(ω) = (√2 ω/c) [ (ε1(ω)^2 + ε2(ω)^2)^(1/2) − ε1(ω) ]^(1/2) (5)

Figure 6 shows the light absorption spectra of the different metals adsorbed on the SiC monolayer. Pristine SiC mainly absorbs in the ultraviolet region and hardly absorbs in the visible range, indicating that SiC can be used as a UV photodetector but that its application in the visible range is limited. After metal adsorption, the absorption peak in the ultraviolet region is enhanced. The Bi, Ca, Ga, and Mn-adsorbed SiC systems show new absorption peaks in the visible range, and the Cu-adsorbed SiC system shows a strong absorption peak at 352.1 nm. These results indicate that the systems can be used in solar energy conversion photovoltaic devices.
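To illustrate Eq. (5), the sketch below evaluates α(ω) from tabulated ε1 and ε2; the dielectric values on the photon-energy grid are invented stand-ins for the RPA output.

```python
import numpy as np

C_LIGHT = 2.998e8  # m/s

def absorption_coefficient(omega, eps1, eps2):
    """alpha(omega) = sqrt(2)*omega/c * sqrt( sqrt(eps1^2 + eps2^2) - eps1 ),
    per Eq. (5); omega in rad/s, result in 1/m."""
    mod = np.sqrt(eps1 ** 2 + eps2 ** 2)
    return np.sqrt(2.0) * omega / C_LIGHT * np.sqrt(mod - eps1)

# Hypothetical dielectric data on a photon-energy grid (eV)
E = np.linspace(0.5, 6.0, 6)
omega = E * 1.602e-19 / 1.055e-34          # eV -> rad/s
eps1 = np.array([4.0, 3.5, 2.8, 1.5, 0.3, -0.5])
eps2 = np.array([0.1, 0.4, 1.2, 2.5, 3.0, 2.6])

for e, a in zip(E, absorption_coefficient(omega, eps1, eps2)):
    print(f"{e:4.1f} eV: alpha = {a:.3e} 1/m")
```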
Conclusions

The electronic, magnetic, and optical behaviors of the metal (M = Ag, Al, Au, Bi, Ca, Co, Cr, Cu, Fe, Ga, K, Li, Mn, Na, Ni) adsorbed SiC systems have been calculated from first principles. The binding energy results show that the most stable adsorption sites are S_C2 for Ag; S_H for Co, Cr, K, Li, Mn, Na, and Ni; and S_C1 for Al, Au, Bi, Ca, Cu, Fe, and Ga. All the M-adsorbed SiC systems are magnetic except for the Ni-adsorbed SiC system. The magnetic distribution of the Li and K-adsorbed SiC systems lies mainly on the SiC monolayer, while that of the other systems lies mainly on the adsorbed metal atom and the atoms underneath it. The band gap is smaller in the M-adsorbed SiC systems than in pristine SiC. The Ag, Al, Au, Cr, Cu, Fe, Ga, Mn, and Na-adsorbed SiC systems are magnetic semiconductors with band gaps of 0.521 eV (Ag), 0.659 eV (Al), 0.837 eV (Au), 0.199 eV (Cr), 0.705 eV (Cu), 0.734 eV (Fe), 0.640 eV (Ga), 0.494 eV (Mn), and 0.442 eV (Na), while SiC becomes semimetallic after the adsorption of Bi, Ca, Co, K, and Li atoms. The Bader charge results show that the adsorbed atom is more readily charged in the Ag, Au, Cu, K, and Ni-adsorbed SiC systems, while the SiC monolayer is more readily charged in the other M-adsorbed SiC systems. The work function of the K-adsorbed SiC system is 2.43 eV, 47.9% lower than that of pristine SiC, and this system can be used in electron-emitter devices. After metal atom adsorption, the absorption peak of the M-adsorbed SiC systems in the UV region is enhanced, and new absorption peaks appear in the visible range for the Bi, Ca, Ga, and Mn-adsorbed SiC systems.
An Integrated Approach to Determine the Boundaries of the Azaphilone Pigment Biosynthetic Gene Cluster of Monascus ruber M7 Grown on Potato Dextrose Agar

Monascus-type azaphilone pigments (MonAzPs) are produced in multi-thousand-ton quantities each year and used as food colorants and nutraceuticals in East Asia. Several groups, including ours, described MonAzPs biosynthesis as a highly complex pathway with many branch points, affording more than 110 MonAzP congeners in a small group of fungi in the Eurotiales order. MonAzPs biosynthetic gene clusters (BGCs) are also very complex and mosaic-like, with some genes involved in more than one pathway, while other genes play no apparent role in MonAzPs production. Due to this complexity, MonAzPs BGCs have been delimited differently in various fungi. Since most of these predictions rely primarily on bioinformatic analyses, it is possible that genes immediately outside the currently predicted BGC borders are also involved, especially those whose function cannot be predicted from sequence similarities alone. Conversely, some peripheral genes presumed to be part of the BGC may in fact lie outside the boundaries. This study uses a combination of computational and transcriptional analyses to predict the extent of the MonAzPs BGC in Monascus ruber M7. Gene knockouts and analysis of MonAzPs production in the mutants are then used to validate the prediction, revealing that the BGC consists of 16 genes, extending from mrpigA to mrpigP. We further predict that two strains of Talaromyces marneffei, ATCC 18224 and PM1, encode an orthologous but non-syntenic MonAzPs BGC with 14 genes. This work highlights the need to use comprehensive, integrated approaches for the more precise determination of secondary metabolite BGC boundaries.

INTRODUCTION

Monascus-type azaphilone pigments (MonAzPs) are a complex mixture of secondary metabolites (SMs) with a tricyclic azaphilone scaffold, produced by a few fungal genera in the Eurotiales order such as Monascus and Talaromyces spp. MonAzPs are traditionally classified as red, orange, and yellow pigments based on their absorbance maxima (Feng et al., 2012). To the best of our knowledge, more than 110 MonAzPs components have been identified from various fungi (Chen et al., 2019). As colorants, MonAzPs have been widely used in various food products for centuries, especially in Southeast Asian countries. Moreover, MonAzPs also possess wide-ranging biological activities such as preventing hypertension (Lee and Pan, 2012a), lowering cholesterol levels (Lee et al., 2010), causing hypolipidemic effects (Lee and Pan, 2012b), and displaying anti-obesity (Choe et al., 2012) and antitumor activities (Akihisa et al., 2005). Investigations of the MonAzPs biosynthetic pathway started in the 1960s (Birch et al., 1962; Kurono et al., 1963; Hadfield et al., 1967). With the advent of fungal genome sequencing, several groups, including ours, proposed a unified MonAzPs biosynthetic process active in various Monascus species and strains that differ in their azaphilone pigment and citrinin productivities (He and Cox, 2016; Chen et al., 2019). This pathway was found to consist of a trunk pathway with many biosynthetic branches that use enzymes with substrate and product flexibility. It was also seen to utilize adventitious biochemical or chemical transformations, and to incorporate some biosynthetic steps that are still not well characterized (Chen et al., 2019).
Functional studies on the biosynthesis of MonAzPs utilized a combination of targeted gene knockouts, heterologous gene expression, and in vitro chemical and enzymatic reactions (Balakrishnan et al., 2013, 2014a,b,c, 2015, 2017a; Xie et al., 2013; Liu et al., 2014; Liu J. et al., 2016; Chen et al., 2017; Liang et al., 2018; Li et al., 2020). These studies also revealed that the MonAzPs biosynthetic gene cluster (BGC) in Monascus ruber M7 additionally encodes the biosynthesis of monasones, anthraquinone-type SMs with antibacterial activities (Li et al., 2020). The functional and structural complexities of MonAzPs BGCs and biosynthetic processes, and their variability among different fungi, make it difficult to predict the extent of these BGCs, especially when relying only on routine sequence similarity searches to draw cluster boundaries. It remains possible that genes outside the currently predicted boundaries of the BGC are also involved in MonAzPs production, especially when their functions cannot be easily predicted from the similarities of their encoded proteins alone. Conversely, it is also possible that bioinformatic methods over-estimate the extent of MonAzPs BGCs, and include genes in the predicted clusters that have in fact no role in pigment biosynthesis. Such prediction mistakes may undermine biosynthetic proposals by omitting important genes or including spurious ones, thus highlighting the need for more comprehensive prediction workflows to delimit BGC boundaries. The carbon skeletons of SMs are often synthesized by "core" enzymes such as polyketide synthases (PKSs) and nonribosomal peptide synthetases (NRPSs). Several widely used software tools that predict fungal SM BGCs, including SMURF and antiSMASH, detect the genes encoding such core enzymes, and anchor the predicted BGCs around these genes (Khaldi et al., 2010; Blin et al., 2019). However, these very useful software tools are not particularly well suited to define BGC boundaries. To address this need, Takeda et al. (2014) devised a novel comparative genomics method to predict the extent of SM BGCs by searching for gene similarities in genome sequence assemblies, and by evaluating the presence of similar genes even in non-syntenic blocks. This method made it possible to better identify known SM BGCs featuring core genes, and some even without such anchors, in the genome sequences of 10 filamentous fungi (Takeda et al., 2014). The biosynthesis of fungal SMs is governed by a hierarchical regulatory network that often involves pathway-specific regulators (Lyu et al., 2020). Some pathway-specific regulators control the transcription of all the genes involved in the production of a given SM, while others regulate only a key subset of the structural genes. For example, the transcription factor Sol4 governs all six biosynthetic genes in the BGC of solanapyrone, a polyketide-derived phytotoxic SM from the fungus Ascochyta rabiei (Kim et al., 2015). Such pathway-specific regulators may also be exploited for cluster boundary predictions. Thus, the boundaries of the BGC for azanigerone A, an azaphilone pigment from Aspergillus niger, were predicted using RT-PCR analysis in a strain with an activated pathway-specific positive regulator (Zabala et al., 2012). 
In contrast, the expression of only some key structural genes is modulated by the pathway-specific regulators for apicidin, a histone deacetylase inhibitor, fusaric acid, a mycotoxin produced by fusaria, and sterigmatocystin, a carcinogenic mycotoxin produced by aspergilli (Jeon et al., 2011; Studt et al., 2016; Wiemann et al., 2018). Finally, pathway-specific regulators may also modulate the expression of genes with no obvious function in SM biosynthesis, resistance, or export, as is the case with the fusarin C BGC in Fusarium fujikuroi (Niehaus et al., 2013). Therefore, transcription analysis that relies solely on the differential expression of genes governed by a pathway-specific regulator to predict SM BGC boundaries may also omit important genes, or overestimate BGC size due to pleiotropic effects or the extent of euchromatic regions. In the current study, we determined the boundaries of the MonAzPs BGC of Monascus ruber M7 by an integrated approach. We compared a 100 kb stretch of the M. ruber M7 genome flanking the MonAzPs PKS gene mrpigA to the genome sequences of eight other species of filamentous fungi. Next, we conducted a differential transcriptomic analysis of the MonAzPs BGC in M. ruber M7 and its knockout mutant deficient in the pathway-specific regulator MrPigB. Finally, we knocked out genes at the predicted BGC boundaries, and compared the MonAzPs metabolic profiles of the resulting strains to that of the wild-type strain. This work provides a more accurate prediction of the extent of the MonAzPs BGC in M. ruber M7, and by extension in other Monascus-type azaphilone pigment producer fungi. It also exemplifies a workflow to predict, with higher confidence, the boundaries of similarly complex SM BGCs in other fungi. Fungal Strains, Culture Conditions, and DNA Extraction The wild-type strain Monascus ruber M7 (CCAM 070120, Culture Collection of the State Key Laboratory of Agricultural Microbiology, China Center for Type Culture Collection, Wuhan, China) (Chen and Hu, 2005; Liu et al., 2014; Liu J. et al., 2016; Chen et al., 2017, 2019; Li et al., 2020) and its derivatives used in this study are listed in Table 1. For the generation of the mrpigAup1, mrpigAup2, mrpigPdown1, and mrpigPdown2 strains, initial transformants were selected on potato dextrose agar (PDA) medium containing 30 µg/mL hygromycin B (Sigma-Aldrich, Shanghai, China) at 28 °C. For phenotypic characterization, all the tested strains were cultivated in triplicate on PDA plates at 28 °C for 10 d. Fungal genomic DNA was isolated from mycelia grown on cellophane membranes covering PDA plates, using the cetyltrimethylammonium bromide (CTAB) method (Shao et al., 2009). Bioinformatic Methods The nucleotide and deduced amino acid sequences of the genomes of nine strains of filamentous fungi, including M. ruber M7, were retrieved from the Broad Institute and GenBank, as shown in Supplementary Table 1. To predict the MonAzPs gene cluster boundaries, the predicted proteomes encoded by these genome sequences were subjected to comprehensive pairwise comparisons (Takeda et al., 2014). Briefly, pairwise similarities among the deduced proteomes of M. ruber M7 and the other species were determined first, to detect orthologous, co-located genes that may form a BGC "seed" region (e < 1.0e−10; gap penalty: −0.2; mismatch penalty: −0.2; Figures 1A,B). 
In the second step, the seed region was extended and the boundaries of the extended BGC were trimmed (extension length: 35 genes; negative penalty: −0.3; Figures 1C,D), as previously described (Takeda et al., 2014). The 100-kb sequences flanking the MonAzPs polyketide synthase MrPigA (Xie et al., 2015) in the genome of M. ruber M7 were submitted to antiSMASH 5.0 (Blin et al., 2019) and PRISM (Skinnider et al., 2020) for SM gene cluster characterization. The MIBiG 2.0 curated repository for SM BGCs was also considered (Kautsar et al., 2020). The SMURF and CASSIS servers could not be accessed, or returned uncharacterized errors. The amino acid sequences encoded by the genes were deduced using FGENESH, and analyzed using Pfam 27.0. Similarities of the deduced amino acid sequences were analyzed using BLASTP. Quantitative Reverse Transcription PCR (RT-qPCR) Analysis The wild-type M. ruber M7 and the pathway-specific regulator knockout strain mrpigB were cultured in PDB medium with shaking at 180 rpm for 6 days at 28 °C (start of the active production phase for MonAzPs). RT-qPCR was performed by Wuhan Goodbio Technology Co., Ltd. (Wuhan, Hubei, China) as described by Liu et al. (2014). For both the wild-type and mrpigB strains, three biological replicates were analyzed, and for each sample, three technical replicates for each targeted gene were performed. GAPDH was used as the reference gene, and the relative expression fold-change was calculated using the comparative CT method. Significance analysis was performed using the one-way ANOVA test, and the significance level was set at 0.05. The primers used are listed in Supplementary Table 2. Gene Knockouts M. ruber M7 knockout strains mrpigA, mrpigO, and mrpigP have been described (Xie et al., 2013, 2015; Chen et al., 2017; Li et al., 2020). For the deletion of the mrpigAup1 gene, a gene disruption cassette carrying the hygromycin B resistance gene (hph) flanked by targeting arms (TA) derived from the 5′ or 3′ regions of mrpigAup1, respectively, was prepared using the double-joint PCR method (Yu et al., 2004). Briefly, the 5′ and 3′ flanking regions (910 and 893 bp, respectively) of the mrpigAup1 gene were amplified with the primer pairs P1-P2 and P3-P4, respectively. The 2.1 kb hph cassette was amplified from plasmid pSKH (He et al., 2013) with the primer pair P7-P8. The PCR products were purified with a TransGen gel purification kit (TransGen, Beijing, China), the three amplicons were mixed at a 1:1:2 molar ratio, and then fused by PCR using primer pair P1-P4. The fused PCR product was purified, cloned into pMD19-T (Takara, Dalian, China), and confirmed by sequencing. The disruption cassette was then transferred from the resulting plasmid into the KpnI and XbaI sites of pCAMBIA3300 (He et al., 2013) to generate plasmid pCPIGAUP1. Analogous strategies were used to generate the pCPIGAUP2, pCPIGPDOWN1, and pCPIGPDOWN2 plasmids for the deletion of mrpigAup2, mrpigPdown1, and mrpigPdown2, respectively, using primers listed in Supplementary Table 3. FIGURE 1 | Motif-independent comparative genomic prediction of the MonAzPs BGC boundaries. (A) Homology search against the deduced proteome of T. marneffei ATCC 18224 using the MonAzPs biosynthesis-related proteins of M. ruber M7 as the bait. Dashed lines, gene pairs encoding homologous proteins (e < 1.0e−10 for the encoded proteins). Due to mis-annotation, XP_002149763 is described in NCBI as a separate gene; however, this nucleotide sequence is in fact part of the XP_002149764 gene. 
(B) Local protein sequence alignment using the Smith-Waterman algorithm. Pairs of contiguous genes encoding MrPigG to MrPigK in M. ruber M7 and from XP_002149762 to XP_002149767 in T. marneffei ATCC 18224 form the seed region (R_0) for predicting the MonAzPs BGC. SW scores shown were calculated as described (Takeda et al., 2014). (C) Extension of the gene cluster. The seed region (R_0) was extended to include a total of 35 genes (Takeda et al., 2014). The symbols I_x and I_y represent the stretches of genes added to the seed region in the M. ruber M7 and the T. marneffei ATCC 18224 genomes, respectively. (D) Trimming of the BGC boundaries. i_begin and i_end, the locations of the genes at the beginning and end, respectively, of the MonAzPs gene cluster in M. ruber M7; j_begin and j_end, the corresponding gene locations in T. marneffei ATCC 18224. CB values are the maximum cumulative SW scores of the predicted BGCs with the upstream and the downstream boundaries indicated (Takeda et al., 2014). The gene disruption plasmids were individually transformed into Agrobacterium tumefaciens EHA105 (Hood et al., 1993) using the freeze-thaw method (Yu et al., 2003), and used for the transformation of M. ruber M7 to yield the gene knockout strains mrpigAup1, mrpigAup2, mrpigPdown1, and mrpigPdown2, using methods described previously (Shao et al., 2009; Wang et al., 2011). Gene knockouts and the absence of the wild-type allele in the mutants were confirmed by PCR and end sequencing of the resulting amplicons. MonAzPs Analysis Freshly harvested spores (5 × 10⁴) of representative isolates of the gene knockout strains mrpigAup1, mrpigAup2, mrpigA, mrpigO, mrpigP, mrpigPdown1, and mrpigPdown2, and of the wild-type strain M. ruber M7 were spread on cellophane membranes on PDA plates, and cultivated at 28 °C for 10 days. The mycelia were harvested by scraping from the membranes, freeze-dried, and ground in a mortar with a pestle under liquid nitrogen. The mycelium powder (0.05 g) was suspended in 1.5 mL methanol, incubated at 65 °C for 1 h, then centrifuged at 10,000 × g for 10 min to collect the supernatant for analysis. HPLC was performed following the method described by Liu Q. et al. (2016) on a Waters system fitted with an Inertsil ODS-3 C18 column (250 × 4.6 mm, 5.0 µm, GL Sciences). The mobile phases consisted of water (A), acetonitrile (B), and 0.5% phosphoric acid in water (C). The flow rate was kept at 0.8 mL/min. The system was run with the following gradient program: from 40 to 30% A for 3 min, from 30 to 5% A for 22 min, 5% A for 5 min, from 5 to 40% A for 1 min, and 40% A for 5 min. C was kept constant at 5% throughout the program. Absorbance was monitored with a 2487 UV/Vis Detector (Waters) at 190- to 700-nm wavelength. Metabolites were identified based on comparison to authentic standards (Chen et al., 2017, 2019). Bioinformatic Prediction of the MonAzPs BGC Boundaries MonAzPs BGCs display two characteristically different architectures in the sequenced genomes of Monascus spp. (Chen et al., 2017). In M. ruber M7 and M. purpureus strains NRRL 1596 and YY-1, the MonAzPs cluster is interrupted by the pigL gene that encodes an ankyrin repeat protein with no discernible function in MonAzPs biosynthesis (Figure 2A; Chen et al., 2017). In M. ruber NRRL 1597 and M. 
pilosus, the pigL gene is replaced by a six-gene sub-cluster encoding proteins with putative transport or regulatory functions, but none of these proteins are predicted to be necessary for MonAzPs biosynthesis (Balakrishnan et al., 2014a; Chen et al., 2017). Since the insertion of this six-gene sub-cluster would easily confuse bioinformatic methods for BGC boundary prediction, we concentrated on the near-identical genomic loci of the M. ruber M7 and the M. purpureus strains. The MonAzPs BGC in M. purpureus was originally delineated by Balakrishnan et al. (2014a) by manually comparing Monascus spp. genomic loci, and described to include 21 genes over 60 kb, extending from mpp-1 (ortholog of mrpigAup1 in M. ruber M7) to mpp-14 (ortholog of mrpigPdown4 in M7) (Figure 2A). MIBiG (Minimum Information about a Biosynthetic Gene Cluster; Kautsar et al., 2020), a curated repository of BGCs, describes the M. pilosus MonAzPs cluster to extend from AGN71602 to AGN71625, corresponding to mrpigAup2 to mrpigP on the M. ruber M7 genome, but inclusive of the six-gene sub-cluster with no role in MonAzPs biosynthesis (Figure 2A). In contrast, antiSMASH (Blin et al., 2019), the computational method most frequently used to define SM BGCs, delineated the cluster to include only 11 genes (mrpigAup2 to mrpigI), spanning approximately 30 kb in the M. ruber M7 or the two M. purpureus genomes (Figure 2A). PRISM (Skinnider et al., 2020), another SM BGC prediction tool, identified only mrpigA as the sole constituent of the cluster. Unfortunately, neither SMURF (Khaldi et al., 2010) nor CASSIS (Wolf et al., 2016) provided useful output for our comparisons, due to persistent problems with the webservers. Considering these radically different cluster boundary predictions, we sought a more definite workflow to delimit the complex MonAzPs BGC. To do this, we first considered the motif-independent comparative genomics method described by Takeda et al. (2014). In a preliminary step, we compared a 100-kb stretch bracketing the MonAzPs polyketide synthase mrpigA (Xie et al., 2015) on the M. ruber M7 genome with the genome sequences of eight other species of filamentous fungi from the Eurotiales order (Supplementary Table 1). We did not include the genomes of other Monascus spp. in this comparison, considering that the extended MonAzPs loci in these genomes are syntenic apart from the presence of pigL vs. the six-gene sub-cluster, and that the encoded protein sequences are near-identical (95-100% identity) (Chen et al., 2017). Thus, these loci would have furnished no useful information for our analysis. The preliminary comparison of the M. ruber M7 genes identified the highest similarities to the genome of Talaromyces (Penicillium) marneffei ATCC 18224, a thermally dimorphic opportunistic fungal pathogen endemic in Southeast Asia and associated with immunocompromised individuals. Thus, we used the genome sequence of this fungus as the comparator for our subsequent analysis. Using the optimized parameters (Takeda et al., 2014), we assigned the seed region (R_0) of the MonAzPs cluster to include mrpigG to mrpigK (XP_002149762 to XP_002149767 in the T. marneffei ATCC 18224 genome; Figures 1A,B). To extend the seed region, 15 genes bracketing R_0 were added to each end (regions I_x, Figure 1C), yielding a 35-gene candidate BGC (Takeda et al., 2014). Finally, the boundaries of the candidate cluster were trimmed at both ends, based on the local maxima of the combined scores for the member genes (Figure 1D). 
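To make the trimming step concrete, the sketch below illustrates the idea on toy data. It is our simplified reading of the Takeda et al. (2014) procedure, not the authors' implementation; all scores, indices, and the trim helper are hypothetical.

```python
# Toy sketch of the boundary-trimming idea (Takeda et al., 2014). Genes with an
# ortholog in the comparator genome carry a normalized SW similarity score;
# genes without one carry a fixed negative penalty. All numbers are invented.
scores = [-0.3, 1.2, 0.8, -0.3, 2.1, 1.5, 0.9, -0.3, -0.3, 1.1, -0.3, -0.3]
seed_lo, seed_hi = 4, 6  # indices of the seed region R_0 (hypothetical)

def trim(scores, lo, hi):
    """Extend the seed to the boundaries that maximize the cumulative score."""
    best_lo, run, best = lo, 0.0, 0.0
    for i in range(lo - 1, -1, -1):        # walk left, keep the best prefix
        run += scores[i]
        if run > best:
            best, best_lo = run, i
    best_hi, run, best = hi, 0.0, 0.0
    for i in range(hi + 1, len(scores)):   # walk right, keep the best suffix
        run += scores[i]
        if run > best:
            best, best_hi = run, i
    return best_lo, best_hi

print(trim(scores, seed_lo, seed_hi))      # -> (1, 9)
```

Flanking genes are absorbed only while they raise the cumulative score, so runs of penalty genes beyond the last productive gene are trimmed away; this is how inflection points in the score profile (Figure 1D) translate into candidate boundaries.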
This analysis predicted that the MonAzPs BGC extends from mrpigA to mrpigP in the M. ruber M7 genome (Figure 2A). Transcriptional Analysis of the MonAzPs Locus in M. ruber M7 Analysis of mutants of the pathway-specific Zn(II)₂Cys₆ regulators has proven to be useful in demarcating the ends of SM BGCs (Zabala et al., 2012; Wiemann et al., 2013a). In M. ruber M7, MonAzPs biosynthesis is governed by the pathway-specific regulator MrPigB (PigR1 in M. pilosus) (Balakrishnan et al., 2013; Xie et al., 2013). To substantiate the bioinformatic prediction of the boundaries of the MonAzPs BGC, we used quantitative reverse transcription PCR (RT-qPCR) to compare the transcription levels of the 20 genes flanking mrpigA (i.e., mrpigAup2 to mrpigPdown2) at the start of MonAzPs accumulation in the wild-type M. ruber M7 with those of the same genes in the mrpigB mutant. As shown in Figure 2B, all these genes were expressed in the wild-type strain, apart from mrpigI, which encodes a transcription factor with no apparent role in MonAzPs biosynthesis (Chen et al., 2017). The expression of mrpigAup2 and mrpigPdown2, the two genes at the edges of the MonAzPs locus, was not affected by the deletion of mrpigB, indicating that these genes may lie outside the boundaries of the BGC, as expected from the comparative genomic analysis (Figure 1). The expression of six MonAzPs genes (mrpigA, mrpigH, and mrpigM-mrpigP) was almost completely abolished in the mrpigB strain, as observed earlier (Li et al., 2020). The transcription of a further six genes was also moderately downregulated (p < 0.05). These included mrpigD and mrpigE, genes with established roles in MonAzPs biosynthesis. However, the expression of mrpigAup1 and mrpigPdown1 was also significantly reduced in the mrpigB strain, indicating that the transcription of these genes with no known roles in MonAzPs biosynthesis is, nevertheless, activated by MrPigB (Figure 2B). These two genes were predicted to lie outside the MonAzPs BGC by comparative genomic analysis (Figure 1). The transcriptional analysis is further complicated by the fact that the expression of four genes well inside the MonAzPs locus (mrpigC, mrpigJ, mrpigK, and mrpigL) was not affected by the deletion of mrpigB. Among these, the putative ankyrin repeat protein MrPigL plays no role in the production of MonAzPs (Chen et al., 2017). However, mrpigC encodes the C11-ketoreductase, while mrpigJ and mrpigK encode the two subunits of the fatty acid synthase; all three are necessary for MonAzPs biosynthesis (Chen et al., 2017). Thus, while mrpigC, mrpigJ, and mrpigK must be part of the MonAzPs BGC, their expression is not directly controlled by MrPigB, a consequence of the complex, mosaic-like structure of this BGC (Li et al., 2020). Combining the comparative genomic prediction with the transcriptomic analysis, mrpigAup2 and mrpigPdown2 are not likely to be part of the MonAzPs BGC, while the cluster can be confidently predicted to extend at least from mrpigA to mrpigP. However, the status of mrpigAup1 and mrpigPdown1 remains ambiguous in light of the disagreement between the bioinformatic prediction and the transcription data (Figure 1 vs. Figure 2). Knockout of the Genes at the Predicted Boundaries of the MonAzPs BGC Since the comparative genomic and transcriptomic analyses still did not provide a definitive answer on the extent of the MonAzPs BGC, we sought additional functional proof for the roles of the genes at the proposed BGC boundaries. 
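For readers unfamiliar with the comparative CT method named in the RT-qPCR section above, the following minimal sketch shows the 2^(−ΔΔCt) calculation behind such fold-changes. All Ct values, strain labels, and the fold_change helper are hypothetical, chosen only to mimic a gene whose expression is abolished in the regulator mutant.

```python
import numpy as np

# Hypothetical Ct values (three technical replicates each); not data from this study.
ct = {
    ("wt", "mrpigA"): [22.1, 22.3, 22.0], ("wt", "GAPDH"): [18.0, 18.1, 17.9],
    ("mut", "mrpigA"): [29.5, 29.8, 29.6], ("mut", "GAPDH"): [18.2, 18.0, 18.1],
}

def fold_change(gene, reference="GAPDH"):
    """Relative expression of `gene` in the mutant vs. the wild type (2^-ddCt)."""
    d_wt = np.mean(ct[("wt", gene)]) - np.mean(ct[("wt", reference)])
    d_mut = np.mean(ct[("mut", gene)]) - np.mean(ct[("mut", reference)])
    return 2.0 ** -(d_mut - d_wt)

print(fold_change("mrpigA"))  # ~0.006: expression nearly abolished in the mutant
```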
Thus, we created four new gene knockout strains, affecting the genes that lie at or beyond the border of the MonAzPs BGC in M. ruber M7. Five hygromycin B-resistant, putative mrpigPdown2 knockout isolates were obtained by ATMT (Agrobacterium tumefaciens-mediated transformation). The putative disruptants and the wild-type control were characterized by PCR analysis, using primer pairs specific for the hygR transgene (P7-P8); the mrpigPdown2 target gene (P25-P26); the gene replacement cassette (P21-P24); and the gene replacement locus (P33-P8 and P7-P34, Figure 3). All five putative disruptants yielded amplicons whose sizes and nucleotide sequences were consistent with the integration of the hygR gene into the targeted locus. At the same time, these isolates failed to provide amplicons specific for the target gene, consistent with the loss of mrpigPdown2 from the genome. The gene knockouts were validated using analogous procedures for mrpigAup1 (six verified isolates; Supplementary Figure 1) and for the other targeted genes. Phenotypic and MonAzPs Production Analysis Confirms the BGC Boundaries We cultivated the wild-type strain M. ruber M7 and representative isolates of the knockout strains mrpigAup1, mrpigAup2, mrpigPdown1, and mrpigPdown2, together with the previously isolated disruptant strains mrpigA, mrpigO, and mrpigP, and analyzed their colony phenotypes and MonAzPs production (Figure 4). The wild-type M. ruber M7 strain is able to produce several major MonAzPs congeners (Chen et al., 2017). Its product spectrum is determined by the culture conditions, including the ingredients of the culture media and the cultivation time. The growth of strain M7 on PDA plates at 28 °C for 10 days afforded colonies of an intense orange color, corresponding to the production of four well-known MonAzPs: two yellow pigments (monascin 1 and ankaflavin 3) and two orange pigments (rubropunctatin 2 and monascorubrin 4, Figure 4). Compared to the wild-type strain, no gross differences in colony growth, morphology, or coloration were observed upon the deletion of the mrpigAup1 and mrpigAup2 genes at the "left" border, or with the knockout of the mrpigPdown1 and mrpigPdown2 genes at the "right" border. The production of MonAzPs was also unchanged in these strains, with the yields and mutual ratios of compounds 1, 2, 3, and 4 indistinguishable from those of the wild-type strain (Figure 4). These results confirm that mrpigAup2 and mrpigPdown2 are not part of the MonAzPs BGC, in agreement with the comparative genomic and transcriptional analyses (Figures 1, 2). However, these results also show that mrpigAup1 and mrpigPdown1 do not play discernible roles in MonAzPs biosynthesis and thus should not be considered part of the MonAzPs BGC, in spite of the contrary prediction from the transcriptional analysis (Figure 2). In contrast, the mrpigA and mrpigO strains exhibited significantly different phenotypes and MonAzPs production profiles compared to those of M. ruber M7. The colonies of mrpigA, with the deletion affecting this gene at the "left" border of the cluster, were off-white, with the production of all MonAzPs completely abolished. This is in agreement with the crucial role of the MrPigA PKS in the formation of the polyketide core of all MonAzPs in this fungus. The colonies of the mrpigO strain, with the knockout eliminating the second gene from the "right" border of the predicted BGC, were yellow to pale orange. 
This strain produced two known MonAzPs derailment products, the yellow pigments acetyl-monasfluol A and B (compounds 5 and 6; Chen et al., 2019, Figure 4C). This is consistent with MrPigO being a deacetylase, and emphasizes the central role of this enzyme in favoring the appropriate intramolecular Knoevenagel cyclization that yields MonAzPs congeners with the characteristic linear tricyclic ring system, instead of the angular system seen in shunt products 5 and 6 (Chen et al., 2019). The phenotype of the mrpigP strain, with the deletion affecting this gene at the "right" border of the cluster, was similar to that of M. ruber M7. The mrpigP strain produced large amounts of MonAzPs, although with a shift in the product ratios toward monascin 1 and ankaflavin 3. MrPigP is an MFS transporter with a role in the export of monasones, the naphthoquinone co-metabolites of this supercluster (Li et al., 2020). However, deletion of this gene was seen earlier not to influence MonAzPs production in liquid media (Li et al., 2020), so this transporter plays, at best, a supplementary part in azaphilone pigment production. Taken together, our comparative genomic, differential transcriptomic, and gene knockout data predict that the MonAzPs BGC of M. ruber M7 extends from mrpigA to mrpigP, with mrpigAup1, mrpigAup2, mrpigPdown1, and mrpigPdown2 lying just outside the borders of the cluster. Bioinformatic Analysis of the Azaphilone Gene Cluster in T. marneffei The gene cluster in T. marneffei ATCC 18224 that we took advantage of to predict the boundaries of the MonAzPs BGC in M. ruber M7 is nearly identical to a genomic locus in T. marneffei PM1. This locus in strain PM1 was described to contain a 5-gene BGC responsible for the production of a variety of soluble red-colored Monascus-type azaphilone pigments (Woo et al., 2014). These compounds are γ-vinylogous pyridines that are produced when the orange azaphilone pigments rubropunctatin 2 and monascorubrin 4 form adducts with various amines, including amino acids from the cell or the media, to effect an O-to-N substitution. The same BGC was also described to be responsible for the biosynthesis of yellow azaphilone pigments such as ankaflavin 3 in strain PM1, as expected (Woo et al., 2014). Comparison of the M. ruber M7 MonAzPs BGC with the genome sequences of T. marneffei ATCC 18224 and PM1 showed that most MonAzPs biosynthetic genes are conserved and localized in a single locus in the two Talaromyces genomes (Supplementary Table 4), although these pigment BGCs in Monascus vs. the two T. marneffei strains are not syntenic. Genes similar to mrpigF and mrpigO of M. ruber M7 are encoded outside of the T. marneffei BGCs, while orthologs of the ankyrin repeat protein-encoding mrpigL are not immediately obvious, nor are they necessary for azaphilone pigment biosynthesis. We propose that the apparent Monascus-type azaphilone BGC in T. marneffei ATCC 18224 includes the genes XP_002149758 to XP_002149772 (corresponding to KFX40792 to KFX45538 in strain PM1; Supplementary Table 4). Together with the additional MonAzPs gene homologs that are outside the BGCs in the Talaromyces genomes, this cluster is the functional equivalent of the MonAzPs BGC of M. ruber M7 (Supplementary Table 4; Woo et al., 2014; Chen et al., 2019). Importantly, our model for the Talaromyces BGCs includes a much more extensive genomic region than the originally delineated five-gene cluster (Woo et al., 2014), again highlighting the difficulties of boundary predictions for complex BGCs in fungi. 
Functional verification of this more extensive BGC using transcriptomic and gene knockout studies is still necessary to clarify the biosynthesis of Monascus-type azaphilone pigments in the two Talaromyces strains. DISCUSSION A significant aspect of fungal development is the production of SMs that serve as allelochemicals, pigments, intra- and interspecies signaling compounds, and modulators of metabolic processes. Many of these molecules display a broad range of antibiotic, antitumor, enzyme modulatory, and immunosuppressive activities that can be exploited for drug discovery and development (Hyde et al., 2019). Advances in genomics revealed that the genes necessary for the production of a given SM are typically (although not always) clustered on the fungal genome (Weber et al., 2009). Such BGCs encode core enzymes for SM scaffold biosynthesis; modifying enzymes for scaffold editing, complexity generation, and SM maturation; and transcription factors, self-resistance, and SM export mediators (Keller, 2019). Traditionally, SM BGCs are detected in genome sequence assemblies by identifying the core genes (such as PKSs, NRPSs, or terpene synthases), based on their conserved sequence motifs. Next, the putative BGCs are extended by recruiting flanking genes that are similar to genes frequently found in known SM BGCs, including those that encode hydroxylases, oxidases, methyltransferases, transcription factors, and transporters (Keller, 2019). This process is often facilitated by specialized software tools, such as SMURF, antiSMASH, CASSIS, or PRISM, and may utilize curated repositories such as MIBiG (Khaldi et al., 2010; Wolf et al., 2016; Blin et al., 2019; Kautsar et al., 2020; Skinnider et al., 2020). While this workflow has met with spectacular success in the last two decades (Khaldi and Wolfe, 2011; Wiemann et al., 2013b; Leadmon et al., 2020), SM BGCs without core genes remain difficult to detect, and cluster boundaries usually remain provisional or even arbitrary. The MonAzPs BGC is a prime example of the latter problem: different prediction methods and different authors delimited this cluster with widely different boundaries (Figure 2A), despite the availability of multiple genome sequences from several MonAzPs-producing fungi, mostly Monascus spp. We decided to investigate this problem by cross-referencing a motif-independent, comparative genomics-based BGC prediction method (Takeda et al., 2014) with differential transcriptional analysis in a cluster-specific regulator mutant, and with gene knockouts followed by the analysis of MonAzPs production by the resulting strains. These approaches collectively defined the MonAzPs BGC of M. ruber M7 to extend from mrpigA to mrpigP, encompassing 16 genes. This assignment, supported by multiple lines of experimental evidence, offers a more secure basis for the retrobiosynthetic analysis of MonAzPs production, and provides a better focus for engineering approaches toward designer Monascus-type azaphilone pigments that may be useful to produce nutraceuticals, food and feed products, and specialty chemicals (Tolborg et al., 2020). Additional transcriptional studies utilizing different cultivation conditions may further refine BGC border assignments. The BGC responsible for MonAzPs production in Talaromyces (Penicillium) marneffei ATCC 18224 was also provisionally delimited during our comparative genomics process (Figure 1). T. 
marneffei was reported to produce MonAzPs-type soluble red pigments (Ogihara et al., 2000, 2001; Frisvad et al., 2013; Arai et al., 2015; Jin et al., 2018; Morales-Oyervides et al., 2020; Tolborg et al., 2020), and accordingly, orthologs of most of the M. ruber M7 MonAzPs biosynthetic genes were found to be encoded in the T. marneffei ATCC 18224 BGC, albeit in a non-syntenic arrangement (Chen et al., 2017, 2019; Pavesi et al., 2021). While the genes corresponding to mrpigF, mrpigO, and mrpigL were absent from the T. marneffei BGC, genes similar to mrpigF and mrpigO are encoded elsewhere in the genome of this fungus (Supplementary Table 4), and an mrpigL equivalent is not necessary for azaphilone pigment biosynthesis. Functional studies with a near-identical genomic locus in T. marneffei strain PM1 have been reported by Woo et al. (2014). Surprisingly, the knockdown of only five genes of this locus was shown to lead to the loss of Monascus-type azaphilone pigment production. Correspondingly, only pks3 (66% identity to mrpigA), rp1 (46% identity to mrpigB), rp2 (58% identity to mrpigK), rp3 (58% identity to mrpigJ), and rp4 (65% identity to mrpigH) were annotated as constituents of the Monascus-type azaphilone BGC in T. marneffei PM1. Further research is necessary to reveal why the knockdown of the genes orf3 (65% identity to mrpigN), orf4 (63% identity to mrpigD), orf5 (70% identity to mrpigC), orf7 (71% identity to mrpigG), orf8 (70% identity to mrpigE), or orf9 (57% identity to mrpigP) was seen to be neutral for pigment biosynthesis in T. marneffei PM1, when the knockout of the orthologous genes in M. ruber M7 eliminates or at least severely reduces the production of the classical MonAzPs 1-4 (Liu et al., 2014; Woo et al., 2014; Chen et al., 2017). In addition, gene knockdown experiments in T. marneffei PM1, targeting pks3 (XP_002149769 in strain ATCC 18224, the ortholog of the mrpigA PKS) and rp1 (XP_002149768 in strain ATCC 18224, the ortholog of mrpigB, encoding a transcriptional regulator), seem to also connect the BGC in strain PM1 to the production of citrinin, a mycotoxin with a similar but not identical chromophore to that of MonAzPs (Woo et al., 2014). Importantly, citrinin biosynthesis is governed in Monascus spp. by a BGC different from the one for MonAzPs (He and Cox, 2016), necessitating further evaluation of the proposed connection of the T. marneffei PM1 BGC (and its equivalent in strain ATCC 18224) to citrinin biosynthesis. Integration of further bioinformatic, transcriptomic, gene knockdown/gene knockout, gene overexpression, and metabolite analyses, as described here for the MonAzPs BGC of M. ruber M7, is expected to answer such outstanding questions. The MonAzPs BGC represents an especially tough challenge for cluster boundary prediction. This is because of the composite, mosaic-like nature of this BGC. First, it is a supercluster with genes necessary to produce two structurally different SM groups: MonAzPs-type azaphilones and monasone-type naphthoquinones (Li et al., 2020). Some genes are involved solely in the biosynthesis of one or the other SM group, while others are necessary for both. Next, MonAzPs production itself is highly complex, with many metabolic branch points opening up shunt pathways, and integrating fortuitous enzymatic and chemical reactions that afford a huge variety of pigment products (Chen et al., 2019). Last, this BGC also straddles genes or even sub-clusters (such as the six-gene sub-cluster in M. pilosus and M. 
ruber NRRL 1597) with no relationship to the production of MonAzPs or monasones (Balakrishnan et al., 2013; Chen et al., 2017). Any and all of these characteristics could easily confuse BGC border prediction methods that rely on a single type of dataset. Thus, during the trimming step of the comparative genomics-based prediction (Takeda et al., 2014), the calculated scores at mrpigG, mrpigK, and mrpigN all showed inflection points (Figure 1D), potentially misleading a user to call the end of the MonAzPs BGC at any of these genes. When analyzing differential transcriptomic data by comparing the wild-type and the mrpigB strains, the expression levels of the mrpigC, mrpigJ, mrpigK, or mrpigL genes were not significantly downregulated in the strain deficient in the cluster-specific regulator (Figure 2B). This could easily have been taken as a sign that the end of the BGC had already been reached. Conversely, although both the mrpigAup1 gene (at the left border) and the mrpigPdown1 gene (at the right border) were found to be downregulated in the mrpigB strain (Figure 2B), neither comparative genomics nor gene knockouts support the inclusion of these genes in the MonAzPs BGC (Figures 1, 4). Finally, gene disruption of mrpigL, a gene well within the BGC boundaries, does not affect the production of MonAzPs (Chen et al., 2017), again giving a false signal for reaching the end of the BGC. Based on these and similar examples, we believe that only the integration of all three methods used in this study (comparative genomics, differential transcriptomics, and gene deletion with subsequent metabolite analysis) can adequately identify BGC borders in fungi, at least in the case of highly complex clusters such as the one for MonAzPs production. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material. AUTHOR CONTRIBUTIONS QL designed and performed the experiments, contributed to data analysis, and wrote the draft manuscript. SZ, XW, and SG contributed to the bioinformatic analysis. XY and FC contributed to manuscript editing. IM designed the project, analyzed the data, and wrote and edited the manuscript. All authors have read and approved the manuscript prior to submission.
8,743
2021-06-16T00:00:00.000
[ "Biology" ]
Why did the COVID-19 Epidemic Stop in China and does not Stop in the Rest of the World? (Application of the Two-Component Model) The vastly different courses of the COVID-19 epidemic in China and the rest of the world are investigated and explained within a two-component epidemic model. The model is based on separate accounting of the contributions to the epidemic from two types of immune response to a viral infection: innate and adaptive immunity. Any infected person becomes asymptomatic with probability (1 − p) or symptomatic with probability p. In the first case, innate immunity is sufficient to protect the person. In the second case, innate immunity is insufficient, and adaptive immunity comes into play. In the asymptomatic state, the person remains outwardly healthy, mobile, and can spread the infection. In the symptomatic state, the person becomes ill, is isolated, and cannot spread the infection. We assume that the contribution to the epidemic process from asymptomatic carriers is dominant in comparison with the contribution from the usual incubation period in the symptomatic state. The key parameters of the model are the virus lifetime T in the asymptomatic state and the spread rate β. At moderate βT values, the model describes a long, slowly decreasing morbidity plateau, which transforms into a wave-like solution at βT ≫ 1. In the case of βT → ∞, which corresponds to a stable non-pathogenic strain, the model solution is limited to a single wave only. We believe that the spread of such a non-pathogenic strain and its subsequent dominance are responsible for ending the epidemic after the single wave of incidence in China. A way to stop the epidemic in the rest of the world may consist in displacing the circulating pathogenic virus with its stable non-pathogenic strain. Introduction The COVID-19 epidemic has divided the world into two unequal parts: China and the rest of the world. In China, the epidemic developed in the usual way: an exponential increase in the incidence, then a pronounced maximum, then a decline and the end of the epidemic. The whole cycle took about two and a half months. In the rest of the world, the epidemic was completely different. After a stage of exponential growth in daily morbidity, an inexplicable plateau set in in many countries. Then the second and third waves emerged, the epidemic is currently ongoing, and it is unclear how many more waves may follow. Nothing of the kind is observed in China. In previous works [1,2], it was shown that the asymmetric course of the epidemic, with a long plateau after a pronounced maximum, can be explained by the presence of asymptomatic carriers of infection: superspreaders. However, this model did not explain the emergence of repeated waves of the epidemic. In addition, the nature of the asymptomatic state remained unclear. Finally, the question of the difference in the courses of the epidemic in China and in the rest of the world was not raised at all. In this paper, the division of the infected into asymptomatic and symptomatic is substantiated. We associate this with two types of immunity: innate and adaptive. If the innate immunity is sufficient to limit and subsequently eliminate the virus, then the person becomes an asymptomatic carrier of the virus. If the innate immunity is not enough for this, then the person becomes ill, and adaptive immunity comes into play. The choice of the type of reaction is determined by the probability with which an infected person will get sick. 
This probability gives an idea of the degree of pathogenicity of the virus. With probability (1 − p) innate immunity is sufficient, and the infected person remains practically healthy. Then the adaptive immunity is not activated. In this case, the infection remains in the body for a long time, and all this time the infected person can spread the infection. This time can be incorporated in the model either directly through the lifetime T of the virus in the body of an asymptomatically infected person (the lifetime concept), or through the elimination rate ~1/T (the elimination rate concept). The article is structured as follows. In the next part, we look at the symptomatic and asymptomatic states as two modes of immune response to the virus. Response factors relevant to the two-component model are discussed here. In the third part, on the basis of these factors, the two-component model and its dynamic equations are built, and the dependence of the course of the epidemic on the parameters of the model is investigated. Comparing the solution with the observed course of the pandemic allows us to determine the values of the three parameters of the model. In the fourth part, the factor of the lifetime of the virus, which leads to a sequence of incidence waves, is considered separately. This makes it possible to interpret the incidence waves observed during the pandemic, as well as to explain the significant difference between the Chinese and world cases. The fifth part assesses the possible impact of virus mutations and parameter drift on the course of the epidemic. In the final part, we discuss the results obtained and consider an alternative approach to ending the epidemic, not associated with mass vaccination. Symptomatic and Asymptomatic States The biological characteristics of viruses and their interaction with multicellular organisms, despite numerous studies, remain poorly understood. The complexity of the problem is determined by the huge number of viruses, their constant variability, and their transitions from one species of organisms to another, as happened with the virus that causes COVID-19. To create a mathematical model of the development of the pandemic, we used the following propositions about the interaction of the virus and the human body, which, in our opinion, do not contradict existing ideas.  After entering the body, the virus begins to multiply;  Innate/background immunity triggers nonspecific defence agents;  Under certain conditions, this turns out to be sufficient and an equilibrium occurs at a safe level, which does not lead to the development of symptoms of infection, i.e., disease. We will further call this hidden infectious period the latency period (Figure 1). Since this is an equilibrium state, the system organism + virus can stay in it for a long time without any external manifestations. It is this state that we call asymptomatic. In the asymptomatic state, the person is infected but not sick;  If the background immunity turns out to be insufficient to establish a balance at a safe level, then the body begins to prepare an adaptive immune response in advance. After passing a certain threshold level of infection, also shown in Figure 1, symptoms develop, and the person becomes ill. In this state, the main role is no longer played by the background immunity, but by the specific immunity. It is the latter that is responsible for limiting the number of viruses and for recovery. 
It is this state that we call symptomatic;  The type of immune response is determined at some time point in the development of the infection, which we will call the bifurcation point. After it, either a symptomatic state develops (with a certain probability p), or an asymptomatic state (with probability 1 − p). For some time preceding the disease, when the background immunity has already exhausted its means and the virus continues to multiply, but the symptoms have not yet appeared, the person is also an asymptomatic carrier. This state is non-equilibrium and therefore relatively short-lived. This is the incubation period (see Figure 1). Thus, it is natural to expect that the incubation period is relatively short, and the latency period is long. Since during the period of illness the persons are isolated and cannot spread the infection, they can do this either in the incubation or in the latency phase. Which of these two phases is critical for the spread of the virus? On the one hand, the latency period is much longer, which increases the contribution of this phase to the spread. On the other hand, in the incubation phase, the number of viruses in the body is much higher, which somewhat evens the chances. Therefore, the key to answering the question posed is the balance of probabilities at the bifurcation point, also shown in Figure 1: what is the ratio of the probabilities of transition to the short-term incubation phase and to the long-term latent phase (p and (1 − p), respectively)? In our work, we assume that the transition to the latency phase is more likely. Comparison of the results obtained in the model with the course of the COVID-19 pandemic fully confirms this assumption a posteriori. The transition to the latency phase turns out to be three or more orders of magnitude more probable than the transition to the incubation phase, i.e., p ~ 10⁻³. Thus, in our model, we neglect the contribution from the incubation period to the spread of infection. Under these conditions, there are only two parameters that describe the development of infection in a person infected with the virus and are important for the further transmission of the virus. These are the probability p of the transition to a symptomatic state and the duration T of the latency period. Along with the conventional spread rate β, they form the three basic parameters of the model under consideration. Two-component Model: Random Decay Approach The two-component epidemic model is based on the following assumptions:  Depending on the effectiveness of the nonspecific immune response, the level of infection exceeds the safety threshold and becomes dangerous (with probability p) or does not exceed it and remains safe (with probability 1 − p). In the first case, adaptive/specific immunity comes into play and symptoms of the disease appear. In the second case, this does not happen, the infected person remains healthy, but the infection remains in the body for a long time in latent form. In the first case, there is a symptomatic, in the second an asymptomatic course of the infection;  When symptoms appear, the infected persons become isolated and cannot spread the infection. If they recover, they acquire ideal long-term immunity and cannot be infected anymore;  The asymptomatic infected persons do not become isolated and continue to spread the infection at some rate β. They remain active during the entire latency period T, until the virus is cleared from the body. The corresponding elimination rate is ~1/T. 
After removing the virus from the body, the asymptomatic infected persons return to the uninfected state. They do not acquire long-term immunity and can be reinfected;  Due to the appearance of long-term immunity, the transition to the symptomatic state is irreversible. On the contrary, due to the absence of long-term immunity, the transition to the asymptomatic state is reversible. This means that after the hidden infectious period, the infected person returns to his original susceptible state. These considerations lead to the epidemic pattern shown in Figure 2. It is like the conventional SIR model and differs from it in the meaning of the variables. A distinctive feature of the two-component model is the bifurcation point, at which the choice is made between the symptomatic state (disease) and the asymptomatic state (latent infection). Another feature is the presence of a closed SIS cycle of reinfection with the participation of the asymptomatic infected. The reason for the cycle is the absence of long-term immunity after asymptomatic infection. Neither of these two features exists in the conventional SIR model, which ignores the possibility of asymptomatic infection. The dynamic equations of the two-component model directly follow from its epidemic scheme (Equations 1-3). Note that in these equations, the elimination of viruses is a random decay process with a constant decay rate. We call this the random decay approach. An alternative approach using the virus lifetime T, in which Equation 2 looks different, will be considered in the next section. Since the share of recovered persons can only grow, the asymptotic behaviour of any solution as t → ∞ is that, over a sufficiently long time, the entire population must go through the disease and acquire long-term immunity. Like the SIR model [3], the two-component model has an exact parametric solution. In what follows, however, we will only be interested in the limit of a low probability of a symptomatic course, when p ≪ 1. Then, to calculate the daily incidence it is sufficient to use the exact analytical solution in the case p = 0, satisfying initial conditions in which a small share of the population is initially infected. As t → ∞, the share of asymptomatic carriers tends to a finite limit (Equation 6). In the limiting case p ≪ 1, the daily incidence is expressed through the shares of susceptible and asymptomatic persons, and its asymptotic value is given by Equation 8. The incidence can also pass through a maximum, whose value is given by Equation 9. The time course of the daily incidence depends on the ratio of the elimination rate to the spread rate, i.e., on 1/(βT). When 1/(βT) > 1/2, the incidence monotonously increases to the asymptotic value (8). At 1/(βT) ≤ 1/2, the incidence first passes through the maximum (9), and then decreases to the asymptotic value (8). It is this latter case, which is of most interest for what follows, that is shown in Figure 3. According to (9), the characteristic decay time of the epidemic cannot be less than the value given by Equation 10. Now we can compare the results obtained with the observed course of the COVID-19 pandemic. Since the case criteria change over time, the most accurate picture is given by daily mortality values. At the same time, the true incidence is obtained from mortality by dividing it by the fatality rate, which is currently 2.2% [4]. Fitting the model to these data yields the parameter values in China (Equation 12) and in the rest of the world (Equation 13). Thus, the pandemic data directly confirm the above assumption about the smallness of the pathogenicity parameter p, the value of which indeed does not exceed 10⁻³. 
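Since the display equations are cited above only by number, here is a minimal sketch of a system consistent with the scheme of Figure 2 and with the limits quoted above. The notation S, I, and R (shares of susceptible, asymptomatic infected, and symptomatic/immune persons) is ours and may differ from that of Equations 1-3:

dS/dt = −βSI + I/T,
dI/dt = (1 − p)βSI − I/T,
dR/dt = pβSI.

In this reading the daily incidence is dR/dt = pβSI, and setting p = 0 reduces the system to the logistic SIS equation dI/dt = βI(1 − I) − I/T, whose stationary share of carriers is 1 − 1/(βT), matching the equilibrium share of asymptomatic carriers quoted below.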
Together with the markedly lower pathogenicity parameter p in China, this suggests that a different virus strain became dominant as the epidemic spread across China compared to the rest of the world. According to the parameters found, this strain should be non-pathogenic and long-lived in the human body. Perets & Perets (2020) [8] already considered the hypothesis of the existence of a low-pathogenic virus in the prehistory of the COVID-19 epidemic as the cause of the abnormally low incidence in China. However, there it was assumed only as a temporary factor that led to the activation of adaptive immunity in the population. On the other hand, the existence of non-pathogenic strains of coronavirus is a well-known fact [9,10], and such a strain can be expected among the mutations of the original SARS-CoV-2 virus. Next, in accordance with Equation 6, the asymptomatic carriers are the absolute majority of the population. In the world beyond China their equilibrium share, 1 − 1/(βT), is about 80%, and in China it is close to 100%. It should be noted that this value does not have to be reflected in the test results if the PCR testing is tuned to a higher level of virus presence corresponding to the onset of the disease. Apparently, the largest contribution to the spread of the virus is made by healthy, younger infected persons with stronger innate immunity and higher mobility. Using the found values of the spread rate and the pathogenicity parameter, we estimate the expected duration of the pandemic by Equation 11 to be about 50 years. Naturally, this estimation ignores possible mutations of the virus and vaccination. Virus Lifetime and Waves of Incidence The scheme considered in the previous section treats virus elimination as a statistical process with a constant decay rate. A more detailed description considers the individual lifetime of the virus, determined by some survival function of time: the probability that the virus remains viable t days after entering the body of a healthy infected person. In this context, the term "healthy" means the action of only nonspecific innate immunity and the absence of the need for an adaptive immune response. This survival function must meet the obvious conditions of equalling 1 at t = 0 and tending to 0 as t → ∞. Its possible shapes are shown in Figure 6. In this approach, the differential Equation 2 of the system (1-3) is replaced by an integral one and becomes non-local. In the limiting case p ≪ 1 of interest to us, the system of equations is, however, significantly simplified: the daily incidence is directly related to the rate of new infections, and the share of asymptomatic carriers must be found from a single equation. If we restrict ourselves to the simplest case of a step dependence, in which the survival probability equals 1 up to the lifetime T and 0 afterwards (see Figure 6), then this equation reduces to a differential one with a retarded argument. This equation is easy to solve numerically. A remarkable feature of its solution is the appearance of incidence waves, as shown in Figure 7. These waves appear at large enough values of the dimensionless product βT. The meaning of the waves is obvious. The incidence is a consequence of cyclical re-infection. After the first wave of infection has passed, the number of susceptible people decreases, and hence the incidence. Over a time T, first-generation viruses begin to be eliminated and the proportion of susceptible persons increases. As a result, the epidemic process begins again, a second wave of re-infection occurs, and new infected appear, infected with second-generation viruses, etc. 
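The retarded-argument equation is straightforward to integrate directly. The sketch below is a minimal illustration of this wave mechanism under our reading of the model (p ≪ 1 and a step-like virus survival function); the parameter values and variable names are hypothetical, chosen only so that βT ≫ 1:

```python
import numpy as np

# Minimal Euler integration of a retarded-argument equation (our reading):
# dI/dt = beta*S(t)*I(t) - beta*S(t-T)*I(t-T), with S = 1 - I for p << 1.
# The cohort infected T days ago leaves the asymptomatic pool today.
beta, T = 0.5, 30.0      # spread rate (1/day) and virus lifetime (days); beta*T = 15
dt, days = 0.1, 400.0
n = int(days / dt)
lag = int(T / dt)

I = np.zeros(n)          # share of asymptomatic carriers
I[0] = 1e-4              # small initial seed of infection
new_inf = np.zeros(n)    # rate of new infections, beta*S*I

for k in range(n - 1):
    new_inf[k] = beta * (1.0 - I[k]) * I[k]
    removed = new_inf[k - lag] if k >= lag else 0.0
    I[k + 1] = I[k] + dt * (new_inf[k] - removed)

p = 1e-3                 # pathogenicity; daily incidence ~ p * new infections
incidence = p * new_inf  # successive maxima of this array are the waves
```

Plotting incidence against time reproduces the qualitative picture of Figure 7: a sequence of waves spaced roughly T apart that smooth into a plateau as βT drops toward 2.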
The simultaneous evolution of the share of asymptomatic infected and of the incidence in the wave-like solution is shown in Figure 8. As the viruses become eliminated, the number of asymptomatic infected decreases; instead, new susceptible persons arise and therefore the incidence increases. In this way the share of asymptomatic carriers and the incidence oscillate in antiphase. These fluctuations are possible because the change in the proportion of asymptomatic infections is reversible. The proportion of asymptomatically infected people is difficult to measure, since the level of infection may be below the sensitivity threshold of PCR tests, and the localization of viruses may be outside the nasopharyngeal mucosa. Nevertheless, this result can explain an inversion that is often observed during the COVID-19 epidemic, when, despite the decrease in the proportion of positive PCR tests, an increase in the incidence occurs. This happens when daily testing far exceeds the daily incidence. During this period, testing is mostly random, and the proportion of positive tests is roughly proportional to the share of those infected without symptoms. Thus, the two-component model explains the incidence waves by cycles of re-infection of the population, the period of which is determined by the time of virus elimination in the asymptomatic infected. A small proportion of those reinfected fall ill, which manifests as a wave of incidence. The wave-like solution gives a simple geometric meaning to all three parameters of the model, as shown in Figure 9. The virus lifetime T sets the distance between the waves, the width of the waves is inversely proportional to the spread rate β, and the area under the plot of one wave is directly proportional to the parameter p, which corresponds to the degree of pathogenicity of the virus. The waves are smoothed out if their width (~2/β) becomes of the order of the distance between them (T), i.e., at βT < 2. This tendency is visible in Figure 7. The same smoothing effect on the incidence waves is also caused by the lifetime form factor shown above in Figure 6. A similar averaging of waves occurs when adding data from different countries. In this case, the geographical factor acts [11], which can be seen in the example of the total data on the pandemic in the world (see Figure 7). Therefore, it is natural to assume that the averaging of waves in the total data of the pandemic shown in Figure 7 is caused not by a lower value of the spread rate, but by the addition of epidemic waves from many regions of the world. In any case, a high value of the spread rate is a necessary condition for the emergence of the incidence waves. Otherwise, the waves are smoothed out and transform into the plateau that appears in the random decay approach considered in the previous section (see Figure 3). The relationship between the virus lifetime T in the detailed lifetime approach and the elimination rate in the random decay approach is simple: the elimination rate equals 1/T. In our opinion, the most important member of the obtained family of wave solutions is the simplest single-wave solution corresponding to the βT → ∞ limit. We assume that it describes the epidemic situation in China (as shown in Figure 7). The absence of repeated waves indicates a high degree of stability of the dominant strain here (T → ∞), and an extremely low morbidity means its non-pathogenicity (p → 0). The question of the nature of such a non-pathogenic strain requires virological research. 
In our opinion, the most important member of the obtained family of wave solutions is the simplest single-wave solution corresponding to the $\tau \to \infty$ limit. We assume that it describes the epidemic situation in China (as shown in Figure 7). The absence of repeated waves indicates a high degree of stability of the dominant strain here ($\tau \to \infty$), and an extremely low morbidity means its non-pathogenicity ($p \to 0$). The question of the nature of such a non-pathogenic strain requires virological research. If detected, it seems reasonable to promote its spread in order to displace the less stable and more pathogenic strains that are shaping the current course of the global pandemic.

Modulation and Drift of Parameters

So far, we have assumed all three parameters of the model to be constant. However, over time, epidemic factors change, which leads to a change in the parameters. If the rate of this change is much lower than the spread rate $k$, then the same solutions can be used, simply by replacing the constant parameters with the corresponding functions of time that describe the drift of epidemic factors. The values of the parameters (12, 13) found above should be interpreted as averages over a certain period. However, these values may change in the future. Epidemic factors are influenced both by measures taken to reduce the incidence [2] and by changes in the properties of the virus itself due to mutations [12]. Virus mutations can affect all three parameters $k$, $\tau$ and $p$. In addition, the spread rate $k$ can be temporarily reduced by anti-epidemic measures, for example, during lockdown [2]. The pathogenicity of the virus depends on the average level of innate nonspecific immunity in the population. Typically, this level changes cyclically throughout the year, decreasing in winter and rising in summer. This seasonal modulation of immunity leads to flu epidemics in the fall and winter. Since, according to Equation 9, the maximum value of the incidence is $kp/4$, sanitary measures including lockdown, by reducing the spread rate $k$, do indeed reduce the incidence. However, the currently observed winter wave is much higher than the previous winter wave. This may be a consequence of an increase in the pathogenicity parameter $p$ due to a decrease in the average level of nonspecific immunity. If this is indeed the case, and the reason for this effect is lifestyle changes due to isolation, then we must take this into account and limit the use of such measures, because overly long lockdowns could cause a long-term negative drift in the average immunity level.
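As a minimal illustration of such slow drift, the first sketch can be rerun with a seasonally modulated pathogenicity (reusing its arrays `t` and `infect_rate`); the modulation depth and winter peak timing are assumptions, not fitted values.

```python
import numpy as np

# t and infect_rate come from the first sketch above
# p(t) = p0 * (1 + a*cos(2*pi*(t - t_peak)/365)), peaking in winter (illustrative values)
p0, a, t_peak = 0.01, 0.5, 0.0
p_t = p0 * (1.0 + a * np.cos(2.0 * np.pi * (t - t_peak) / 365.0))
incidence_seasonal = p_t * infect_rate   # p enters the incidence multiplicatively
```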
Discussion

The two-component model constructed in our work describes the interaction with infection as a fundamentally random process. The further nature of this process is determined soon after infection, at the point of bifurcation. After passing this point, the infected person either gets sick or becomes an asymptomatic carrier. It is worth noting that the virus infection plays a twofold role in the asymptomatic state. On the one hand, it can be transmitted further, which contributes to the development of the epidemic. On the other hand, it protects the asymptomatic infected from an immediate symptomatic infection, that is, from disease, since the innate immunity is already in a mode that keeps the infection at a safe level. In this way, during the latent period nonspecific innate immunity guarantees the same effective protection against disease as antibodies do. However, after elimination of the virus and re-infection, the process may go differently, and the person may get sick. Unlike protection by antibodies, the duration of which is determined by the time of their preservation, in this case the duration of protection is determined by the lifetime of the virus itself. The studies of Shaman & Galanti (2020) and Linde et al. (2009) [13,14] considered the idea of temporary protection from one virus by another virus in the case of two alternating epidemics with two different viruses. In our model there are not two different viruses; instead, we take into account two types of immune response to the same coronavirus. Of course, the probability of getting sick, as well as the two other parameters of the model, depend on the individual state of the immune system of each infected person. In this paper, we consider the simplest approximation, in which this dependence is ignored and the parameters of the model are taken as averages for the entire population. An apparently unique feature of the COVID-19 epidemic is that an infection level that is safe for the absolute majority of the population is unsafe for the population as a whole and leads to an epidemic spread of the disease. The reason for this is, on the one hand, an exceptionally low likelihood of a symptomatic course, and on the other hand, a high degree of infectiousness. The two-component model assumes that the vast majority of those infected are asymptomatic. However, any of them, having come out of this state and been re-infected, can get sick. Unlike a regular flu epidemic, in which the same people carry the virus and get sick, in the COVID-19 epidemic some people carry the virus and others get sick.

Conclusions

Based on the two-component model, we can draw the following conclusions.

• During the COVID-19 epidemic, the vast majority of the population is infected and is in a hidden asymptomatic phase. A small proportion of those infected become ill and go through the symptomatic phase;

• The COVID-19 virus is permanently circulating in the population, which manifests itself either in the form of a plateau of incidence or in the form of waves. After elimination of the virus from asymptomatic infected persons, they may become ill as a result of subsequent re-infection;

• The two-component model shows that the share of asymptomatic infected in the population oscillates in antiphase with symptomatic morbidity. This result explains the inversion often observed during the COVID-19 epidemic, when, despite a decrease in the share of positive PCR tests, an increase in morbidity occurs [4];

• Without taking any measures, the natural duration of the epidemic can be decades;

• The almost complete cessation of the COVID-19 epidemic in China indirectly indicates the wide circulation in China of a stable, non-pathogenic strain that arose at an early stage of the epidemic due to mutation. Its presence could provide effective protection against infection by the original pathogenic virus. If such a strain were found, it would be wise to use its targeted spread to effectively end the epidemic in other countries of the world, as has already happened in China. Since new pathogenic strains now keep appearing and vaccination faces a number of problems, the proposed method may turn out to be relevant. Therefore, virological research aimed at identifying the non-pathogenic strain is advisable right now.

Funding

The authors received no financial support for the research, authorship, and/or publication of this article.

Acknowledgements

We express our gratitude to Prof. Matteo Ferensby for fruitful discussion of the results, and to Dr. Daniel Genin for help in the use of Mathematica tools.

Ethical Approval

The manuscript does not contain experiments on animals and humans; hence ethical permission was not required.

Data Availability Statement

Data were obtained from "Our World in Data" and are available in a publicly accessible repository [4].
Conflict of Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Computationally efficient joint species distribution modeling of big spatial data

Abstract The ongoing global change and the increased interest in macroecological processes call for the analysis of spatially extensive data on species communities to understand and forecast distributional changes of biodiversity. Recently developed joint species distribution models can deal with numerous species efficiently, while explicitly accounting for spatial structure in the data. However, their applicability is generally limited to relatively small spatial data sets because of their severe computational scaling as the number of spatial locations increases. In this work, we propose a practical alleviation of this scalability constraint for joint species modeling by exploiting two spatial-statistics techniques that facilitate the analysis of large spatial data sets: Gaussian predictive process and nearest-neighbor Gaussian process. We devised an efficient Gibbs posterior sampling algorithm for Bayesian model fitting that allows us to analyze community data sets consisting of hundreds of species sampled from up to hundreds of thousands of spatial units. The performance of these methods is demonstrated using an extensive plant data set of 30,955 spatial units as a case study. We provide an implementation of the presented methods as an extension to the hierarchical modeling of species communities framework.

HMSC specification in matrix notation

The HMSC specification, provided in the Materials & Methods section of the main text, can also be written elegantly and rigorously in matrix form using the following notation.

Data objects in the model
• The $n_y \times n_s$ matrix $Y$ of the recorded species abundances/occurrences $y_{ij}$
• The $n_y \times n_c$ matrix $X$ of the covariates $x_{ik}$
• The $n_s \times n_t$ matrix $T$ of the species traits $t_{jl}$
• The $n_s \times n_s$ symmetric positive definite matrix $C$ of the species phylogenetic similarities
• The $n_y \times n_d$ matrix $S$ of the sites' coordinates $s_{id}$

Model parameters
• The $n_y \times n_s$ matrix $L$ of latent variables $l_{ij}$, standing for the location parameters of the data distribution
• The $n_c \times n_s$ matrix $B$ of the species responses $\beta_{kj}$ to the covariates
• The $n_c \times n_s$ matrix $M$ of the trait-expected species responses $\mu_{kj}$ to the covariates
• The $n_c \times n_t$ matrix $\Gamma$ of the impacts $\gamma_{kl}$ of trait values on the expected species responses to the covariates
• The $n_c \times n_c$ matrix $V$, standing for the covariance of responses to covariates across species that could not be attributed to available traits
• The scalar $\rho$, standing for the strength of the impact of phylogenetic similarity on similarity in responses to covariates
• The $n_f \times n_s$ matrix $\Lambda$ of latent factor loadings $\lambda_{hj}$
• The $n_y \times n_f$ matrix $H$ of the latent factors $\eta_{ih}$
• The $n_f \times 1$ vector $\alpha$ of the spatial ranges of the latent factors
• The $n_y \times n_s$ matrix $Z$ of latent liabilities $z_{ij} = l_{ij} + \epsilon_{ij}$, used for the implementation of various types of observational data through data augmentation. For theoretical grounds see e.g. Albert and Chib (1993) or Zhou et al. (2012); for usage in the HMSC context see e.g. Ovaskainen et al. (2016a)
• The $n_s \times n_s$ diagonal matrix $\Sigma$ of residual variances, with diagonal elements $\sigma_j^2$

Matrix-vector notation
We denote by $\mathrm{vec}(\cdot)$ the operator which stacks consecutive columns of a matrix on top of each other. We denote by small letters in bold font the vectors that are obtained by applying $\mathrm{vec}(\cdot)$ to the corresponding matrices, so that e.g. $\gamma = \mathrm{vec}(\Gamma)$, $\beta = \mathrm{vec}(B)$, $\lambda = \mathrm{vec}(\Lambda)$, $\eta = \mathrm{vec}(H)$, $l = \mathrm{vec}(L)$, $z = \mathrm{vec}(Z)$. A star in the superscript indicates that the transpose was applied first to the matrix, so e.g. $z^* = \mathrm{vec}(Z^{\mathsf T})$ and $\eta^* = \mathrm{vec}(H^{\mathsf T})$. We denote by $\mathbf{1}_{n \times m}$ the $n \times m$ matrix of ones, by $I_n$ the $n \times n$ identity matrix, by $\mathrm{tr}(A)$ the trace of the matrix $A$, by $\otimes$ the Kronecker product, and by $\circ$ the Hadamard product (the entry-wise product).
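As a quick sanity check of this notation, the following NumPy sketch (all names are ours) verifies the standard identity $\mathrm{vec}(AXB) = (B^{\mathsf T} \otimes A)\,\mathrm{vec}(X)$ that underlies the vectorised manipulations used in the derivations below.

```python
import numpy as np

rng = np.random.default_rng(0)
A, X, B = rng.normal(size=(3, 4)), rng.normal(size=(4, 5)), rng.normal(size=(5, 2))
vec = lambda M: M.flatten(order="F")      # column stacking, as in the text

lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)            # vec(A X B) = (B^T kron A) vec(X)
assert np.allclose(lhs, rhs)
```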
Distributions
• We denote by $N(\mu, \Sigma)$ the multivariate normal (Gaussian) distribution with mean $\mu$ and covariance matrix $\Sigma$.
• We denote by $W(V, \nu)$ the Wishart distribution with scale matrix $V$ and degrees-of-freedom parameter $\nu$, and by $W^{-1}(V, \nu)$ the inverse Wishart distribution with scale matrix $V$ and degrees-of-freedom parameter $\nu$. Thus if $V \sim W^{-1}(V_0, \nu)$, then $V^{-1} \sim W(V_0^{-1}, \nu)$.
• We denote by $\mathrm{Ga}(a, b)$ the Gamma distribution with shape $a$ and rate $b$, the parametrization common in Bayesian statistics, so that the distribution's mean is $a/b$.

The Hierarchical Model of Species Communities
We follow the generalized linear modelling paradigm and model the observations for the $j$-th species with a statistical distribution $D_j$ and link function $g_j$ that are compatible with the type of observed data for this species. Then

$\mathrm{E}[y_{ij}] = g_j^{-1}(l_{ij})$, $\mathrm{Var}[y_{ij}] = v(l_{ij}, \sigma_j)$, $L = XB + H\Lambda$.

We would like to mention that, while in this study we consider only one level of random factors in the model, corresponding to the level of single observational sites, the proposed model is trivially generalizable to more complex sampling designs with several hierarchical or crossed levels; see e.g. Ovaskainen et al. (2016a).

Role of traits and phylogeny
To improve the performance of the model with sparse data or rare species, as well as to exploit potentially available information on species-specific traits and phylogenetic relationships, we follow Ovaskainen et al. (2017) and impose a multivariate Gaussian conditional prior on the regression coefficients as $\beta = \mathrm{vec}(B) \sim N(\mathrm{vec}(M), \Theta)$. The matrix $M \in \mathbb{R}^{n_c \times n_s}$ consists of the elements $\mu_{kj}$ that describe the expected response of species $j$ to covariate $k$. The expected response of species $j$ to covariate $k$ is modeled based on this species' traits as $\mu_{kj} = \sum_{l=1}^{n_t} \gamma_{kl} t_{jl}$, where $t_{jl}$ is the value of trait $l$ for species $j$ (with $t_{j1} = 1$ modeling the intercept), and the parameter $\gamma_{kl}$ measures the effect of trait $l$ on the expected response to covariate $k$. The matrix $\Theta$ models the variation of the responses of individual species around the trait-based expectation as $\Theta = [\rho C + (1 - \rho) I_{n_s}] \otimes V$, where $\otimes$ denotes the Kronecker product, the positive-definite matrix $V \in \mathbb{R}^{n_c \times n_c}$ models the dependency between responses to different covariates, the parameter $0 \leq \rho \leq 1$ determines the impact of phylogenetic relationships on species responses to the covariates, and the matrix $C \in \mathbb{R}^{n_s \times n_s}$ is a phylogenetic correlation matrix, which is assumed to be known prior to the analysis. The model can be applied without trait data by including the intercept as the only species trait, and it can be applied without phylogenetic data by fixing $\rho = 0$.
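To make the Kronecker-structured prior concrete, here is a small NumPy sketch, with our own illustrative shapes and placeholder matrices rather than the package code, that draws $B$ from $N(\mathrm{vec}(M), [\rho C + (1-\rho)I_{n_s}] \otimes V)$ via the equivalent matrix-normal form and then assembles the linear predictor $L = XB + H\Lambda$.

```python
import numpy as np

rng = np.random.default_rng(1)
n_y, n_s, n_c, n_f = 200, 50, 4, 5         # sites, species, covariates, latent factors
rho = 0.5

C = np.eye(n_s)                            # placeholder phylogenetic correlation matrix
V = np.eye(n_c)                            # covariance of responses across covariates
M = np.zeros((n_c, n_s))                   # trait-based expected responses

# vec(B) ~ N(vec(M), P kron V) is equivalent to the matrix-normal draw
# B = M + L_V Z L_P^T, with P = rho*C + (1-rho)*I, L_V L_V^T = V, L_P L_P^T = P,
# and Z filled with iid N(0,1) entries; no n_c*n_s covariance matrix is ever formed.
P = rho * C + (1.0 - rho) * np.eye(n_s)
L_V, L_P = np.linalg.cholesky(V), np.linalg.cholesky(P)
B = M + L_V @ rng.normal(size=(n_c, n_s)) @ L_P.T

X = rng.normal(size=(n_y, n_c))            # covariates
H = rng.normal(size=(n_y, n_f))            # latent factors
Lam = rng.normal(size=(n_f, n_s))          # factor loadings
L = X @ B + H @ Lam                        # HMSC linear predictor
```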
Priors
Here we list the families of the priors that we assign, which are essential for our sampling algorithm. We also provide the "default" values, which correspond to the weakly informative prior recommended for practical use when no or very little additional information is available a priori (applied with normalized input data).
• The prior for those diagonal elements of $\Sigma$ that are not fixed by the selected data distribution $D_j$ is $\sigma_j^{-2} \sim \mathrm{Ga}(a_j, b_j)$. The prior parameters are the scalars $a_j$ and $b_j$; their default values are $a_j = 1$ and $b_j = 0.3$.
• For the latent loadings we assume the multiplicative gamma process shrinkage prior suggested by Bhattacharya and Dunson (2011), in which the degree of shrinkage increases with the factor number: $\lambda_{hj} \sim N(0, \phi_{hj}^{-1} \tau_h^{-1})$, $\phi_{hj} \sim \mathrm{Ga}(\nu/2, \nu/2)$, $\tau_h = \prod_{r=1}^{h} \delta_r$, with $\delta_1 \sim \mathrm{Ga}(a_1, b_1)$ and $\delta_r \sim \mathrm{Ga}(a_2, b_2)$ for $r \geq 2$ (a sampling sketch of this prior is given below, after the discussion of factor-number selection).
• The authors of this method proposed that the parameter $\nu$ is fixed to $\nu = 3$ and the parameters $b_1$ and $b_2$ are fixed to $b_1 = b_2 = 1$. The parameters $a_1$ and $a_2$ are selected by the user to define the level of shrinkage, with $a_1$ tuning the basic level and $a_2 > 1$ the increase in shrinkage with increasing factor number. The default values are $a_1 = a_2 = 5$. In the case of many species with sparse data, it may be useful to increase these parameters to increase shrinkage (and thus decrease the estimated number of latent factors), although the original study claims that the method is quite robust to the choice of these hyperparameters.
• If the latent factors are not spatially structured, the prior for $H$ is $\eta \sim N(0, I_{n_f} \otimes I_{n_y})$. If the latent factors are spatially structured, the prior for $H$ is $\eta \sim N(0, K_\alpha)$, where $K_\alpha$ is the block-diagonal matrix $K_\alpha = \mathrm{diag}(K_{\alpha_1}, \ldots, K_{\alpha_{n_f}})$.

Selecting the number of latent factors
Determining the appropriate number of latent factors is fundamental for the HMSC model specified above. While in certain cases this number can be obtained from an informed expert guess, generally such information is unavailable before the analysis. Some previous works have suggested methods for a proper Bayesian treatment of $n_f$, estimating it during the MCMC sampling (Lopes and West 2004). However, such a scheme requires changing the dimension of the parameter space during sampling, and the proposed reversible-jump MCMC does not scale well. Instead, Bhattacharya and Dunson (2011) devised a formulation of the infinite factor model, where the number of latent factors is assumed to be infinite, but the loadings of the higher factors are shrunk by the multiplicative gamma process prior. In practice, the authors proposed to use adaptive tuning of $n_f$ during the warm-up phase of the MCMC scheme, based on discarding the latent factors whose loadings do not exceed a certain pre-defined threshold. This method was implemented in the HMSC implementation presented in Ovaskainen et al. (2017) and consequently is used in this work. However, our experience with the HMSC model indicates that with an insufficient adaptation period or too severe shrinkage the estimated number of latent factors can be suboptimal, while with too mild shrinkage the estimated number of factors can be unnecessarily high. The latter generally does not affect the quality of the model fit, due to the originally infinite number of factors assumed, but it is undesirable in spatial models, since it dramatically increases the computational load. An even more robust, although numerically very costly, scheme for estimating the proper number of latent factors is to iteratively run several instances of the HMSC model in a cross-validation manner while varying the number of latent factors. Then the model with the best cross-validation predictive performance is likely to contain a number of latent factors close to the truly optimal one (which may depend on the CV fold splitting strategy).
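As referenced in the prior list above, a minimal sketch of drawing loadings from the multiplicative gamma process shrinkage prior; the shapes and the random seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_f, n_s = 10, 50                          # factors, species (illustrative)
a1, a2, b1, b2, nu = 5.0, 5.0, 1.0, 1.0, 3.0

delta = np.concatenate([rng.gamma(a1, 1.0 / b1, size=1),
                        rng.gamma(a2, 1.0 / b2, size=n_f - 1)])
tau = np.cumprod(delta)                    # global precisions grow with factor index
phi = rng.gamma(nu / 2.0, 2.0 / nu, size=(n_f, n_s))   # local precisions
Lam = rng.normal(size=(n_f, n_s)) / np.sqrt(phi * tau[:, None])

print(Lam.std(axis=1))                     # higher-numbered rows are shrunk toward zero
```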
Approximate Gaussian process priors for latent factors for big spatial data

Gaussian predictive process
The Gaussian predictive process (GPP), denoted $\tilde{w}(s)$, is constructed from the values of the original GP $w(s)$ at $m$ 'knot' locations $S^* = \{s_1^*, \ldots, s_m^*\}$. The value of the GPP at any site $s_0$ is given by

$\tilde{w}(s_0) = \mathrm{E}[w(s_0) \mid w^*] = k_{0*}^{\mathsf T} K_{**}^{-1} w^*$,

where $w^* = [w(s_1^*), \ldots, w(s_m^*)]^{\mathsf T}$ denotes the vector of the original GP values at the knot locations $S^*$, $K_{**} = [k(s_i^*, s_j^*)]_{i,j = 1 \ldots m}$, and $k_{0*} = [k(s_0, s_1^*), \ldots, k(s_0, s_m^*)]^{\mathsf T}$. With this definition it follows that $\tilde{w}$ is itself a GP, which is non-stationary but factorizable (Banerjee et al. 2008). As mentioned in the main text, our interpretation of the covariance matrix $\Omega$ assumes that the marginal prior distributions of the latent factors are standard normal. However, the GPP fails to satisfy this, since its marginal variance generally decreases with increasing distance from the knot set $S^*$. To circumvent this misbehavior, we followed Finley et al. (2009) and applied a correction to the marginal prior variance of the GPP, so that it always equals that of the original GP; therefore the resulting $\tilde{K}_\alpha$ is a valid correlation matrix. As far as we are aware, the model most similar to GPP-augmented HMSC was proposed by Ren and Banerjee (2013), where the authors also coupled GPP with factor modelling for the analysis of multivariate environmental data under the assumption of Gaussian noise.

Nearest Neighbor Gaussian process
The Nearest Neighbor Gaussian Process (NNGP) builds upon a special sparse approximation of the GP precision matrix that is related to the conditional representation of the original GP (Datta et al. 2016b). Given a specified ordering over the set of sites $S = \{s_1, \ldots, s_{n_y}\}$, the process $w(s) \sim \mathrm{GP}(0, k(\cdot, \cdot))$ over this set corresponds to the multivariate Gaussian distribution $w = [w(s_1), \ldots, w(s_{n_y})]^{\mathsf T} \sim N(0, K)$, which can be specified in a conditional manner:

$w_1 \sim N(0, K_{11})$, $(w_i \mid w_j, j < i) \sim N(\mu_i, d_i)$ for all $i \in 2 \ldots n_y$,

where $A$ is the strictly lower triangular matrix with elements $a_{ij}$ such that $\mu_i = \sum_{j < i} a_{ij} w_j$, and $D$ is the diagonal matrix with elements $d_i$. The nearest-neighbor approach approximates the conditional distribution $(w_i \mid w_j, j < i) \sim N(\mu_i, d_i)$ by conditioning only on the at most $m$ preceding closest neighbors of $s_i$:

$(w_i \mid w_j, j < i) \approx (w_i \mid w_j, j \in N_m(i))$,

where $N_m(i)$ is the subset of $\{1, \ldots, i-1\}$ of size $m' = \min(m, i-1)$ that contains the indices of the at most $m$ closest neighbors of $s_i$. This results in adjusted formulas with a sparse matrix $\tilde{A}$, whose non-zero elements are obtained via the expressions above. Hence the precision matrix $\tilde{K}^{-1} = (I - \tilde{A})^{\mathsf T} \tilde{D}^{-1} (I - \tilde{A})$ is also sparse, with $O(n_y m^2)$ non-zero entries. Another crucial property of this matrix is that all its non-zero elements tend to be close to the diagonal (the exact measure of how far the non-zero elements can be from the diagonal depends on the coordinates of the sites and the selected ordering; the practical advice is that the ordering should be selected to minimize it in order to enhance performance). This implies that the Cholesky decomposition $LL^{\mathsf T} = \tilde{K}^{-1} + D_0$, where $D_0$ is a diagonal matrix with non-negative elements, is also sparse.
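The following sketch, our own minimal NumPy/SciPy construction rather than the package code, builds the sparse NNGP precision $(I - \tilde A)^{\mathsf T} \tilde D^{-1} (I - \tilde A)$ for a given site ordering and an assumed isotropic exponential correlation function.

```python
import numpy as np
import scipy.sparse as sp

def nngp_precision(coords, m, corr):
    """Sparse NNGP precision (I - A)^T D^{-1} (I - A); sites assumed pre-ordered
    (e.g., by longitude, as in the text). corr(d) maps distance to correlation."""
    n = coords.shape[0]
    A = sp.lil_matrix((n, n))
    d = np.empty(n)
    d[0] = 1.0                                          # corr(0) = 1
    for i in range(1, n):
        dist = np.linalg.norm(coords[:i] - coords[i], axis=1)
        idx = np.argsort(dist)[:min(m, i)]              # preceding nearest neighbors
        K_nn = corr(np.linalg.norm(coords[idx][:, None] - coords[idx][None, :], axis=-1))
        k_in = corr(dist[idx])
        a = np.linalg.solve(K_nn, k_in)                 # conditional weights
        A[i, idx] = a
        d[i] = 1.0 - k_in @ a                           # conditional variance
    IA = (sp.eye(n) - A).tocsr()
    return IA.T @ sp.diags(1.0 / d) @ IA

rng = np.random.default_rng(4)
coords = rng.uniform(size=(500, 2))
coords = coords[np.argsort(coords[:, 0])]               # order sites west-to-east
Q = nngp_precision(coords, m=10, corr=lambda d: np.exp(-d / 0.3))
print(Q.nnz, "non-zeros vs", 500 * 500, "dense entries")
```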
The enhanced computational efficiency of the NNGP method is achieved through the decreased cost of sparse matrix operations compared to their dense counterparts. A detailed review of how the sparsity of NNGP can be harnessed for the numerical speed-up of Bayesian inference is given in Finley et al. (2019). Recently, Taylor-Rodriguez et al. (2018) proposed a similar blend of NNGP and latent factors to build a two-stage probabilistic model linking together areal LiDAR data and forest inventory observations. However, the sequential Gibbs updater for latent factors implemented in that work is principally different from our block implementation, which follows the original note on using sparse Cholesky factorization by Datta et al. (2016a). One practical challenge related to NNGP is that the approximation is non-invariant with respect to the selected ordering of the set of locations. Vecchia (1992) and Stein et al. (2004) asserted that similar conditional approximations are insensitive to the ordering. Datta et al. (2016b) conducted a numerical experiment demonstrating that the results are practically invariant to the ordering choice, in terms of root mean square predictive errors, when the ordering is selected along any direction in the spatial coordinate space. However, the maximum-minimum distance ordering recently proposed by Guinness (2018) resulted in a substantially lower Kullback-Leibler (KL) divergence of the approximation, compared to the geographical gradient ordering. Further, in the multivariate case there is a possibility that the minimum posterior KL divergence between the original GP-based HMSC model and the NNGP-based approximation could be achieved using different orderings for different latent factors. Therefore, to keep the focus on the practical multivariate ecological spatial data analysis specifics of our study, we left the ordering comparisons to other studies and in the presented experiments always ordered the sites according to their longitude, from West to East.

Gibbs MCMC sampling algorithm
We extended the Gibbs posterior sampling algorithm for the standard HMSC parameters to the case when the latent factors $\eta_{ih}$ are assigned Gaussian predictive process or Nearest Neighbor Gaussian process approximations of their original full Gaussian process prior. We implemented the extended sampling algorithm in the Matlab version of the HMSC package. Here we only present those steps of the overall sampling scheme that differ from previously published works (Bhattacharya and Dunson 2011, Ovaskainen et al. 2016a, Ovaskainen et al. 2016b), namely the full-conditional updaters for $H$ and $\alpha$.

Gaussian predictive process
Full-conditional updater for $H$
If the latent factors are assigned the GPP prior, then the full-conditional distribution for $\eta$ is multivariate Gaussian. However, the direct calculation does not bring any computational advantage compared to using latent factors assigned the full GP prior. Instead, $\eta$ can be drawn from expressions that exploit the low-rank knot structure (see Lemma 1 for details on the computations).

Nearest Neighbor Gaussian process
Full-conditional updater for $H$
If the latent factors are assigned the NNGP prior, then the full-conditional distribution for $\eta$ is

$(\eta \mid -) \sim N(\mu_\eta, U_\eta)$, $U_\eta^{-1} = \tilde{K}_\alpha^{-1} + (\Lambda \Sigma^{-1} \Lambda^{\mathsf T}) \otimes I_{n_y}$, $\mu_\eta = U_\eta \, \mathrm{vec}\big((Z - XB) \Sigma^{-1} \Lambda^{\mathsf T}\big)$.

Although the precision matrix $U_\eta^{-1}$ is sparse, neither its inverse nor its Cholesky decomposition is necessarily sparse. This effectively negates all potential computational benefits from the sparsity of $\tilde{K}_\alpha^{-1}$. So, instead of sampling $\eta$ directly, we obtain it as a permutation of $\eta^*$, which can be sampled from

$(\eta^* \mid -) \sim N(\mu_{\eta^*}, U_{\eta^*})$, $U_{\eta^*}^{-1} = P \tilde{K}_\alpha^{-1} P^{\mathsf T} + I_{n_y} \otimes (\Lambda \Sigma^{-1} \Lambda^{\mathsf T})$, $\mu_{\eta^*} = U_{\eta^*} \, \mathrm{vec}\big(((Z - XB) \Sigma^{-1} \Lambda^{\mathsf T})^{\mathsf T}\big)$,

where $P$ is the permutation matrix that transforms $\eta$ to $\eta^*$: $\eta^* = P\eta$. Now the $U_{\eta^*}^{-1}$ matrix has a special structure: if considered as a block matrix with $n_y \times n_y$ blocks of size $n_f \times n_f$, all of its non-zero blocks are located at the same places as the non-zero elements of the $\tilde{K}_{\alpha_h}^{-1}$ matrices. So the non-zero elements of $U_{\eta^*}^{-1}$ are in the proximity of the diagonal, which allows for a sparse Cholesky factorization $U_{\eta^*}^{-1} = L_{\eta^*} L_{\eta^*}^{\mathsf T}$. The exact number of non-zero elements depends on the configuration of the sites and the ordering, but it can be shown that for sites located at the vertices of a uniform square grid the number of non-zero elements is of order $O(m n_f^2 n_y)$. The associated computational cost is mainly due to the sparse Cholesky decomposition and a double left division of an $n_y n_f$-length vector by an $n_y n_f \times n_y n_f$ sparse triangular matrix. In our Matlab implementation of this updater we use Matlab's implementation of the Cholesky factorization, the function chol(). This function also provides an option to internally perform an approximate minimum degree (AMD) permutation in order to obtain a sparser Cholesky factor. Unfortunately, we cannot report the exact algorithm that is used there, since it is not publicly disclosed. We propose to keep this option on, since in our numerical experiments it allowed for an approximately 10-15% additional speed-up of the updater, although it is likely that for certain site configurations it would only generate unnecessary minor overhead (e.g. when all sites are on a single straight line). As a side remark, we would like to mention that applying this function directly to $U_\eta^{-1}$ does not produce any reasonably sparse Cholesky factor, due to the heuristic nature of the AMD algorithm; hence the block-ordered permutation to $U_{\eta^*}^{-1}$ is indeed essential.
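A sketch of this permutation-based sampling: drawing from a Gaussian full conditional with a sparse precision via a permuted sparse Cholesky factor. We use the scikit-sparse CHOLMOD bindings as a Python stand-in for Matlab's chol(); the package choice and the toy tridiagonal precision are our assumptions, not the authors' implementation.

```python
import numpy as np
import scipy.sparse as sp
from sksparse.cholmod import cholesky    # CHOLMOD sparse Cholesky (assumed available)

def sample_gaussian_sparse(Q, b, rng):
    """Draw x ~ N(mu, Q^{-1}) with mu = Q^{-1} b, for a sparse SPD precision Q.
    CHOLMOD internally applies a fill-reducing permutation P with P Q P^T = L L^T."""
    factor = cholesky(Q.tocsc())
    mu = factor(b)                       # solves Q mu = b
    z = rng.normal(size=b.shape[0])
    # x = P^T L^{-T} z has covariance Q^{-1}; undo CHOLMOD's permutation at the end
    x = factor.apply_Pt(factor.solve_Lt(z, use_LDLt_decomposition=False))
    return mu + x

# toy usage: a tridiagonal (diagonally dominant, hence SPD) precision matrix
n = 1000
Q = sp.diags([4.0] * n) + sp.eye(n, k=1) * (-1.0) + sp.eye(n, k=-1) * (-1.0)
draw = sample_gaussian_sparse(Q, np.ones(n), np.random.default_rng(6))
```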
Full-conditional updater for $\alpha$
Conditional on $H$, the spatial range parameters $\alpha_h$ are independent for $h = 1 \ldots n_f$ and can be sampled one by one from the grid of prior values with probabilities proportional to their conditional posteriors. We follow the sampling scheme presented in Ovaskainen et al. (2016b), but exploit the special structure of the NNGP-induced covariance matrices $\tilde{K}_{\alpha_h}$. For each latent factor this scheme requires, for each grid value of $\alpha$, a calculation of the quadratic form $\eta_{\cdot h}^{\mathsf T} \tilde{K}_{\alpha}^{-1} \eta_{\cdot h}$, and additionally calculations of the determinants $|\tilde{K}_\alpha|$, which are shared among the latent factors.

For the GPP prior, in the case of large $n_y$ direct sampling of $\eta$ from its full-conditional distribution is problematic due to the need to invert and decompose the large dense matrix $U_\eta^{-1}$. Instead, the special form of the GPP covariance $K = K_{12} K_{22}^{-1} K_{21} + D$ can be exploited for enhanced performance. First, using the Woodbury identity,

$K^{-1} = (K_{12} K_{22}^{-1} K_{21} + D)^{-1} = D^{-1} - D^{-1} K_{12} (K_{22} + K_{21} D^{-1} K_{12})^{-1} K_{21} D^{-1}$.

Then, denoting $F = K_{22} + K_{21} D^{-1} K_{12}$ and $Q = (\Lambda \Sigma^{-1} \Lambda^{\mathsf T}) \otimes I_{n_y} + D^{-1}$, the expression can be further expanded with the Woodbury identity. The matrix $G = F - K_{21} D^{-1} Q^{-1} D^{-1} K_{12}$ has size $m n_f \times m n_f$, and its inverse $G^{-1}$ and its Cholesky decomposition can be easily computed as long as the number of Gaussian predictive process knots is relatively small. The inverse matrix $Q^{-1}$ can also be calculated effectively due to its special structure.
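A compact sketch of the first Woodbury step above (our own code; $K_{12}$, $K_{22}$ and $D$ follow the notation in the text, with $K_{21} = K_{12}^{\mathsf T}$): applying $K^{-1}$ to a vector at $O(n_y m^2 + m^3)$ cost, i.e., linearly in the number of sites.

```python
import numpy as np

def gpp_precision_apply(D_diag, K12, K22, x):
    """Apply K^{-1} x for K = K12 K22^{-1} K12^T + diag(D_diag) via the Woodbury identity:
       K^{-1} = D^{-1} - D^{-1} K12 (K22 + K12^T D^{-1} K12)^{-1} K12^T D^{-1}."""
    Dinv_x = x / D_diag
    F = K22 + K12.T @ (K12 / D_diag[:, None])          # small m x m inner matrix
    return Dinv_x - (K12 @ np.linalg.solve(F, K12.T @ Dinv_x)) / D_diag

rng = np.random.default_rng(5)
n, m = 2000, 30                                        # sites, knots (illustrative)
K12 = rng.normal(size=(n, m))
K22 = K12.T @ K12 / n + np.eye(m)                      # any SPD m x m matrix
D_diag = rng.uniform(0.5, 1.5, size=n)
x = rng.normal(size=n)

y = gpp_precision_apply(D_diag, K12, K22, x)
K = K12 @ np.linalg.solve(K22, K12.T) + np.diag(D_diag)
assert np.allclose(y, np.linalg.solve(K, x))           # matches the dense computation
```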
This has a numerical complexity of $O(\max(m n_f^2 n_y, m^3 n_f^3))$ flops, thus scaling linearly with the number of sites.

Posterior predictive distribution
Once the draws from the posterior of the HMSC model parameters have been acquired, they can be used for making predictions at any location where the values of the covariates are known. We denote the desired prediction location by $s_0$, the set of covariates included in the fixed-effects component of the HMSC model at that location by $x_0$, the vector of predicted outcomes by $y_0$, and the set of all model parameters, specified in the corresponding section above, by $\theta$. The block-diagonal structure of the latent factor priors enables obtaining conditional samples of the joint $\eta_0$ via sampling from univariate conditional distributions for the different latent factors $h$. This allows retaining a linear asymptotic complexity of the predictive distribution with respect to the number of latent factors $n_f$ in the HMSC. The formulas for efficient sampling from the univariate conditional variance-corrected GPP and NNGP distributions follow the original strategies (Finley et al. 2009, Datta et al. 2016a).

Details on Australian plants case study
The data originate from the Victorian Biodiversity Atlas (VBA) (https://www.environment.vic.gov.au/biodiversity/victorianbiodiversity-atlas), a state database that collaborates with the Atlas of Living Australia (http://www.ala.org.au). The subset of the VBA used in this study involves the occurrences of 1237 herbaceous species at 30,955 sampling locations within the State of Victoria, Australia (Fig. 2A), for which presence-absence was recorded. The data were collected in the years 1984-2014 on sampling plots of 3900 m². The number of unique survey teams involved in the collection of these data is not known accurately, but is in the order of 200-300. The dataset combines survey data undertaken for a range of purposes, the predominant ones being:
1. Ecosystem inventory, circumscription and mapping
2. Characterizing the habitats of species of management interest
3. Documenting and describing land subject to development or land-use change
Consequently, the data are biased towards sampling public lands, typically less suitable for agriculture, and peri-urban areas. We selected four environmental covariates that were considered potentially important to vegetation and plant distribution and were not strongly correlated. These measure:
1. Climatic conditions - mean maximum temperature in January (the hottest and driest month in south-eastern Australia), developed using ANUCLIM (Houlder et al. 2000). See Appendix S1: Fig. S1A.
2. Hydrology and landscape position - a summed and normalized set of 'vertical distance above stream' calculations (Conrad et al. 2015) for seven different channel networks, each of which satisfies seven separate, monotonically increasing flow accumulation thresholds (based on catchment size weighted by catchment rainfall). See Appendix S1: Fig. S1B.
3. Soil properties - here we used the radioelement count of thorium as a general proxy for soil type. Radiometric data are related to soil depth, soil texture and nutrition, particularly in surficial landscapes. See Read et al. (2018) and Appendix S1: Fig. S1C.
4. Solar radiation and anisotropic heating - these data have been derived from the transformation of a digital elevation model to indicate the relative level of terrain illumination when the sun is at 270 degrees (North-West) and 40 degrees above the horizon.
We also included available information on 9 species traits as binary indicator variables, describing whether the species (1) is annual or perennial, (2) is pollinated by abiotic or biotic means, (3-4) has propagules that are dispersed by wind, invertebrates, or another agent, (5) forms a seed bank that typically persists for two or more years, and is considered vulnerable to or tolerant of (6) fire, (7) prolonged snow cover, (8) protracted waterlogging, or (9) salinity. These traits were selected from a much larger list of expert-provided traits that potentially govern the species distribution in the studied community. The particular choice of those included in the model was governed by (a) the amount of collinearity between the different available traits and (b) the availability of trait values for all the studied species.

Details on model convergence
As described in the main text, we fitted all models with 10,000 MCMC steps, out of which we discarded the first 2,000 steps as burn-in. We thinned the remaining samples by 10, resulting in 800 posterior draws. To examine the convergence of the MCMC chains, we repeated the model fitting 40 times, randomly selecting initial parameter values from the prior distribution. We assessed the quality of mixing by calculating the effective sample sizes (ESS) and potential scale reduction factors (PSRF) for the model parameters (Gelman and Rubin 1992). However, as the prior of Bhattacharya and Dunson (2011) leads to non-identifiable parameters $H$ and $\Lambda$, and the number of unique entries in $\Omega = \Lambda^{\mathsf T} \Lambda$ is extremely high for the model fitted to all data, we restricted our calculations to the $B$ parameters and a randomly chosen 40 × 40 symmetric submatrix of the association matrix $R = \mathrm{cov2cor}(\Omega + I_{n_s})$, as those are fundamental in ecological applications and generally representative of the overall model mixing. Hence, for each variant of the fitted model we stacked the 40 chains, each containing 800 draws of the $B$ and $R$ parameters, and calculated the ESS and PSRF with the effectiveSize() and gelman.diag() functions implemented in the coda R package (a Python sketch of these diagnostics is given at the end of this subsection). To reduce the still enormously high number of quantities, we calculated a single quantity for each model variant: the 5% quantile of the ESS and the 95% quantile of the PSRF point estimates among the $\beta_{kj}$ or $R_{jj'}$ parameters in this model variant. The resulting values are summarized in Appendix S1: Fig. S2, Fig. S3, Fig. S4, Fig. S5.

Figure S1. Spatial distribution of the environmental covariates included in the model: average temperature in January, normalized vertical water proximity, and radioelement count of thorium.

As is clearly seen from the visualization of the effective sample sizes and potential scale reduction factors, the algorithm with the selected number of steps and thinning demonstrated generally adequate mixing for numbers of training sites up to 1600 and got significantly worse when the number of sites increased further. The drop is especially pronounced for the elements of the association matrix $R_{jj'}$. On the other hand, the number of species exhibited the opposite impact: with a higher number of species the effective sample sizes were generally higher than with a low number of species. Perhaps somewhat unexpectedly, the mixing in model fits with a high number of training sites became insufficient also for the non-spatial model.
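As referenced above, a minimal sketch of the two diagnostics on synthetic chains of the dimensions used in the study; we use the Python arviz package as a stand-in for coda's effectiveSize() and gelman.diag().

```python
import numpy as np
import arviz as az   # Python stand-in for coda's effectiveSize()/gelman.diag()

rng = np.random.default_rng(3)
chains = rng.normal(size=(40, 800))   # 40 chains x 800 draws for one scalar parameter

ess = az.ess(chains)                  # effective sample size
psrf = az.rhat(chains)                # potential scale reduction factor (R-hat)
print(float(ess["x"]), float(psrf["x"]))
```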
These results suggest that, to keep the results of the Bayesian analysis properly valid (especially concerning uncertainty quantification), the number of samples or the thinning for models fitted to large data must be increased, which raises the right-hand parts of the expected computation times shown in Fig. 1A-F. Despite the fact that the bigger models indicated insufficient mixing, in terms of predictive performance our results were quite stable between different chains and qualitatively repeated the results averaged over the chains. This provides an empirical confirmation that the predictive performance is expected to behave as reported in Fig. 1 and would not considerably depend on the initial starting positions of the MCMC chains. Appendix S1: Fig. S6 and Fig. S7 extend Fig. 1GHI and Fig. 1JKL, respectively, by presenting the predictive measures calculated for each of the independent chains. The visualization of the performance distribution for each model variant is constructed using the beanplot function, with standard smoothing parameters, from the beanplot R package. Based on the complications with mixing that we encountered when fitting models to the larger datasets, we would like to conclude that the block Gibbs sampling algorithm presented by Ovaskainen et al. (2017) is insufficiently efficient for modelling big datasets, at least when the outcomes are binary. One potential bottleneck is due to the known inefficiencies of the data augmentation of Albert and Chib (1993) that is used for dealing with binary data, which leads to slow mixing for unbalanced outcomes whose probability is close to zero or one. The fundamental problem comes from the great mismatch between the marginal posterior and the conditional distribution given the augmented data. Some recent work has been conducted aiming to deal efficiently with this issue, specifically to "widen" the conditional distribution at the cost of introducing a rejection probability. However, its utility has been demonstrated on significantly simpler models, and a transfer of those results to HMSC has not been devised yet. Another opportunity, which currently seems more promising in our opinion, is to investigate how the marginal representation of HMSC's latent liabilities as a Gaussian process could be coupled with approximate methods for dealing with non-Gaussian observations (e.g. Laplace approximation, expectation propagation, variational inference). While the resulting GP would be of dimension $n_y n_s$, which in the case of our biggest training dataset is over $1.6 \cdot 10^7$ and prohibits a full GP fitting approach, the special structure of the GP covariance matrix induced by the HMSC structure provides opportunities for much more efficient solutions. For the GPP model, such a method would have the same flavor as the approximate inference methods for Gaussian process regression/classification (Hensman et al. 2015). For the NNGP model, such approaches seem conceptually more challenging, as the matrix sparsity there is tricky to utilize in combination with other marginalized model components. Thus, we believe that the first steps in that direction should be the development of an extension of the collapsed NNGP method proposed by Finley et al. (2019), which would additionally marginalize out the fixed effects and be applicable to non-Gaussian residuals.
To sum up, we would like to mark this research question as a potential area of interest for statisticians and machine learning researchers specializing in developing methods for multivariate Bayesian data analysis.

Figure S2. Visualization of effective sample sizes for covariate responses based on multiple fitted independent chains for each of the model variants. The numbers depict the 5% quantiles among the $\beta_{kj}$ parameters.
Figure S3. Visualization of effective sample sizes for association matrices based on multiple fitted independent chains for each of the model variants. The numbers depict the 5% quantiles among the off-diagonal $R_{jj'}$ parameters.
Figure S4. Visualization of PSRF for covariate responses based on multiple fitted independent chains for each of the model variants. The numbers depict the 95% quantiles among the $\beta_{kj}$ parameters.
Figure S5. Visualization of PSRF for association matrices based on multiple fitted independent chains for each of the model variants. The numbers depict the 95% quantiles among the off-diagonal $R_{jj'}$ parameters.
Figure S6. Extension of Fig. 1EFG
NOVEL BIOINFORMATICS APPROACH DETECTS HUNDREDS OF PREVIOUSLY UNDETECTED SPLICED TRANSCRIPTS DISCOVERED FROM CAENORHABDITIS ELEGANS GENOME

CONTEXT: With the completion of the genome sequences of several organisms, including the free-living soil nematode Caenorhabditis elegans, precise genome annotations of this sea of raw information are now of prime importance, as they allow the accurate definition of genic regions. Alternative splicing is seen in nearly all metazoan organisms as a means for producing functionally diverse polypeptides from a single gene.

AIM: In this study, we performed a detailed and in-depth analysis of the full genomic sequence of one of the six chromosomes of C. elegans.

MATERIALS AND METHODS: In this study, several bioinformatics tools, including gene/exon prediction programs, ORF finders, BLAST analysis tools, and alignment programs, were used to analyze the genes/exons encoded by chromosome 1 of C. elegans, with special reference to alternatively spliced transcripts.

CONCLUSION: Using these tools, we have predicted >200 new alternatively spliced hypothetical transcripts from the genes encoded by chromosome 1 of C. elegans. These new spliced transcripts were identified from unusually large untranslated regions (UTRs) and large introns present at the 3' and 5' ends of the genes, with the maximum number of transcripts predicted from the 5' UTR analysis. Further studies and subsequent confirmation of these alternatively spliced transcripts will enhance our understanding of the genome structure and expression, and help in elucidating their role during the development of C. elegans.

INTRODUCTION
One of the most remarkable observations stemming from the sequencing of genomes of diverse species is that the number of protein-coding genes in an organism does not correlate with its overall cellular complexity. Alternative splicing, a key mechanism for generating protein complexity, has been suggested as one of the major explanations for this discrepancy between the number of genes and genome complexity. Through alternative splicing, the information stored in the genes of complex organisms can be edited in several ways, making it possible for a single gene to specify two or more distinct proteins. Recent analyses of sequence and microarray data have suggested that alternative splicing plays a major role in the generation of proteomic and functional diversity in almost all metazoan organisms. [1] The nematode Caenorhabditis elegans, with its rapid life cycle and short lifespan, has become a major system for biological study. It is an important, well-studied organism used in biomedical research as a model for human development, genetics, ageing, and diseases. The number of genes in the worm genome is comparable to that in other larger animals, suggesting that alternative splicing may similarly expand its repertoire of protein isoforms. Since most of the work and studies have been limited to humans and mouse, not much emphasis has been given to studying alternatively spliced transcripts from the C. elegans genome. This was the motivation for taking up this study. The goal here was to use a novel bioinformatics method capable of delineating all possible spliced transcripts of a gene. Our work comprised a complete analysis of the unannotated, unusually large intronic, 5', and 3' untranslated (UTR) genomic regions of chromosome 1 of C.
elegans. Our major thrust was on finding new exons and genes encoded by chromosome 1 using a combination of bioinformatics tools, with a special emphasis on finding novel alternatively spliced transcripts arising from various genes. Around 180-200 new alternatively spliced transcripts and exons were identified during the chromosome 1 analysis. These new coding sequences in the alternatively spliced transcripts were identified mostly from unusually large UTR regions and large introns present at the 3' and 5' ends of the genes. Furthermore, to experimentally validate our findings, we performed real-time PCR using gene-specific primers and RNA isolated from a mixed population of C. elegans for a few of the predicted spliced transcripts of the genes.

b. GENSCAN predicts complete, partial, and multiple genes on both DNA strands. It can be used to identify introns, exons, promoter sites, and polyA signals, among others. It is available at http://genes.mit.edu/GENSCAN.html.
c. The FEX (Find Exon) [8] program initially predicts internal exons by a linear discriminant function, evaluating open reading frames (ORFs) flanked by GT and AG base pairs (the 5' and 3' ends of typical introns). It is available at http://www.softberry.com/berry.phtml.
d. The GeneBuilder system [9].
e. Twinscan is a system [10] for predicting gene structure in eukaryotic genomic sequences. It combines the information from predicted coding regions and splice sites with conservation measurements between the target sequence and sequences from a closely related genome. It is available at http://genes.cse.wustl.edu/.
f. FGENESH [11] is also based on the Hidden Markov Model (HMM). It is available at http://sun1.softberry.com/berry.phtml?topic=fgenesh&group=programs&subgroup=gfind.
g. HMM Genefinder [12] is a program for the prediction of genes in vertebrate and C. elegans genomic sequences. The program is based on an HMM, which is a probabilistic model of the gene structure, and is trained using a criterion called conditional maximum likelihood, which maximizes the probability of correct prediction. It is available at http://www.cbs.dtu.dk/services/HMMgene/hmmgene1_1.php.
h. GeneSplicer is a fast, flexible system for detecting splice sites in the genomic DNA of various eukaryotes. [13] It is available at http://cbcb.umd.edu/software/GeneSplicer/.
i. The program UTRScan [14] looks for UTR functional elements by searching through user-submitted sequence data for the patterns defined in the UTRsite collection. It is located at http://www.ba.itb.cnr.it/BIG/UTRScan/.
j. The ORF Finder is a graphical analysis tool that finds all open reading frames of a selectable minimum size in a user's sequence or in a sequence already in the database. This tool identifies all open reading frames using the standard or alternative genetic codes. It is available at http://www.ncbi.nlm.nih.gov/projects/gorf/.
k. The splice predictor implements Bayesian models for splice site prediction. [15] It is available at http://bioinformatics.iastate.edu/cgi-bin/sp.cgi.
l. The NetGene2 server [16].

2. Rearrangement and selection of potential genes for computational analysis: These data were further screened and scaled down according to the size of the large introns and UTR regions. Here, our aim was to identify genes with unusually large 5' and 3' UTR and intron regions. Thus, we were left with around 875 genes having potentially large intronic and 5' and 3' UTR regions.

3.
Gene/Exon/ORF Finder analysis: The next task was to explore these large gap regions in the 875 potential genes. Our analysis started with running these unusually large gaps at intronic, 5', and 3' UTR regions through a pre-selected order of gene/exon/ORF-finding and several other bioinformatics analysis tools. These programs combine a variety of gene/exon prediction methodologies, including ab initio predictions (Genefinder, FGENESH), EST- and protein-based comparisons (Ensembl), sequence conservation metrics (TWINSCAN), and many more (as mentioned in Materials and Methods). Therefore, the results obtained were much more reliable than those obtained using a single gene/exon prediction tool. When large genomic sequences from large UTR and intron gaps were fed into these tools, they predicted several exons possibly capable of replacing the existing exon(s) and thus creating an alternatively spliced variant of a gene. A simplified sketch of the ORF-scanning step is given after this list.

4. From the several exons predicted above, we selected only the "common exons" capable of replacing the existing exon(s) and thus creating a spliced transcript of the gene without affecting the reading frame of the protein. Furthermore, the possibility of occurrence of each spliced exon/transcript was analyzed by a comparative analysis between the original protein and the new protein formed by the addition/exclusion of alternative exons, using various alignment tools. Finally, several other parameters, such as percent amino acid replacement, codon usage, sense nature (i.e., whether from the positive or negative strand), and the probability score of occurrence of that exon, were also checked to ensure the accuracy of the predicted spliced transcript of the gene. Thus, using the above approach, we were successful in detecting splicing in 160-180 genes, giving rise to possibly 200-250 new alternatively spliced transcripts/exons from chromosome 1 of C. elegans.

5. Homology, BLAST, and EST analyses: Following the computational predictions of these novel spliced transcripts, Yuji Kohara's and the NCBI dbEST C. elegans EST databases and various other relevant databases were searched to look for putative EST/cDNA support for the possible occurrence of these new exons/transcripts. First, using various alignment tools, we looked for insertions or deletions in ESTs relative to a set of known mRNAs, or aligned the ESTs exactly to their identified genomic sequence in the draft genome, to identify potential alternative splices. Second, as intronic sequences at splice junctions are highly conserved (99.24% of introns have GT-AG at their 5' and 3' ends, respectively), these splices were identified and the intronic splice junction donor and acceptor sites were checked using various splice site prediction tools. Finally, an NCBI BLAST search was performed to look for homology or prospective similarity of these new spliced transcripts with other polypeptides.
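As referenced in step 3, here is a simplified stand-in for the ORF-scanning part of the pipeline, using Biopython (an assumption about tooling; the study used the NCBI ORF Finder and related web tools): scan both strands and all three frames for ORFs above a length threshold.

```python
from Bio.Seq import Seq   # Biopython, assumed installed

def find_orfs(dna, min_aa=50):
    """Scan both strands and all three frames for ORFs of at least min_aa residues,
    a simplified stand-in for the ORF Finder step described in the text."""
    seq = Seq(dna)
    orfs = []
    for strand, s in [(+1, seq), (-1, seq.reverse_complement())]:
        for frame in range(3):
            # trim to a multiple of 3 so translate() sees whole codons only
            prot = s[frame:frame + 3 * ((len(s) - frame) // 3)].translate()
            for chunk in str(prot).split("*"):          # split at stop codons
                start = chunk.find("M")                 # first start codon in frame
                if start >= 0 and len(chunk) - start >= min_aa:
                    orfs.append((strand, frame, chunk[start:]))
    return orfs

print(len(find_orfs("ATG" + "GCA" * 60 + "TAA")))       # toy sequence with one long ORF
```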
Experimental validation of computationally predicted spliced transcripts
After failing to find supporting EST/cDNA matches for our new predictions, the next way to confirm our findings was to validate them in the lab using a suitable method. RT-PCR is one of the most powerful and direct methods to detect transcript variants arising from alternative splicing. RT-PCR is also easier and more popular than microarray techniques for the confirmation of alternatively spliced variants of an individual gene. We selected 10 genes randomly from the total set of genes for which we had predicted novel spliced transcripts using our new bioinformatics methodology, as detailed above. We performed RT-PCR using gene-specific primers and RNA isolated from a mixed population of C. elegans for these selected genes. RT-PCR validation confirmed the existence of the computationally predicted transcripts for five out of the ten selected genes. Detailed results are either already published, e.g., for the lfe-2-encoding C46H11.4 gene [5], the RhoGEF domain-encoding Y95B8A.12 gene [6], and the cadherin-encoding W02B9.1 gene [4], or are yet to be published.

RESULTS AND DISCUSSION
Our aim was to use a novel bioinformatics approach, involving the use of a wide array of bioinformatics tools and programs, capable of delineating all possible spliced transcripts of a gene. We analyzed around 875 genes having unusually large 5' and 3' UTR and intron regions and identified 160-180 genes in which splicing occurs. The total number of transcripts arising from alternative splicing in these genes was found to be around 180-200 new alternatively spliced transcripts/exons from chromosome 1 of C. elegans [Figure 1]. Comprehensive lists of the genes in which we found new spliced transcripts are given in Figures 2 and 3. These new coding sequences in the alternatively spliced transcripts were identified from unusually large UTR regions and large introns present at the 3' and 5' ends of the genes. The maximum number of unreported new exons was predicted from the 5' UTRs, followed by the 3' UTRs and introns [Figure 2]. All new exons were capable of splicing with the existing gene products, generating new splice variants, as the splice junctions were conserved at the splice donor and acceptor sites. The number of new, unreported possible splice variants predicted accounted for a total of 196, including the maximum from the 5' UTRs (n = 108), followed by the 3' UTRs (n = 63) and introns present toward the 5' end (n = 17) and 3' end (n = 8) [Figure 3]. Our results demonstrate that we are still far from completely deciphering these hidden transcripts from the genome of a sequenced organism, and most of the studies have probably underestimated the extent of alternative splicing. Thus, the end goal of alternative splicing annotation is to identify and catalog all mRNA transcripts in the cells and develop an exhaustive catalog of the alternative transcripts of an organism to fully understand the complexity of eukaryotes. Although we were successful in identifying potentially new spliced transcripts and alternative exons from chromosome 1 of C. elegans, our findings indicate that there could be approximately 1000 or more alternatively spliced transcripts expressed from the genome of C. elegans that have not been annotated or identified earlier.
This could be the reason why the number of gene products is suspected to be underestimated. With this experience, we propose that genome data may be analyzed using a combination of bioinformatics tools and programs to predict the full repertoire of a gene's products. These new coding sequences and transcripts, not annotated or identified earlier, will be helpful to the biological community in several ways. First, they will not only help in increasing the available database of alternatively spliced genes in C. elegans but also point toward the complex mechanism of alternative splicing in C. elegans genes and their role in downstream regulatory steps. Second, these findings indicate the urgent need to develop more efficient algorithms and methods capable of identifying the full catalog of alternatively spliced transcripts of a gene. Moreover, similar exhaustive studies could be taken up in several other finished genomes, especially that of humans, with whom C. elegans shares close gene homology. Finally, due to the limited scope of our work, further studies using more advanced techniques such as RNA interference (RNAi) could be taken up, which would enhance our knowledge about the biological and functional significance of these spliced transcripts and their possible role in C. elegans gene function and regulation.

Figure 2: A comparative study between the number of genes analyzed and the number of genes having predicted spliced transcripts: a comparison between the total number of genes analyzed (having potentially large untranslated and intronic gaps) in our study and the number of genes having predicted spliced transcripts based on bioinformatics analysis.
Figure 3: Predictions of new exons from the selected regions: a comparative picture of the gap regions having predicted spliced transcripts. The maximum number of unreported new exons was predicted from the 5' UTRs (n = 108), followed by the 3' UTRs (n = 63), and the intronic gap regions at the 5' end (n = 17) and 3' end (n = 8), out of the total number of genes having predicted spliced transcripts (n = 196).
Magnetisation and demagnetisation of trapped field stacks in a superconducting machine for electric aircraft

This research presents a comprehensive and innovative approach to investigating the magnetisation and cross-field demagnetisation behaviour of high-temperature superconducting (HTS) coated conductors (CCs) in practical superconducting machines. This study introduces several novel contributions, including the operation of the machine in propulsion energy conversion mode, the exploration of harmonics interaction in a real electric machine environment involving CCs, and the extraction of these harmonics as cross-field components. A 2D electromagnetic-thermal coupled numerical model employing the finite element method has been developed and validated against experimental data to simulate a partially superconducting machine. Upon magnetisation, the HTS stacks effectively operate as trapped field magnets, generating rotor fields for motor operation. With the trapped field stacks (TFSs) producing a peak magnetic flux density of 462 mT in the air gap, the average values of the fundamental and fifth harmonics of the tangential magnetic flux density experienced by the TFSs were observed to be 25 mT and 1.75 mT, respectively. The research has thoroughly examined the impact of cross-field demagnetisation parameters, including amplitude and frequency, on the demagnetisation of TFSs. Furthermore, the study has also investigated the magnetisation losses occurring in the various layers of HTS tapes, encompassing the HTS layer, the magnetic substrate layer, and the silver stabiliser, at different amplitudes and frequencies. Two tape structures, namely a semi-homogenised model and a multi-layered model, have been analysed in terms of magnetisation loss. Additionally, insights into the shielding effect and the skin effect at high frequencies were obtained, offering valuable information on the performance of HTS TFSs exposed to high-frequency scenarios, especially in high-speed machines for electric aircraft. The research outcomes are anticipated to provide valuable knowledge for the design and optimisation of HTS rotors employing TFSs in superconducting machines, contributing to the advancement of superconducting machine technology.
Introduction

Electric aircraft are gaining popularity due to growing concerns over energy scarcity and environmental issues. To address the demand for efficient propulsion systems, superconducting motors equipped with high-temperature superconducting (HTS) materials have emerged as an attractive option. These motors offer compact size, minimal losses, and high power density, making them well suited for electric aircraft propulsion [1-3]. To realise the potential of superconducting motors in electric aircraft, several consortium projects have been launched, including the Advanced Superconducting Motor Experimental Demonstrator (ASuMED) project [4], which aims to construct the first fully superconducting motor prototype capable of achieving the power densities and efficiencies required for future large civil aircraft with hybrid-electric distributed propulsion, and the Advanced Superconducting and Cryogenic Experimental powertraiN Demonstrator (ASCEND) [5], which aims to build a cryogenic electric propulsion system for aircraft. Both projects have set ambitious power density goals: 20 kW·kg⁻¹ motor power density for ASuMED [4], and 30 kW·kg⁻¹ for the powertrain system composed of electric motors and power electronics in the case of ASCEND [5]. However, no in-field tests have successfully demonstrated these targets, because a series of fundamental challenges remains unsolved, including significant AC losses in armature windings built with HTS coated conductors (CCs) [6-10], excessive cooling requirements [11,12], and demagnetisation of HTS CCs [13-16]. Among these unsolved difficulties, the demagnetisation of ReBCO CCs in practical synchronous superconducting machines is studied in this paper.

In synchronous superconducting machines, the traditional rotor's wound field coils or permanent magnets (PMs) can be substituted by superconducting coils [17-20], superconducting bulks [21-24], or stacked CCs [25-27]. While wound field coils pose challenges due to the need for continual current sources, typically provided by current leads [28], and the use of brushes and slip rings [28,29], stacked HTS CCs offer advantages over superconducting bulk materials for trapped field magnets (TFMs) due to their higher mechanical strength and shape flexibility [9,30]. In contrast to bulk HTS materials, stacked HTS tapes exhibit a relatively lower vulnerability to demagnetisation, attributed to inherent characteristics such as their high aspect ratio and ultra-thin thickness [25,31]. Nonetheless, despite these advantages, demagnetisation remains a primary challenge that must be addressed for the practical application of stacked HTS tapes in electric machines. Therefore, mitigating demagnetisation effects is crucial to ensure the successful integration and optimal utilisation of stacked HTS tapes in electric machine systems.
To magnetise the stacks composed of yttrium barium copper oxide (YBCO) CCs for use as TFMs in superconducting machines, two methods are commonly employed: field cooling [32-35] and pulsed field magnetisation (PFM) [21,24,26,27,36-39]. Among these, PFM, an in-situ magnetisation process, is preferred for electrical machine applications due to its portability, affordability, and compactness [40]. However, during magnetisation, the heat produced by PFM hinders the realisation of the full trapped field and flux potential of CCs [40]. While zero field cooling and flux pumping have also been reported as general magnetisation methods for CCs [41-46], they have not been employed extensively in superconducting machines.

Cross-field demagnetisation of CCs has been widely reported in superconducting machines [14,25,47-52]. This phenomenon is influenced by higher-order harmonic waves present in the air gap due to the stator's alternating arrangement of slots and teeth. These spatial harmonic components are not in sync with the rotor field and can lead to demagnetisation of the PMs [53]. While previous studies have explored cross-field demagnetisation of CCs, they often focused on explaining the physics involved without considering the practical scenarios in real electric machines [14]. For cross-field demagnetisation, the interaction of the abundant harmonics in an actual electric machine environment with CCs has not been fully explored.

Many studies [25,47,51,52,54-56] investigating the demagnetisation of superconducting bulks or stacked tape-based CCs have employed sinusoidal [25,47,51] or triangle-shaped [52,54-56] magnetic fields to simulate the harmonics present in the machine air gap. While these approaches provide indirect references to potential demagnetisation effects, they oversimplify the complexity of the harmonics in real machines. In actual machines, harmonics can consist of multiple waves with varying magnitudes and directions, making the analysis more intricate. Therefore, a reasonable method involves applying the magnetic field extracted from the machine's air gap to the CCs, which are magnetised within the same machine but isolated from it during the demagnetisation process. The extracted magnetic field can be decomposed into vertical and horizontal directions relative to the surface-mounted magnets. It should be noted that this paper specifically focuses on the demagnetisation analysis of rotor surface-mounted magnets in synchronous machines. Other types of magnet arrangements are not considered or discussed in this study.
The studied superconducting machine prototype is a synchronous machine derived from a surface-mounted PM motor with fractional slot concentrated winding (FSCW). In the superconducting machine used for this study, the surface-mounted PMs have been replaced by TFMs composed of stacked CCs, a key characteristic of this superconducting motor design. For the trapped field stack (TFS), it is crucial to consider the effect of cross-field demagnetisation caused by transverse magnetic fields, particularly in FSCW machines, where additional sub- and super-space harmonic components exist in the air gap due to the winding configuration [57]. As a result, PMs mounted on the rotor surface are more susceptible to air gap harmonics in FSCW machines, making the demagnetisation effect on the magnetised TFMs a significant consideration in this study. The cylindrical geometry and large diameter of the machine introduce one of the key differences of this study compared to previously published works, namely the utilisation of curved and wide CCs, leading to a different distribution of the magnetic fields and the current density. This can significantly impact the magnetisation and demagnetisation processes [47].

The HTS machine is constructed based on a conventional machine design, featuring iron yokes in the stator and rotor. While air-cored machines can achieve higher magnetic loading and utilise the full potential of CCs [58], they experience more air gap harmonics caused by the stator than conventional iron-cored machines, leading to more pronounced demagnetisation of the CCs. Conventional iron-cored machines, however, face limitations in their magnetic loading capacity due to iron saturation.

Compared with prior investigations of superconducting motors employing TFMs, this research offers a comprehensive multi-stage approach covering the entire process from magnetisation to demagnetisation of ReBCO CCs within a practical electrical machine. Unlike previous studies that mostly concentrated on either the magnetisation or the demagnetisation of ReBCO CCs, this work addresses both aspects alongside motor operation. While two pioneering research works [26,27] have explored CC demagnetisation in a real machine, they mainly operated the machine as a generator driven by a direct current motor, leaving a gap in understanding the demagnetisation during practical motor operation. This paper builds upon the measured data of the same HTS machine reported in [26]. Its novel contributions encompass operating the machine in propulsion energy conversion mode, considering the interaction of the abundant harmonics in a practical electric machine environment with the CCs, and extracting these harmonics as cross-field components. This approach reflects the real-world scenarios of practical superconducting machines. Furthermore, this work presents detailed insights into the demagnetisation process, encompassing the current density fluctuations within the HTS layers in response to varying cross-field amplitudes and frequencies, as well as the analysis of the magnetisation losses in the distinct layers of the ReBCO CC through numerical investigations. These aspects, previously constrained by the physical limitations of actual superconducting machines in real-world measurements, are comprehensively addressed in this study.

Methodology

The complete workflow of this research is illustrated in figure 1.
First, a 2D finite element method (FEM) model of the electric machine was constructed based on the geometry parameters provided in [26], as shown in figure 1(a). The simulation parameters related to the individual tape characteristics and the stack configuration were adjustable, while the machine geometry and the superconducting material properties remained constant. The final goal was to establish a well-calibrated model that could accurately trap the field, resulting in the same back electromotive force (emf) as measured in the experiment in [26]. This numerical model effectively represented the stack assembly within the superconducting machine.

For the magnetisation and validation stage, PFM was employed to facilitate magnetisation. The machine model, referred to as Machine 1, was set up with a single stack aligned to the iron tooth at the 12 o'clock position, around which the magnetisation coil was wound. To investigate the flux penetration and heat generation during magnetisation, a thermal module was coupled to the electromagnetic model.

Following magnetisation, the rotor, containing eight superconducting stacks, rotated synchronously with the stator's magnetic field generated by the three-phase alternating currents. The challenge lay in maintaining the current density induced by the magnetisation field within the stacks. While Brambilla et al [59] proposed a FEM model coupling an A-formulation domain containing the normal conductors and an H-formulation domain incorporating the CCs for modelling rotating machines with CCs, this approach was not applicable to our work. The reason is that in [59] field windings were utilised in the rotor, imposing constant external current densities directly onto the CCs. This method did not yield satisfactory results in our more complex machine model due to the intricate physics and geometry involved. In our attempt to establish a rotating mesh for the rotor, including the magnetised CCs in which supercurrents flow, the simulation solver encountered convergence issues. Even when the rotor remained stationary and the stator rotated in the opposite direction, the same problem persisted. Moreover, the coupling boundary between the A-domain and the H-domain proved problematic for the magnetic flux travelling from the magnetising coils to the superconducting stack during magnetisation. Therefore, the H-formulation has been employed for the entire model during both magnetisation and demagnetisation.

To examine the machine as a motor, a novel modelling approach was adopted in this research. The four winding coils wound on the machine teeth on the horizontal and vertical axes were used for PFM, and eight identical HTS stacks were evenly positioned on the rotor surface, forming eight poles with alternate polarities, as shown in figure 1(c). The trapped magnetic field in the radial direction was transferred to a ring PM in a second machine model with identical geometry, Machine 2. In Machine 2, the trapped field underwent rotation using an equation that defines the rotation speed and the initial phase relative to the local cylindrical coordinate system, with the ring-shape PM on the rotor surface forming a motor/generator system, as depicted in figure 1(d). In this stage, the primary objective was to extract the major harmonics potentially causing demagnetisation of the CCs. These extracted harmonics were subsequently applied to the magnetised stack as a cross-field demagnetisation source. Table 1 summarises the two sub-models of the studied machine in detail.
In the demagnetisation stage, PFM was conducted once more as the initial step, as illustrated in figure 1(e). After completing the magnetisation, the magnetic flux density and current density of the CC stack were transferred to a second model, figure 1(f), as initial values for the demagnetisation. In this model, only the stack is retained, and all other regions are designated as air. Subsequently, the extracted harmonics of the tangential flux densities acquired from the motor operation were applied along the azimuthal direction of the air-gap boundary within the machine. Figure 2 demonstrates a flow chart of the complete process described above.

Modelling method for model validation

This section describes the modelling method used for model validation, which consists of two steps. In the first step, a magnetisation model was built using the H-formulation. The second step features a rotation model using a self-defined equation to measure the induced back-emf on a machine coil.

Machine model with one superconducting stack

As the first part of the workflow, the magnetisation of a single superconducting stack in the machine was simulated in the model Machine 1. The geometries of the motor and the tape stacks have been taken from [26]. For the modelling of PFM, a FEM-based electromagnetic-thermal coupled model was constructed in the commercial software COMSOL Multiphysics 6.0. For the electromagnetic modelling, the H-formulation [60,61] was utilised for the whole machine, following the governing equations

∇ × H = J,  ∂(µH)/∂t + ∇ × (ρ∇ × H) = 0,  (1)

together with the power-law constitutive relation

E = E0 (J/Jc(B, T))^n(B),  (2)

where µ is the magnetic permeability, ρ is the resistivity of the materials, E0 represents the characteristic electric field, and the field-dependent exponent n(B) was taken from [33]. The anisotropic critical current density Jc(B, T) employed in equation (2) is magnetic field and temperature dependent [30,62,63]; its fitting formula, equation (3), is taken from those references.

Given the non-negligible heat generation during PFM, a complementary thermal model was established to incorporate the heat transfer dynamics during the PFM and coupled to the electromagnetic model, forming a bi-directional connection. The thermal model contributes real-time temperature data to the electromagnetic model, while the electromagnetic model delivers the current density J and the electric field E of the stack. Figure 3 demonstrates the coupled electromagnetic and heat transfer models for the machine model. The governing equation for the heat transfer module is

ρm Cp ∂T/∂t = ∇ · (k∇T) + Q,  (4)

where ρm is the mass density, Cp denotes the specific heat capacity, k is the thermal conductivity, and Q is the heat generation power density. The heat capacity and thermal conductivity of the CCs were derived from [64]. Q is the volumetric heat source for the thermal model, defined by

Q = E · J.  (5)

The heat transfer module was specifically applied to the HTS stack domain, as the PFM occurs in a brief timeframe; consequently, only the heat generated within the stack was considered. Additionally, convective heat transfer with a heat transfer coefficient of 100 W·(m²·K)⁻¹ was employed by imposing a convective heat flux boundary condition on the HTS stack boundaries [48]. The ambient temperature was fixed at the liquid nitrogen boiling temperature, namely 77 K. The stator and rotor yokes were composed of silicon iron M270-35A, whose B-H curve was applied to the iron regions in the magnetisation model. The remaining components of the machine were modelled as air.
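To make the electromagnetic-thermal coupling concrete, the following minimal Python sketch evaluates the power-law relation of equation (2) and the resulting volumetric heat source of equation (5). The values of E0 and n are typical assumptions for YBCO, not the paper's field-dependent n(B), so this illustrates the form of the coupling rather than reproducing the cited model.

import numpy as np

# Illustrative evaluation of the power-law E-J relation (equation (2)) and
# the volumetric heat source Q = E*J (equation (5)). E0 and n are typical
# assumed YBCO values, not the paper's field-dependent n(B).
E0 = 1e-4        # characteristic electric field, V/m (1 uV/cm criterion)
n = 25           # assumed constant power-law exponent
Jc = 6.16e9      # critical current density of the expanded HTS layer, A/m^2

J = np.linspace(0.1, 1.2, 12) * Jc   # sweep the current density around Jc
E = E0 * (J / Jc) ** n               # electric field, V/m
Q = E * J                            # volumetric heat source, W/m^3

for j, e, q in zip(J / Jc, E, Q):
    print(f"J/Jc = {j:4.2f}   E = {e:10.3e} V/m   Q = {q:10.3e} W/m^3")

The steep growth of Q near and above Jc is what makes the bi-directional thermal coupling necessary during the pulses.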
In previous studies [27,36], the same stacked CC was treated as a homogeneous bulk and its equivalent engineering critical current density was scaled down based on the volume fraction of the superconducting material. However, in high-frequency environments the electromagnetic behaviour of HTS stacks becomes more complicated, necessitating consideration of the multi-layer structure of HTS CCs [65,66]. The AMSC tapes employed, as per [30], are Rolling-Assisted Biaxially Textured Substrate (RABiTS) YBCO CCs with Ni-5at.%W magnetic substrates. Given that the silver stabilisers have a negligible effect on the magnetisation results [67], a semi-homogenised model was built: for a single tape, both the superconducting layer and the magnetic substrate layer were taken into account, as depicted in figure 4(a). For clarity, the curvature and the real aspect ratio are not presented. A similar structure can be found in [68-71]. However, the thickness of the superconducting layer was artificially expanded to 7 µm, making the total thickness of the superconducting layer and the substrate layer equal to the thickness of a single tape. This artificial expansion technique for the superconducting layer was also used in [25,72,73], where the superconducting layer's thickness was expanded to 100 µm. The approach aids convergence by limiting the smallest element size and speeds up the simulation by reducing the number of mesh elements, without sacrificing accuracy [70]. The effectiveness of the thickness expansion lies in maintaining a sufficiently large aspect ratio, preserving the behaviour of the CCs as if they were infinitely thin strips, with the current distribution front progressing along the ab-plane of the superconducting layer [70]. With this modification, the critical current density of the HTS layer was scaled down to 6.16 × 10⁹ A·m⁻². It was assumed that the layers are positioned infinitesimally close to each other, effectively eliminating any air gaps between the layers. This approach serves to reduce the total number of mesh elements and to prevent convergence issues arising from tiny elements within air gaps between the layers. The fitting function outlined in [74], based on the experimental findings reported in [75], was utilised to represent the field-dependent behaviour of the relative magnetic permeability of the Ni-W ferromagnetic substrate of the RABiTS YBCO CC. Figure 4(b) demonstrates the mesh configuration of the curved nine-layer stack. Structured (mapped) meshes, which have a high aspect ratio and are hence capable of reducing the number of mesh elements in and between the CCs [76], have been implemented in the superconducting layers and the substrate layers.
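The scaled Jc value above follows from a simple conservation argument: the sheet critical current Jc·d must be preserved when the layer is artificially thickened. The short sketch below checks this, assuming a nominal real YBCO layer thickness of about 1 µm for illustration (the paper does not state the as-manufactured value):

# Sheet critical current Jc*d (A per metre of tape width) must be preserved
# when the superconducting layer is artificially thickened for meshing.
d_real = 1e-6        # assumed real YBCO layer thickness, m (illustrative)
d_model = 7e-6       # expanded layer thickness used in the model, m
Jc_model = 6.16e9    # scaled critical current density in the model, A/m^2

K = Jc_model * d_model        # sheet critical current, A/m: ~43.1 kA/m
Jc_real_equiv = K / d_real    # equivalent Jc for the assumed real thickness

print(f"sheet critical current: {K / 1e3:.1f} kA per metre of width")
print(f"equivalent 1 um-layer Jc: {Jc_real_equiv:.2e} A/m^2")
# For a 45 mm wide tape this corresponds to K * 0.045 ~ 1.9 kA of critical current.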
PFM

The iteratively magnetising pulsed field method with reduced amplitude (IMRA) [26] has been employed as the PFM approach in this study. The PFM process involves the use of a capacitor bank to generate the required pulsed current. The voltage of the capacitor bank is adjustable, enabling control over the pulsed current magnitude. In alignment with [26], a sequence of ten pulses has been applied, wherein the voltage decreases from 50 V to 5 V in 5 V steps. The waveform of the current pulse employed in this work is extracted from [26]. To closely mirror the experimental conditions, a simulation duration of 300 s, with a relaxation interval of 30 s following each pulse, has been executed. The simulation design facilitates the implementation of the ten pulses, each generating a reduced magnetisation field.

The magnetisation circuit is characterised by a capacitor (C), an inductor (L), and a resistor (R). Considering that the rising and falling edges of the input current pulses are in the order of milliseconds, each pulse contains only low-frequency components (no higher than 100 Hz), and thus the impedance of the entire circuit can be seen as constant for the varying input voltage (in other words, the frequency components of the studied current pulses with different amplitudes are similar), which makes the current magnitude linear in the voltage level of the capacitor bank. For the other voltage levels, the waveform and pulse width are the same as those of the 40 V pulse, and only the magnitudes are scaled up or down depending on the voltage magnitude.

Due to the large number of degrees of freedom (DOF) in the FEM model (about 40 000 for the one-eighth model) and the very small step size, the simulation of the IMRA takes approximately 45 h on a Dell computer with an Intel(R) Xeon(R) CPU E5-1620 v4 @ 3.50 GHz and 32 GB of memory.

Rotation after magnetisation

In terms of numerical modelling, it is very time-consuming to simulate the HTS rotor using a moving mesh in COMSOL. To reduce the simulation time, another innovative point of this paper is that the time-spatial magnetic fields generated by the rotating HTS TFSs have been described by derived analytical formulae, which were applied through boundary conditions. Figure 5 demonstrates the rotation machine model in COMSOL 6.0. In equation (6), B_r,TF(x) is the trapped field along the arc length, which is shown in the magnetisation results section, B_r0 is a constant controlling the magnitude of the copied magnetic field, f_rot is the rotation frequency, φ_0 represents the starting position of the copied magnetic field in the machine, and φ_sys and r_sys stand for the angular and radial distances defined in the cylindrical system, respectively.
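The expression for equation (6) itself did not survive extraction. Based on the parameter definitions above, a plausible form, offered here as an assumption rather than a verbatim reconstruction, sweeps the recorded trapped-field profile around the air gap at the rotation frequency:

B_r(r_sys, φ_sys, t) = B_r0 · B_r,TF(r_sys (φ_sys − 2π f_rot t − φ_0)),  (6)

where the argument maps the angular position in the rotating frame back onto the arc length x over which the trapped profile B_r,TF was recorded. Any additional shaping terms in the original equation are not recoverable from the text.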
Model validation

This section is divided into two subsections. Section 4.1 presents the magnetisation results of the IMRA. Section 4.2 covers the validation of the numerical model.

Magnetisation results

Figure 6 illustrates the distribution of the current density (figure 6(a)) and the temperature (figure 6(b)) within the stack at four different time points: the peak and the end of the first pulse, the end of the first flux relaxation (which is also the very beginning of the second pulse), and the end of the whole magnetisation. The critical current density within the expanded 7 µm superconducting layer, denoted as Jc0, is 6.16 × 10⁹ A·m⁻². Notably, the highest temperature is observed at both edges of the stack, reaching 79 K at the peak of the first pulse, a value akin to the modelled result of 79.1 K in [27]. After each relaxation phase, the stack's average temperature reverts to 77 K. From figure 6 it can be seen that the penetration of the magnetisation flux proceeds from both edges towards the centre of the stack, which agrees well with the finding of Brandt [77] and the conclusions drawn in [47]. In addition, the positioning of the magnetisation coil above the stack leads to the penetration occurring from the top layer to the bottom layer of the stack, resembling the magnetisation process associated with a single vortex coil [78] enhanced by a soft iron yoke [79,80]. However, it is important to note that homogeneous penetration of the HTS stack from top to bottom is hindered by the shielding effect of the upper layers on the lower ones [39]. This shielding effect leaves a current-free trapezoidal area in the middle of the stack. Notably, at the pulse peaks, the temperatures at both edges significantly surpass the temperature at the centre of the stack, resulting in a lower critical current density at the edges compared to the stack's centre. Consequently, the current density at both edges is slightly lower than that at the centre of the stack due to the limitation of Jc(T).

Figure 7 demonstrates the radial field profiles on an arc situated 1 mm above the stack surface within the air gap at the end of each pulse. The positive field direction points from the centre towards the outer edge of the motor, and the negative direction points inward. The trapped field takes on an M-shaped pattern, corresponding to the under-magnetised scenarios simulated in [36]. In cases where the stack is fully magnetised, the trapped field resembles a cone centred on the middle of the CCs, as presented in [40,81]. Figure 7 clearly shows that, from the initial pulse to the final pulse, the magnetic field successively decreases at the stack's edges while increasing at the stack's centre. Since the magnetic field arises from the circulating current within the stack, this field variation also indicates that the current distribution is driven from the edges to the centre of the stack. The figure shows that the fundamental form of the trapped magnetic field is established by the first pulse, which is the most crucial pulse of the overall magnetisation process. Subsequent pulses further enhance both the overall magnetic field and the current density.
Rotation after magnetisation

The comparison between the experimental and simulated induced back-emf is presented in figure 8. The experimental data show asymmetrical positive and negative peaks, indicating that the trapped field in the experiment does not have a symmetric pattern. This asymmetry could be attributed to the cumulative error that occurs during the ten magnetisation pulses due to the generated axial torques, which necessitated repositioning of the rotor after each relaxation period [27].

Additionally, it is important to highlight that the modelling approach employed in this study, which involves expanding the superconducting layer by a factor of seven, can account for the discrepancy between the simulation and experimental outcomes illustrated in figure 8. This is particularly relevant when considering the electro-thermal properties of HTS stacks and accounting for the complex dependence of Jc(B, T). While the relationship might not follow a simple linear correlation geometrically, the fact that the simulated back-emf magnitude aligns well with the experimental results lends support to the usefulness of this simplification strategy. By adopting this approach, the electro-mechanical performance of the superconducting machine can be modelled with reduced computational complexity while maintaining a reasonable level of accuracy. The commendable agreement between the simulated and experimental back-emf data indicates successful validation of the FEM machine model featuring HTS TFSs. Consequently, this model can now be harnessed for further investigations and analyses pertaining to motor performance.

Operation in motor mode

This section is divided into two subsections. Section 5.1 covers the magnetisation of eight identical stacks using the validated model, while in section 5.2 the trapped field obtained from the magnetisation process is transferred to the ring magnet of the Machine 2 model, which is then operated as a motor with varying current and motor speed. By analysing the extracted harmonics, the study assessed their impact on the stack behaviour and the overall machine performance.

Magnetisation of eight stacks of CCs

This study focuses on the demagnetisation of CCs in a practical superconducting machine environment, rather than on maximising the magnetic loading or the trapped field in the air gap. Since the machine in this study is designed as a 12-slot/8-pole synchronous motor with surface PMs, eight TFMs are evenly placed on the rotor with alternate poles.

In preparation for motor operation, a crucial preliminary task involves magnetising the eight superconducting stacks depicted in figure 1(c). A practical magnetisation method exploiting the symmetrical structure of the stator geometry is to use the four coils directly positioned above the four stacks on the vertical and horizontal axes. The magnetisation employs the same pulse current profile as in the validation stage, and a single pulse with a peak value of 2000 A was applied. Figure 9 depicts the magnetic flux density distribution of the eight poles after magnetisation.
Notably, the magnetisation current applied to the eight stacks yields a magnetisation field magnitude significantly higher than the saturation level of typical iron cores. The iron teeth possess a substantially lower relative permeability when saturated, compared to the linear zone of their B-H curve. However, as the relative permeability of the iron approaches and stabilises at one, the magnetic field within the iron core continues to increase with the pulse current.

When a superconductor is fully magnetised, its trapped field exhibits an approximately triangular waveform. The field's magnitude transitions from positive to negative when observed from the centre towards both edges of the superconductor, as shown in figure 10(a). Figure 10(b) depicts the magnetic field distribution at the edges of a stack using red arrows. To enhance the motor operation performance of the current machine model, the following improvements have been made based on the validated model: 1. Number of CC layers: adjusted to eliminate any noticeable distortion and to reduce the introduction of harmonics caused by the CCs themselves. 2. Width of CCs: extended so that there is no clearance between any two adjacent stacks, resulting in no magnetic flux leakage between the stacks and an increase of the critical current in proportion to the tape width.

As depicted in figure 11, these modifications result in the formation of eight poles with alternate polarities without any flux leakage between adjacent poles. The circular angle is adopted for the x-axis, originating from the 12 o'clock position and progressing clockwise. The four poles characterised by positive flux density correspond to the stacks located along the horizontal and vertical centre axes of the motor, at angles of 0°, 90°, 180°, and 270°. Irrespective of the tape layer number, these poles are fully magnetised. In contrast, the four stacks positioned along the motor's diagonal axes, at angles of 45°, 135°, 225°, and 315°, and characterised by negative flux density, cannot achieve full penetration when the tape layer number exceeds four. The magnetisation for different stack layer numbers for a stack on a diagonal axis is illustrated in the inset of figure 11. The partial penetration is attributed to the thin iron closure sheet under the stator slots. This closure sheet becomes oversaturated when exposed to the high magnetisation field, which hinders full penetration of the magnetic field into the stacks. Finally, a four-layer stack of AMSC tapes with a width of 45 mm was selected, as shown in figure 12. According to the magnetisation results illustrated in figure 11, the field-trapping capacity of the modified four-layer stack is 0.462 T without significant distortion. This configuration ensures optimal trapping of the magnetic field by the superconducting stacks.

Motor operation with trapped flux pattern

The 12-slot, 8-pole FSCW machine utilised in this study features non-overlapped coils and a double-layer winding arrangement. The winding connections for the three phases, as per [82], are shown in figure 13(a). To simulate the motor operation, the magnetic flux density pattern corresponding to a stack with a layer number of four, as depicted in figure 11, has been assigned as the remanent flux density of the ring magnet in Machine 2.
The remanent flux density pattern rotates in the same manner as described by equation (6). The three parameters B_r0, f_rot, and φ_0 in equation (6) are adjusted accordingly for the specific motor operating conditions. Figure 13(b) shows the magnetic flux distribution of the eight poles throughout the machine during motor operation.

The harmonic analysis of the motor has been performed under various operating conditions by running the motor at various stator currents and rotor speeds, as listed in table 2, while keeping the rotor remanent flux density constant.

Figure 14 showcases the motor torque characteristics at 1000 rpm. Figure 14(a) illustrates the torque waveforms in the time domain, while figure 14(b) depicts the most significant harmonic amplitudes extracted from fast Fourier transform (FFT) analysis, namely the average value T_Ave and the sixth harmonic. The analysis indicates that the torque waveforms at different rotor speeds exhibit similar shapes and magnitudes, as confirmed by the FFT results in figure 14(b). This consistency is attributed to the fact that the motor torque is predominantly determined by the stator current. Furthermore, figure 14(b) highlights that the sixth harmonic's magnitude remains consistent across varying rotor speeds for a given stator current.

When analysing the harmonics in the air gap, the focus is on the radial and tangential magnetic fields experienced by a single stack on the rotor surface. These fields can be decomposed into radial and tangential components with respect to the stack, as depicted in figure 15. The radial magnetic field in the air gap is denoted as B_r and the tangential field (i.e., the cross field) as B_phi. The size of the stack in the figure is modified for better visualisation.

Figures 16 and 17 present the waveforms and harmonic analyses of the average radial and tangential magnetic flux densities experienced by a rotor stack. Similar to the torque, these harmonics remain consistent across rotor speeds; the fifth harmonic accounts for only a small fraction of the fundamental for the radial flux density, while this percentage rises to about 7% for the tangential flux density.

The presence of the fifth harmonic is linked to the configuration of the PMs in a synchronous motor or of the TFMs in a superconducting motor. Notably, optimising the shape of the TFM can lead to the elimination of harmonics [83]. However, the primary objective of this paper was not centred on harmonic elimination. Instead, our focus was directed towards investigating the behaviour of the existing superconducting motor.
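As an illustration of the FFT-based harmonic extraction reported in figures 14, 16 and 17, the short Python sketch below recovers the fundamental and fifth harmonic amplitudes from a sampled tangential flux density waveform. The synthetic signal uses the average amplitudes quoted in the abstract (25 mT and 1.75 mT) purely as test data; the 66.7 Hz electrical fundamental follows from the 8-pole machine (4 pole pairs) at 1000 rpm.

import numpy as np

# Synthetic tangential flux density seen by one stack: fundamental plus
# fifth harmonic, with the average amplitudes quoted in the paper.
f1 = 1000 / 60 * 4   # electrical fundamental at 1000 rpm, 4 pole pairs: 66.7 Hz
fs = 100 * f1        # sampling rate: 100 samples per electrical period
t = np.arange(100) / fs   # exactly one electrical period
b = 0.025 * np.sin(2 * np.pi * f1 * t) + 0.00175 * np.sin(2 * np.pi * 5 * f1 * t)

# With exactly one period sampled, FFT bin k corresponds to the k-th harmonic.
spec = 2 * np.abs(np.fft.rfft(b)) / len(t)
print(f"fundamental: {spec[1] * 1e3:.2f} mT, fifth: {spec[5] * 1e3:.2f} mT")

Sampling an integer number of periods keeps the harmonic energy confined to single FFT bins, which is why the amplitudes are recovered exactly here.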
Table 3 provides the ranges of the radial and tangential harmonics experienced by a single stack at a speed of 1000 rpm. The harmonics vary with the stator current but remain constant with respect to the rotor speed; therefore, only one speed point is listed.

Cross-field demagnetisation

The demagnetisation analysis aims to investigate the effects of the harmonics on the demagnetisation process of the CCs. Understanding the demagnetisation process is crucial for designing robust and reliable superconducting machines that can effectively harness the benefits of CCs while ensuring their long-term stability. During the demagnetisation stage, it is essential to isolate the magnetised stack from the motor frame and to subject it only to the influence of the applied cross fields. To achieve this, all elements except the stack are set as air in the simulation model. To maintain continuity and accuracy, the mesh elements from the magnetisation stage are copied one-to-one to the demagnetisation stage, allowing the current densities and magnetic fields from the last step of the magnetisation to be transferred as initial values for the demagnetisation phase and ensuring a smooth transition and reliable simulation results. Figure 18 depicts the method used for applying the cross field during the demagnetisation process. To reduce the computational load, only one quarter of the machine geometry was considered, resulting in fewer elements in the demagnetisation model. To ensure that the applied field was parallel to the ab-plane of the stack, the cross field was applied to the boundaries of the air gap using the local base vector system (boundary system). Specifically, the tangential components of the upper and lower boundaries parallel to the stack, as well as the normal components of the left and right boundaries, were assigned the cross-field harmonics. The configuration can be seen in figure 18(a). Before demagnetisation, the magnetic field direction of the magnetised stack remains unchanged, as shown in figure 18(b). During demagnetisation, the applied cross field interacts with the existing magnetic field, resulting in a synthetic magnetic field with oscillating direction, as depicted in figure 18(c). The applied sine waves for the tangential flux density harmonics contain the fundamental and the fifth harmonics. From table 3 we can conclude that the amplitude of the fundamental wave of the tangential harmonics is 14 times greater than that of the fifth harmonic. In the following simulations, this ratio between the fundamental and the fifth harmonics was always maintained.
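The analytic form of the applied field was lost in extraction; a hedged reconstruction consistent with the stated content, retaining only the fundamental and the fifth harmonic at the fixed 14:1 amplitude ratio, is

B_app(t) = B_1 sin(2π f t) + (B_1/14) sin(2π · 5f t),

where B_1 is the fundamental amplitude (25 mT to 150 mT in the simulations of table 4) and f is the fundamental frequency. Any relative phase between the two components used in the original equation is not recoverable from the text.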
The parallel penetration field of a single CC based on the slab model [84,85] can be calculated through B_P = µ0 J_c0 d/2, which is equal to 27 mT for µ0 being the vacuum permeability with µ0 = 4π × 10⁻⁷ H·m⁻¹, J_c0 = 6.16 × 10⁹ A·m⁻², and d = 7 µm. As per [15], the trapped field tends to stabilise and reach an asymptotic value after a substantial number of cycles when the applied cross field remains below the parallel penetration field of a single tape. In our case, the extracted cross fields provided in table 3 are evidently lower than the calculated parallel penetration field. Therefore, it is reasonable to anticipate motor operation characterised by long-term stability under the conditions outlined in section 5.2. Two tape structures were considered and utilised in the demagnetisation simulations: the semi-homogenised model and the multi-layered model [65]. The semi-homogenised tape consists of a superconducting layer and a substrate layer, while the multi-layered model retains the original structure of the tape, composed of a superconducting layer, two silver stabilisers, and a substrate layer. The semi-homogenised model was used in the previous sections because the silver stabiliser does not have a significant influence on magnetisation and motor operation. However, the presence of the silver layer could have a significant impact on demagnetisation, particularly at high frequencies [47], which are of interest for high-speed machines such as generators for aerospace [86]. Therefore, the multi-layered model is examined in this section to investigate its demagnetisation characteristics. For cross-field demagnetisation, the critical current J_c(B, T) in equation (3) contains only perpendicular field dependency, as per [30]. Therefore, the field orientation of the critical current density had to be adjusted to parallel-field dependency, as the external field is applied along the ab-plane of the HTS stack. Due to the absence of specific data for the parallel field orientation, the adjustment was made based on the databases in [87,88] as well as the general formula for the magneto-angular dependence of J_c0 presented in [89]. To investigate the effect of the amplitude and frequency of the cross field on the stack demagnetisation, a series of simulations was conducted, as summarised in table 4.
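Before turning to those simulations, the quoted 27 mT can be verified directly from the reconstructed slab-model expression; this is a pure arithmetic check of the numbers given above.

import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
Jc0 = 6.16e9               # critical current density of the expanded layer, A/m^2
d = 7e-6                   # expanded superconducting layer thickness, m

Bp = mu0 * Jc0 * d / 2     # slab-model parallel penetration field
print(f"Bp = {Bp * 1e3:.1f} mT")   # prints: Bp = 27.1 mT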
In contrast to other studies that commonly employed a fixed number of demagnetisation cycles [16,52], this research adopts a different approach, in which a series of simulations with varying frequency and amplitude of the cross field was conducted over a fixed time duration. This departure from the conventional approach was motivated by the fact that in previous studies the applied cross field typically falls within the low-frequency range, where the time required to complete the same number of cycles is comparable. However, as the frequency increases and reaches the kHz range, the period of a cycle decreases accordingly. Consequently, when a fixed number of cycles is applied, the simulation time becomes significantly longer for lower frequencies than for higher frequencies. This discrepancy in simulation time may lead to the misleading conclusion that demagnetisation is more pronounced at lower frequencies than at higher frequencies. In addition, in engineering it appears more sensible to quantify the power dissipation per unit time generated in HTS tapes than the losses accumulated over a fixed number of AC cycles. Considering the relatively slower simulation speed associated with higher frequencies, a fixed duration of 15 ms was set for each simulation, corresponding to a single complete cycle at a frequency of 66.7 Hz but equating to 300 cycles at a frequency of 20 kHz.

Demagnetisation with semi-homogenised model

Figure 19 depicts the demagnetisation results for the semi-homogenised model, where the frequency and amplitude were varied as per table 4. The evolution of the trapped field under different frequencies and amplitudes is illustrated in figures 19(a)-(e). Figure 19(f) showcases the corresponding demagnetisation rates, defined as the percentage reduction of the trapped field after demagnetisation (B_t) compared to the field after magnetisation (B_0):

decay rate = (B_0 − B_t)/B_0 × 100%,

where B_0 is 0.462 T for the four-tape stack.

The results indicate that the decay rate of the trapped field increases with both the amplitude and the frequency of the applied field, with the highest decay rate reaching 80% at a frequency of 20 kHz and an amplitude of 150 mT. Furthermore, the growth of the decay rate from low to high amplitude is more pronounced at higher frequencies. Notably, the trapped field reduction is not dominated by flux creep decay in the simulated 15 ms, since the stack has gone through a 30 s flux relaxation period after magnetisation.

Compared to studies conducted on a superconducting bulk sample in [54], where the trapped field decayed to only 10% and almost 2% of its initial value after one single cycle for ratios of the cross-field amplitude B_cf to the parallel penetration field B_P of 0.98 and 1.53, respectively, our study demonstrates a significantly lower decay rate. Specifically, the decay rate remains below 1% for an amplitude of 25 mT (B_cf/B_P = 0.93) and less than 10% for an amplitude of 50 mT (B_cf/B_P = 1.9) across all frequency ranges. This confirms that, unlike HTS bulks, stacked HTS tapes exhibit relatively higher resistance to demagnetisation.
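A minimal sketch of the decay-rate bookkeeping defined above: given trapped-field values before and after the 15 ms exposure, the rate follows directly from the ratio. The post-exposure field values below are illustrative placeholders chosen to be roughly consistent with the stated trends, not the paper's simulated data (those are plotted in figure 19).

def decay_rate(B0, Bt):
    """Percentage reduction of the trapped field after demagnetisation."""
    return (B0 - Bt) / B0 * 100.0

B0 = 0.462   # trapped field after magnetisation, T (four-tape stack)
# Illustrative trapped fields after 15 ms at increasing cross-field
# amplitude (placeholder numbers for demonstration only):
for amp_mT, Bt in [(25, 0.460), (50, 0.43), (100, 0.30), (150, 0.12)]:
    print(f"{amp_mT:3d} mT: decay rate = {decay_rate(B0, Bt):5.1f} %")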
The magnetisation power loss (W/m) caused by the transverse field accounts for the majority of the AC loss during the demagnetisation process. Following [47], it can be calculated by integrating the local dissipation E·J over the tape cross-section S and over one field cycle, multiplied by the frequency f:

Q = f ∫₀^(1/f) ∫_S E · J dS dt.

Figure 20 depicts the magnetisation loss characteristics in the HTS layer (Q_HTS) and in the substrate layer (Q_Sub), as well as the total magnetisation loss (Q_Tot) of the four-tape stack. In figures 20(a) and (b), both Q_HTS and Q_Sub exhibit a positive correlation with the frequency and amplitude of the applied cross field. The same relationship is evident for Q_Tot, as shown in figure 20(c). However, an important observation is that the proportion of Q_HTS in Q_Tot decreases with frequency but increases with amplitude. In simpler terms, at lower amplitudes (e.g. 25 mT), the magnetisation loss is predominantly governed by Q_HTS at lower frequencies (⩽1 kHz), while Q_Sub dominates at higher frequencies (⩾10 kHz) due to the skin effect. This is consistent with the findings of [47]. Conversely, at higher amplitudes (⩾100 mT), Q_HTS emerges as the dominant contributor across the whole frequency range. Furthermore, it is noteworthy that high frequencies have a more pronounced impact on Q_Sub than on Q_HTS.

Owing to the high aspect ratio of the HTS tapes (>6000), visualising the variation of the current density distribution using conventional 2D graphs, as is common in studies employing bulks or narrower CCs, is challenging. Consequently, the analysis shifts to the current density fluctuations in the HTS layers. A circulating current is induced in the HTS stack after magnetisation. In the context of a 2D model, both positive and negative currents flow simultaneously within the HTS layers. During the demagnetisation process, an external cross field aligned parallel to the ab-plane of the HTS stack was applied, subjecting all four HTS layers to the same alternating external field. Consequently, currents are induced in the HTS layers to oppose the change of the external field.

To examine the influence of the cross field on the four HTS layers, the positive current density in the HTS layers normalised to the critical current density J_c0, denoted J_norm, is employed, calculated as

J_norm = J_z/J_c0 (for J_z > 0),  (11)

where J_z represents the current density component in the z direction and J_c0 = 6.16 × 10⁹ A·m⁻². Figure 21 showcases the J_norm fluctuations of the four HTS tapes at two different frequencies, 66.7 Hz (figures 21(a)-(d)) and 20 kHz (figures 21(e)-(h)), for various cross-field amplitudes over a complete cross-field cycle. The four HTS layers are labelled HTS-1 to HTS-4 in figure 12. Notably, a clear distinction between the results at the different frequencies is evident, with higher frequencies exhibiting more significant current density fluctuations. Additionally, the fluctuation of J_norm increases in all four HTS layers as the amplitude of the applied cross field increases.
However, the top and bottom HTS layers (HTS-1 and HTS-4) exhibit larger variations in J_norm than the middle two layers (HTS-2 and HTS-3) throughout the complete working cycle of the applied cross field. At lower amplitudes (25 mT and 50 mT), the fluctuations of the normalised current density in the inner layers are negligible compared to those in the outer layers. The current density fluctuations in the inner layers increase with the amplitude, yet they consistently remain smaller in magnitude than the fluctuations in the outer layers. This phenomenon is attributed to a shielding effect of the outermost layers on the inner layers, in line with [66].

The shielding effect plays a crucial role in mitigating the impact of the cross field on the inner layers, thereby influencing the distribution of the magnetisation loss within the stack. Consequently, the outer layers undergo a higher level of magnetisation loss than the inner layers. As the amplitude of the external cross field increases, the fluctuation of the current densities within the HTS layers of the outer tapes increases with the external field. However, the normalised current density in the inner two tapes exhibits a noticeable waveform shift compared to the outer tapes. This shift can be attributed to the electromagnetic interaction between the inner and the outermost HTS layers. The combined effect of the skin effect, which drives the current distribution in the silver layer towards the stack's ends [66], and the shielding effect can have an important influence on cross-field demagnetisation, particularly in the high-frequency region.

Demagnetisation with multi-layered model

During demagnetisation, it is essential to consider the effect of the silver stabilisers, particularly at high frequencies. The magnetisation loss, specifically the eddy current loss in the silver layer, can make a non-negligible contribution to the total loss [30]. To examine the influence of the multi-layered model on demagnetisation, the simulation model was adjusted to a four-tape stack with the original structure of [30]. To investigate the impact of high frequency on demagnetisation within the multi-layered model, the same set of simulations with varying frequencies and amplitudes was conducted. Figure 22 presents the magnetisation losses in the different layers of the superconducting stack, the total magnetisation loss, and the percentage of the HTS layer loss in the total loss. A comparison with figure 20 reveals that in the multi-layered simulation, Q_HTS exhibits a slight increase at lower frequencies (⩽1 kHz) and a slight decrease at higher frequencies (⩾10 kHz). Notably, the silver stabiliser demonstrates a linear relationship between the magnetisation loss and the amplitude and frequency of the cross field. The magnetisation loss in the silver stabilisers (Q_Ag) is significantly higher than Q_Sub, attributed to the much higher (almost 100 times) electrical conductivity of silver compared to Ni-5at.%W at 77 K.
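The frequency threshold at which the normal-metal layers start to matter can be made plausible with a rough, hedged estimate: the classical bulk skin depth in silver compared with the 45 mm tape width that sets the scale of the in-plane current loops. The 77 K resistivity used below is an assumed textbook-order value, and thin-layer geometry plus inter-layer coupling mean this is only an order-of-magnitude indicator, not a substitute for the full model.

import math

# Classical bulk skin depth delta = sqrt(rho / (pi * f * mu0)) in the
# silver stabiliser, compared with the 45 mm tape width.
mu0 = 4 * math.pi * 1e-7
rho_ag = 3e-9          # assumed Ag resistivity at 77 K, Ohm.m (illustrative)
width = 45e-3          # tape width, m

for f in (66.7, 1e3, 10e3, 20e3):
    delta = math.sqrt(rho_ag / (math.pi * f * mu0))
    print(f"f = {f:8.1f} Hz  skin depth = {delta * 1e3:6.2f} mm"
          f"  (width/delta = {width / delta:6.1f})")

At tens of kHz the estimated skin depth shrinks to a small fraction of the tape width, consistent with the crowding of silver-layer currents towards the stack ends described above.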
Furthermore, Q_Tot of the multi-layered structure is higher than that of the semi-homogenised model, particularly at higher frequencies (⩾10 kHz). For example, at 20 kHz and 150 mT, Q_Tot of the multi-layered model is 11.8% higher than that of the semi-homogenised model. Additionally, the proportion of Q_HTS in Q_Tot is lower in the multi-layered model than in the semi-homogenised model, particularly at higher frequencies, since the semi-homogenised model does not capture the rapid increase of Q_Ag with frequency. Notably, at high frequencies (>1 kHz), the magnetisation loss in the non-superconducting parts of the stack, specifically the silver layer, dominates the total loss due to the skin effect.

Conclusion

This paper presents a systematic exploration of the magnetisation and cross-field demagnetisation of HTS TFSs in a practical partially superconducting machine, proposing a novel analysis approach based on FEM modelling. The study's novelty and contributions are characterised by the exploration of the harmonics interaction during practical motor operation and the utilisation of multi-layer modelling for the demagnetisation analyses. A unique modelling approach that simplifies motor operation without a moving mesh is also presented. Lastly, this paper reveals the impact of high-frequency harmonics on demagnetisation.

This research commenced with the validation of a superconducting machine model against experimental results, ensuring the accuracy and reliability of the model. The validated model was then utilised to examine the typical electro-mechanical performance of the studied HTS machine employing TFSs as alternatives to the traditional magnets of PM synchronous machines. A novel method was employed to simulate the motor operation using analytical formulae to reduce the number of DOFs, so that the simulation efficiency could be significantly enhanced. Given the geometry of the machine, eight stacks, each containing four tapes, were employed for the motor operation to maximise the field-trapping ability of the HTS CCs.

The demagnetisation analyses were based on the harmonics experienced by the TFSs during motor operation, extracted from the machine air gap. According to the FFT results, the fundamental and fifth harmonics of the tangential magnetic field are dominant and were therefore utilised for the demagnetisation study. The average values of the fundamental and fifth harmonics were 25 mT and 1.75 mT, respectively, for the 462 mT peak value of the magnetic flux density in the air gap.
In the demagnetisation analyses, two aspects were examined: field decay and magnetisation loss. A series of simulations with varying frequency and amplitude of the cross field was conducted over a time duration of 15 ms. Below the parallel penetration field of the employed AMSC tape, 27 mT, the trapped field exhibits minimal changes across all frequencies. When the applied cross field exceeds the penetration field, the demagnetisation decay rate of the trapped field increases with the frequency and amplitude of the applied cross field. Notably, the highest decay rate reached almost 80% at an amplitude of 150 mT and 20 kHz. Regarding the magnetisation loss analyses, two tape structures were considered: a semi-homogenised model and a multi-layered model. The semi-homogenised model, disregarding the presence of the silver stabilisers, was employed for the model validation and the motor operation due to their negligible impact on the magnetisation and motor operation. However, in the demagnetisation process, the multi-layered structure with silver stabilisers provides a more accurate representation than the semi-homogenised model, because the skin effect and the shielding effect arising from the electromagnetic interaction between the normal conducting and superconducting layers of the TFSs have to be considered at high frequencies.

The magnetisation loss within the various layers of the HTS stack has been thoroughly analysed across a wide range of cross-field frequencies, from 66.7 Hz to 20 kHz, and amplitudes, from 25 mT to 150 mT. The results indicate that the magnetisation loss in all layers, as well as the total magnetisation loss, increases with the frequency and amplitude of the applied cross field. Within the frequency range below 1 kHz, the magnetisation loss in the HTS layer predominantly governs the total loss. However, at higher frequencies (above 1 kHz), the non-superconducting layers, particularly the silver layer, take precedence in determining the total loss due to the skin effect. Furthermore, it is noteworthy that the outermost HTS layers exhibit a shielding effect on the inner tapes, resulting in higher magnetisation loss in the outer layers than in the inner HTS layers. This shielding effect becomes more pronounced at higher frequencies.

The study significantly contributes to an in-depth understanding of the magnetisation and demagnetisation characteristics of HTS stacks in practical superconducting machine applications. The novel approach for motor operation developed in this research provides a time-saving model for accurately modelling superconducting motors with TFSs. By considering the impact of the frequency and amplitude of the cross field, the analysis offers valuable knowledge of the performance and stability of superconducting stacks, specifically in the context of electric aircraft. Furthermore, the study explores the interaction of the different layers within a single HTS stack at varying frequencies, especially at high frequencies, thereby providing crucial insights into the performance of superconductors for high-speed machine applications. Overall, the paper serves as a valuable analysis and design tool for superconducting electric machines, particularly those applied to electric aircraft and other transport systems, and can inspire further research in the wider energy conversion community.

Figure 1. Working process from the magnetisation to the demagnetisation of the proposed approach.
Figure 2. Working flow of this research work: from model validation and magnetisation to demagnetisation.

Figure 3. Setup of the electromagnetic and thermal modules of the machine for the pulsed field magnetisation.

Figure 4. Numerical model of the superconducting machine with the semi-homogenised model of the HTS stack: (a) schematic sketch of the nine-layer stack structure (not drawn to scale) and (b) mesh for one eighth of the machine.

Figure 5(a) demonstrates the exact setup and figure 5(b) shows the magnetic flux line distribution, in which a good symmetry between the left and right half sections of the machine is evident. The magnetised stack starts rotating from its original position during magnetisation, aligned to the coil tooth at 12 o'clock. The rotational speed is set to 650 rpm [26]. The most important step is to set up a rotation function for the magnetic field of the magnetised stack; the equation should be compatible with an arbitrary form of the field. The remanent flux density of the ring-shape PM on the rotor surface is defined as an equation of the radius (r), the azimuth (φ), and time (t) set up in a cylindrical coordinate system (sys).

Figure 5. Machine model setup for back-emf measurement: (a) rotation model setup and (b) magnetic flux density distribution with the trapped magnetic field copied from the magnetised stack at the beginning of the rotation.

Figure 6. Comparison of (a) the current density and (b) the temperature of the stack at the first pulse peak, first pulse end, first relaxation end, and end of the magnetisation.

Figure 7. Comparison of the trapped field profiles 1 mm above the stack surface at the end of every flux relaxation.

Figure 8. Comparison of the experimental and simulation results.

Figure 9. Magnetic flux density distribution of the 8 stacks in the whole machine.

Figure 10. Radial magnetic flux density distribution at the edge of the stack after magnetisation: (a) a fully magnetised superconductor and its magnetic field distribution and (b) leaked magnetic flux at the edge of the saturated superconducting stack.

Figure 11. Magnetisation of the eight poles in the motor with a varying number of CC layers in each stack.

Figure 12. Stack configuration of the semi-homogenised model for the motor operation and demagnetisation studies, based on the magnetisation results in figure 11. For clarity, the curvature and the real aspect ratio are not presented.

Figure 13. Operation of the machine as a motor: (a) excitation of the stator winding and setup of the rotor magnet remanent flux density in motor operation and (b) flux distribution of the eight poles in motor operation.

Table 2. Stator currents and rotor speeds used for the motor operation.

Figure 14. Motor torque characteristics at 1000 rpm.

Figure 15. Radial and tangential magnetic flux density experienced by a stack.

Figure 16. Radial magnetic field experienced by the four-layer stack in the air gap: (a) waveform of B_r with the stator current varying from 10 A to 60 A at a rotor speed of 1000 rpm and (b) FFT of B_r with the stator current varying from 10 A to 60 A and the rotor speed from 1000 rpm to 6000 rpm.

Figure 17. Tangential magnetic field experienced by the four-layer stack in the air gap: (a) waveform of B_phi with the stator current varying from 10 A to 60 A at a rotor speed of 1000 rpm and (b) FFT of B_phi with the stator current varying from 10 A to 60 A and the rotor speed from 1000 rpm to 6000 rpm.
Figure 18.Application of cross field in air gap (a) setup in the FEM model (b) direction of the trapped field before cross field demagnetisation (c) direction of the trapped field during cross field application. Figure 19 . Figure 19.Cross field demagnetisation for 0.015 s for different frequency and amplitude of the cross field (a)-(e) decay of the normalised trapped field with the frequency and the amplitude listed in table 4 (f) variation of the decay rate of demagnetisation with frequency and amplitude of cross field. Figure 20 . Figure 20.Magnetisation loss (a) in HTS layer, (b) in substrate layer, and (c) total magnetisation loss in the 4-tape stack with semi-homogenised model under the applied cross field with different frequencies and amplitudes. Figure 21 . Figure 21.Fluctuation of the normalised current density in each HTS layer of the four-tape stack with semi-homogenised model under the cross field at the frequency of 66.7 Hz and 20 kHz with different amplitude (a)-(d) current density fluctuation of the tapes HTS-1 to HTS-4 under the cross field with 0.025 T, 0.05 T, 0.1 T, 0.15 T at 66.7 Hz (e)-(h) current density fluctuation of the tapes HTS-1 to HTS-4 under the cross field with 0.025 T, 0.05 T, 0.1 T, 0.15 T at 20 kHz. Figure 22 . Figure 22.Magnetisation loss in (a) HTS layer, (b) substrate layer, (c) silver layer, and (d) total magnetisation loss in the 4-tape stack with multi-layered model under the applied cross field with different frequencies and amplitudes. Table 1 . Description of the two machine model variants. Table 3 . Extracted tangential flux density harmonics experienced by a single stack on rotor surface. Table 4 . Range of parameters variation for cross field demagnetisation.
12,826.8
2023-09-25T00:00:00.000
[ "Engineering", "Physics" ]
Picornavirus Genome Replication All picornaviruses have a protein, VPg, covalently linked to the 5′-ends of their genomes. Uridylylated VPg (VPg-pUpU) is thought to serve as the protein primer for RNA synthesis. VPg-pUpU can be produced in vitro by the viral polymerase, 3Dpol, in a reaction in which a single adenylate residue of a stem-loop structure, termed oriI, templates processive incorporation of UMP into VPg by using a "slide-back" mechanism. This reaction is greatly stimulated by viral precursor protein 3CD or its processed derivative, 3C; both contain RNA-binding and protease activities. We show that the 3C domain encodes specificity for oriI, and the 3D domain enhances the overall affinity for oriI. Thus, 3C(D) stimulation exhibits an RNA length dependence. By using a minimal system to evaluate the mechanism of VPg uridylylation, we show that the active complex contains polymerase, oriI, and 3C(D) at a stoichiometry of 1:1:2. Dimerization of 3C(D) is supported by physical and structural data. Polymerase recruitment to and retention in this complex require a protein-protein interaction between the polymerase and 3C(D). Physical and functional data for this interaction are provided for three picornaviruses. VPg association with this complex is weak, suggesting that formation of a complex containing all necessary components of the reaction is rate-limiting. We suggest that assembly of this complex in vivo would be facilitated by use of precursor proteins instead of processed proteins. These data provide a glimpse into the organization of the ribonucleoprotein complex that catalyzes this key step in picornavirus genome replication. Picornaviruses are the etiologic agents of numerous diseases of medical and veterinary importance. Poliomyelitis, the common cold, summer flu, hepatitis, and foot-and-mouth disease can all be caused by picornaviruses (1). These viruses have a single-stranded RNA genome of positive polarity that is on the order of 7500 nt in length (1). A protein, VPg (virion protein genome-linked), is covalently linked to the 5′-end of the viral genome, the so-called plus-strand, and a poly(rA) tail is present at the 3′-end (1). Genome replication occurs in a process that uses the plus-strand as a template for minus-strand synthesis, which, in turn, is used as a template for production of an excess of plus-strands (2). Initiation of both plus- and minus-strand RNA synthesis is thought to be primed by a uridylylated form of VPg, VPg-pUpU (2). Several years ago, Paul and Wimmer made the paradigm-shifting observation that an RNA stem-loop structure in the 2C-coding region of the poliovirus (PV) genome was capable of templating production of VPg-pUpU (3, 4) much more efficiently than the poly(rA) tail (5). Since that time, it has become clear that all picornaviruses appear to use a similar strategy for production of VPg-pUpU (6-12). All picornaviruses have a cis-acting RNA element capable of templating the production of VPg-pUpU, although the position of this element in the genome varies. This element has been termed oriI (2). Although it is generally accepted that oriI is essential for genome replication (6-8, 13-15), some suggest that it is only required for production of primers for plus-strand synthesis (16-18), whereas others suggest that oriI-derived primers are required for both plus- and minus-strand synthesis (12).
The VPg uridylylation reaction can be mimicked in vitro by using purified components: the viral RNA-dependent RNA polymerase (3Dpol); the VPg peptide; (bio)synthetic oriI RNA; UTP; and Mg2+ or Mn2+ (3, 4, 6, 11, 12). The reaction is also stimulated by a viral precursor protein, 3CD. Protein 3CD has both RNA-binding and protease activities, but only the RNA-binding activity is required for stimulation of the VPg uridylylation reaction (3, 4). We have shown that the fully processed viral protein 3C also stimulates VPg uridylylation (19). The mechanism for 3C(D) stimulation is not known. Our previous studies of PV support a model in which 3C(D) binds to oriI and recruits polymerase to oriI (19). Moreover, we suggested that this recruitment was mediated by an interaction between the thumb subdomain of 3Dpol and some undefined subdomain of 3C (19). This model was based on the identification of 3Dpol derivatives that contained wild-type polymerase activity on primed RNA templates and retained basal VPg uridylylation activity that could not be stimulated by 3C(D) (19). Recently, the suggestion was made that our interpretation of a 3Dpol-3C(D) interaction could also be explained as an allosteric effect unique to the 3Dpol derivatives studied (20). In this report, we describe the establishment of a minimal VPg uridylylation system for PV that we use to evaluate the composition, stoichiometry, and functional and structural organization of the active VPg uridylylation complex. Consistent with previous studies (19), PV polymerase is recruited to and retained in the VPg uridylylation complex by a direct, physical interaction with 3C(D) bound to oriI. This conclusion is the same for two other picornaviruses: coxsackievirus B3 (CVB3) and human rhinovirus type 14 (HRV14). Targeting of 3C(D) to oriI is mediated solely by the 3C subdomain; the 3D subdomain serves only to increase affinity for oriI and, in doing so, increases the observed specificity of 3CD relative to 3C when oriI is placed in the context of a longer RNA sequence. Protein 3C(D) binds to oriI at a 2:1 stoichiometry; 3C dimerizes in solution in the presence and absence of oriI. One of the 3C subdomains of 3C(D) binds to oriI, contacting the stem in a position near the loop that would facilitate appropriate positioning of the polymerase. 3C binds to both single- and double-stranded RNA. In the context of oriI, the affinity for the two single strands of the stem individually is higher than that for the annealed stem, suggesting an isomerization step after 3C(D)-oriI complex formation. Finally, association of VPg with this complex is weak and is probably rate-limiting for uridylylation. This study provides the most complete view to date of the assembly and organization of the picornavirus VPg uridylylation complex. EXPERIMENTAL PROCEDURES Materials-Deep Vent DNA polymerase and restriction enzymes were from New England Biolabs, Inc.; shrimp alkaline phosphatase was from USB; T4 DNA ligase was from Invitrogen; Difco-NZCYM was from BD Biosciences; QIAEX beads were from Qiagen; RNase A was from Sigma; Ultrapure UTP solution was from GE Healthcare; [α-32P]UTP (6000 Ci/mmol) was from PerkinElmer Life Sciences; synthetic PV and HRV14 VPg peptides were purchased from Alpha Diagnostic International (San Antonio, TX); synthetic CVB3 VPg was a gift from Willem Melchers; all other reagents and apparatuses were available through Fisher or VWR or as indicated.
Construction of Expression Plasmids-The CVB3 3C coding region was amplified by using oligonucleotides 1-4 (Table 1 lists all oligonucleotides used in this study; oligonucleotides were from Invitrogen or Integrated DNA Technologies, Inc.) to perform overlap-extension PCR using the Knowlton CVB3/H3 cDNA (21) as template. The 3C-coding region was cloned into the pET26Ub plasmid (22) using SacII and EcoRI sites to give the pET26Ub-CVB3-3C-C147G-CHIS plasmid. The HRV16 3C-coding region was amplified by using oligonucleotides 5-8 to perform overlap-extension PCR using the HRV16 cDNA (23) as template. The amplified fragment was cloned into the pET26Ub-CHIS plasmid (19) using the SacII and BamHI sites to give the pET26Ub-HRV16-3C-CHIS plasmid. Cloning of the CVB3 3D is described by van Ooij et al. (12); cloning of HRV14 3C and 3D will be described by Shen et al.4 PV 3C without a His tag was made by PCR amplification using oligonucleotides 17 and 18 and pET26Ub-3C-C147G-CHIS (19) as template. The amplified fragment was cloned into pET26Ub (22). DNA sequencing at the Penn State Nucleic Acid Facility was used to verify the integrity of the above clones. Expression and Purification of 3C Proteins-All of the 3C proteins were expressed using the ubiquitin fusion system described previously for 3Dpol (22). BL21(DE3)pCG1 cells were transformed with either the pET26Ub-PV-3C-CHIS, pET26Ub-CVB3-3C-CHIS, pET26Ub-HRV14-3C-CHIS, or pET26Ub-HRV16-3C-CHIS plasmids and plated (10%) onto NZCYM plates containing kanamycin at 25 µg/ml (K25), chloramphenicol at 20 µg/ml (C20), and dextrose at 0.4%. Ten colonies were then used to seed 100 ml of NZCYM medium supplemented with K25, C20, and dextrose at 0.1%. The culture was grown at 37 °C until an A600 of 1 was reached. Cells were chilled to 25 °C and induced by the addition of isopropyl-β-D-thiogalactopyranoside to a final concentration of 500 µM. Cells were grown for 4 h at 25 °C and harvested. Cell pellets were weighed and stored at −80 °C. Frozen cell pellets were thawed on ice and suspended in lysis buffer (100 mM potassium phosphate, pH 8.0, 20% glycerol, 10 mM 2-mercaptoethanol (β-ME), 5.6 µg/ml pepstatin A, 4 µg/ml leupeptin) at a concentration of 4 ml of lysis buffer/g of cell pellet. Cells were homogenized using a Dounce homogenizer and lysed by passage through a French pressure cell at 20,000 p.s.i. Phenylmethylsulfonyl fluoride and Nonidet P-40 were added. [Table 1 footnote: restriction sites are shown in boldface type; codons containing nucleotide changes are italicized; the T7 promoter is underlined.] The region corresponding to the CVB3 oriI was amplified using oligonucleotides 11 and 12 and the Knowlton CVB3/H3 cDNA as template. The purified oriI PCR product was linearized using XhoI and RNA transcribed as described below. The concentration was measured as described below using the calculated extinction coefficient (0.665700 µM⁻¹ cm⁻¹), which includes two guanosine residues at the 5′-end and a guanosine and an adenosine at the 3′-end, a consequence of the XhoI digestion. These extra nucleotides do not alter the fold of the CVB3/H3 oriI as predicted by mfold. The cloning of the HRV14 97-nt oriI is described elsewhere4; cloning of the PV 61-nt oriI was described previously (19). Transcription and determination of the concentration for the HRV14 97-nt oriI and the PV 61-nt oriI were done as described below. RNA transcripts from the PV subgenomic replicon, pRLucRA (27, 28), were generated after linearization with ApaI.
Transcription reactions, typically 20 µl, consisted of 350 mM HEPES, pH 7.5, 32 mM magnesium acetate, 40 mM dithiothreitol, 2 mM spermidine, 28 mM NTPs, 0.5 µg of template, and 0.5 µg of T7 RNA polymerase. Reactions were incubated at 37 °C for 3 h, followed by removal of magnesium pyrophosphate. DNase treatment with RQ1 DNase (Promega) was used to remove the template; lithium chloride precipitation of the RNA was used to remove unincorporated nucleotides. RNA concentration was calculated by measuring the absorbance at 260 nm, assuming that an A260 of 1 was equivalent to 40 µg/ml. RNA transcripts of the PV 2C gene were generated after linearization of the pET26Ub-2C plasmid with HindIII. Transcription reactions and RNA quantification were performed as described above. The 29-, 22-, and 14-nt oriIs presented in Fig. 3 were chemically synthesized by Dharmacon, Inc. (Lafayette, CO). Each RNA was deprotected prior to use in the VPg uridylylation assays. Deprotection was done by suspension of the RNA in 500 mM acetic acid and incubation at 65 °C for 15 min, followed by the addition of an equal volume of 660 mM Tris, pH 8.0, and incubation at 65 °C for 15 min. Concentrations were measured as described above. The extinction coefficients used are 0.378500 µM⁻¹ cm⁻¹ (29 nt), 0.261100 µM⁻¹ cm⁻¹ (22 nt), and 0.166400 µM⁻¹ cm⁻¹ (14 nt). Footprint Analysis Using an Iodine Cleavage Assay-oriI containing phosphorothioated ATP (ATPαS) and a 5′-OH, to be used for footprinting by the iodine cleavage assay, was obtained by performing a 1-ml transcription reaction (40 mM HEPES, pH 7.5, 32 mM magnesium acetate, 40 mM dithiothreitol, 2 mM spermidine, 12 mM NTPs, 0.3 mM ATPαS (10% of the total ATP concentration), 8 mM guanosine, 25 µg/ml linearized template, and 25 µg/ml T7 RNA polymerase) at 37 °C for 2 h. Magnesium pyrophosphate was removed by centrifugation for 2 min. The supernatant was treated with RQ1 DNase (1 unit/µg of template; Promega) for 30 min to remove the template; two phenol/chloroform extractions followed by a chloroform extraction were performed to deproteinate the RNA. Next, the RNA was precipitated with ammonium acetate and washed with 70% ethanol, and the pellet was suspended in 50% formamide. This provided the starting material for gel purification. The entire volume was loaded onto a 10% acrylamide, 50% formamide gel (18 cm × 24 cm × 2 mm). The gel was run at 25 mA for ~4 h (the band corresponding to the full-length oriI had migrated to the middle of the gel by this time, as determined by the migration of the bromphenol blue and xylene cyanol indicator dyes). The oriI band was excised from the gel by UV shadowing using a TLC plate with a fluorescent indicator (PEI Cellulose F; EM Science). The gel piece was cut into tiny squares and placed in an Elutrap electrophoresis chamber (Schleicher & Schuell). The eluted RNA was precipitated with ammonium acetate, washed with 70% ethanol, and suspended in TE. This was then passed over two Sephadex G-25 (Sigma) spin columns. The gel-purified RNA was radiolabeled using [γ-32P]ATP and polynucleotide kinase (New England Biolabs, Inc.). The labeled RNA was gel-purified again following the same procedure described above. RNA quality was assessed by 15% denaturing PAGE. RNA concentration was calculated by measuring the absorbance at 260 nm. The extinction coefficient was calculated for the 61-nt oriI plus the three guanosine residues that are transcribed at the 5′-end of oriI (0.749400 µM⁻¹ cm⁻¹) as described previously (19).
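The conversion from measured absorbance to RNA concentration used throughout this section is the Beer-Lambert relation c = A/(ε·l). A minimal sketch is given below, using the per-µM extinction coefficients quoted in the text and assuming a 1-cm path length; the function name and the example absorbance value are illustrative.

# Sketch: RNA concentration from absorbance at 260 nm (Beer-Lambert law,
# c = A / (epsilon * l)).  Extinction coefficients are the per-uM values
# quoted in the text; a 1-cm path length is assumed.
EXT_COEFF_PER_UM = {           # A260 per uM per cm
    "oriI_61nt": 0.7494,
    "oriI_29nt": 0.3785,
    "oriI_22nt": 0.2611,
    "oriI_14nt": 0.1664,
}

def rna_conc_uM(a260, rna, path_cm=1.0):
    """Concentration in uM for a measured A260 and a known RNA species."""
    return a260 / (EXT_COEFF_PER_UM[rna] * path_cm)

print(rna_conc_uM(0.75, "oriI_61nt"))   # ~1.0 uM for A260 = 0.75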
The 5′-end-labeled phosphorothioate oriI (3.3 µM) was incubated at 80 °C for 5 min in folding buffer (166 mM HEPES, pH 7.5, 66.6 mM NaCl) and then gradually cooled to room temperature over 1 h. At 55 °C, MgCl2 (16.6 mM) was added. The cleavage reaction proceeded as follows. The folded RNA was mixed with varying concentrations of PV 3C-His and incubated at 30 °C for 5 min. To initiate cleavage, 2 µl of 500 µM iodine dissolved in ethanol was added. After incubation at 30 °C for 2 min, the cleavage was quenched with 2 µl of 20 mM dithiothreitol. The final reaction mixture contained 50 mM HEPES, pH 7.5, 5 mM MgCl2, 20 mM NaCl, 1 µM RNA, 0-10 µM protein, 100 µM iodine, and 20% ethanol. An alkaline hydrolysis ladder was generated by incubating the RNA in 50 mM sodium bicarbonate and 3 mM EDTA, pH 8.0, at 90 °C for 5 min. The samples were placed on ice to quench the cleavage reaction. Prior to analysis, an equal volume of 100% formamide was added to each sample. The RNA was denatured at 90 °C for 5 min and placed on ice. The cleavage products were separated on a 15% denaturing polyacrylamide gel run at a constant power of 90 W. The gel was visualized by using a Typhoon 8600 scanner in storage phosphor mode and quantified by using ImageQuant software. Fluorescence Polarization Assay-Experiments were performed using a Beacon fluorescence polarization system (GE Healthcare) as described previously (29). Briefly, PV 3C-His (0-100 µM) was mixed with the indicated 3′-fluorescein-labeled RNA (0.1 nM) in binding buffer (1 mM HEPES, pH 8.0). NaCl (10 mM) was included in the binding buffer when indicated in the figure legend. Protein-RNA complexes were incubated for 30 s at 25 °C. Binding of PV 3C was measured by the change in polarization. All steps were performed in reduced light. VPg Uridylylation Assays-VPg uridylylation reactions were performed essentially as described previously (19) with slight modifications. Typically, reaction mixtures for each viral VPg uridylylation system contained 3Dpol, 3C-His, or 3CD-His as indicated, oriI template as described, and VPg peptide in reaction buffer (50 mM HEPES, pH 7.5, 10% glycerol, 5 mM magnesium or manganese acetate, 10 mM β-ME, 10 µM UTP, and 0.04 µM [α-32P]UTP (6000 Ci/mmol)). All reactions were adjusted to a final NaCl concentration of 30 mM. Concentrations of the components are indicated in each figure legend. All components were diluted to working concentrations immediately prior to use. Reactions were incubated at 30 °C for the indicated times and quenched with an equal volume of 100 mM EDTA in 75% formamide containing 0.05% bromphenol blue dye. Quenched reactions (5 µl) were analyzed by using Tris-Tricine SDS-polyacrylamide gel electrophoresis. Gels contained 15% acrylamide and 0.4% bisacrylamide. The cathode buffer (upper chamber) contained 0.1 M Tris base, 0.1 M Tricine, and 0.1% (w/v) SDS; the anode buffer contained 0.2 M Tris·Cl, pH 8.9. Gels were run at 80 W (for a 33 × 39-cm gel) for 2 h. Products were visualized by using a Typhoon 8600 scanner in storage phosphor mode and quantified by using ImageQuant version 5.0 software. Cross-linking by Using Glutaraldehyde and Western Blot Analysis-Cross-linking reactions were performed as follows. 3D (1 µM) and 3C-His (or 3C without a His tag or no 3C, as indicated) (1 µM) were incubated in reaction buffer (50 mM HEPES, pH 7.5, 10% glycerol, 5 mM magnesium acetate, 10 mM β-ME, 10 µM UTP, 60 µM ZnCl2) for 15 min at 30 °C.
Glutaraldehyde was then added to a final concentration of 1 mM, and the reactions were incubated at 37 °C for 1 min. All reactions were adjusted to a final NaCl concentration of 30 mM. All components were diluted to working concentrations immediately prior to use. Glutaraldehyde, purchased from Sigma, was provided in 1-ml ampules; this was aliquoted and stored at −20 °C in small, single-use volumes. Aliquots were used within 2 weeks to ensure maximal cross-linking efficiency of the glutaraldehyde, which is sensitive to oxidation. Reactions were quenched by the addition of an equal volume of 2× SDS-PAGE dye (225 mM Tris, pH 6.8, 5% SDS, 50% glycerol, 5% β-mercaptoethanol, and 0.05% bromphenol blue) containing 100 mM glycine. Quenched reactions (5 µl) were analyzed by Western blot analysis. Proteins were separated by using 8 or 12.5% SDS-PAGE; transfer to a nitrocellulose membrane (Osmonics, Inc. and/or GE Healthcare) was performed using the Genie transfer unit from Idea Scientific Company (Minneapolis, MN) for 50 min at 24 V in transfer buffer (25 mM Tris-glycine, 3 mM SDS, 20% (v/v) methanol, pH 8.3); dry milk (5%, w/v) in TBS-T (20 mM Tris, pH 7.6, 137 mM NaCl, 0.1% Tween 20) was used to block the membrane (the blocking and probing steps were performed at room temperature for 1 h). Polyclonal antisera produced in rabbits against the purified recombinant proteins PV 3D, CVB3/H3 3D, HRV14 3D, and PV 3C by Covance Research Products, Inc. (Denver, PA) were used to probe for the 3C-3D cross-linked heterodimer. Antisera were capable of detecting at least 100 pg of purified protein at 1:1000 dilutions in TBS-T (1:1000 for the anti-rabbit horseradish peroxidase secondary antibody). PV 3D (PA 473) antiserum was used at a 1:80,000 dilution in TBS-T; CVB3 3D (PA 700) and HRV14 3D (PA 722) antisera were used at 1:8000 dilutions in TBS-T; PV 3C (PA 484) antiserum was used at a dilution of 1:40,000 in TBS-T. Either horseradish peroxidase- or alkaline phosphatase-conjugated goat anti-rabbit antibody (Santa Cruz Biotechnology, Santa Cruz, CA) at a 1:2000 dilution in TBS-T was used as the secondary antibody. ECL (with horseradish peroxidase) and ECF (with alkaline phosphatase) detection systems (GE Healthcare) were used to visualize the Western blots. The Typhoon 8600 scanner was used in fluorescence mode for the ECF system, and Eastman Kodak Co. BioMax MR film was used for the ECL system. RESULTS We previously described a PV VPg uridylylation system that employed a 61-nt oriI template (19). These data were reproduced here (Fig. 1). Reactions produced VPg-pU(pU) (Fig. 1B). The observed reaction rate is constant for at least 45 min (data not shown). Product formed in 15 min in reactions containing 3CD or 3C is shown in Fig. 1C. As before, only a 2-fold difference was observed when 3CD or 3C was present at equivalent concentrations. Increasing the concentration of 3C removed this difference, as reported previously (19). The capacity for 3C to substitute for 3CD may not occur in all cases. It is possible that the oriI-containing templates employed may alter the capacity of 3C to substitute for 3CD. Because oriI exists in the context of the full-length genome in vivo, we asked whether the presence of additional "competitor" RNA, supplied by increasing the length of the RNA, would impact the stimulatory activity of 3CD or 3C. Two RNAs were employed. The first spanned the 2C-coding sequence in the PV genome (Fig. 1A); the second was the full-length subgenomic replicon RNA (27, 28). Both templates supported VPg uridylylation, yielding VPg-pUpU as the primary product (Fig. 1B).
In contrast to the results obtained by using the 61-nt oriI RNA, the ratio of 3CD-stimulated to 3C-stimulated VPg uridylylation increased as the RNA length was increased (Fig. 1C). This difference reflected a systematic decrease in the effectiveness of 3C stimulation on the longer RNAs (Fig. 1C, gray bars). These results are consistent with 3C containing the primary determinant for stimulation of the VPg uridylylation reaction; however, the 3D subdomain of 3CD increases the affinity and/or specificity of 3C binding to oriI in the context of "competitor" RNA. A 29-nt oriI Element and Protein 3C Are Sufficient to Direct Specific and Efficient VPg Uridylylation in Vitro-Given the capacity of competitor RNA to interfere with the assembly of the VPg uridylylation complex in vitro, interrogation of the kinetics and mechanism of the VPg uridylylation reaction would be facilitated by using a minimal oriI element. In order to define the minimal template for VPg uridylylation, the 61-nt oriI element (nucleotides 4444-4504 of the PV genome) was truncated, primarily at the bulges, to yield RNAs that were 51, 29, 22, or 14 nt in length (Fig. 2A). It should be noted that although only single stem-loop structures having the same base pairings as the full-length RNA were predicted by mfold for the truncated oriIs, formation of unpredicted structures remains a possibility. In the presence of Mg2+, truncation of the oriI RNA to 29 nt caused an approximately 2-fold reduction in uridylylation activity (Fig. 2B). The 22-nt oriI RNA, however, caused a 15-fold reduction in activity (Fig. 2B). The 14-nt RNA was essentially inactive (Fig. 2B). The truncations will clearly have a large effect on the stability of the stem. Because Mn2+ binds to the phosphodiester backbone more tightly than Mg2+, the stability of the stem should be enhanced by using Mn2+ as the divalent cation in the reaction (30). The use of Mn2+ causes a 3-fold reduction in the efficiency of uridylylation relative to Mg2+ (compare the black bars with the gray bars for the 61-nt RNA in Fig. 2B) that may be due to a decrease in the nucleotidyl transfer rate of the polymerase (31). However, no change in uridylylation activity was observed in the presence of Mn2+ in going from 61 to 29 nt, suggesting that the decrease observed in the presence of Mg2+ was due to the stability of the stem. In the presence of Mn2+, a condition in which the stability of the stem should be the same for the 61- and 29-nt RNAs, 3C functioned as well as 3CD, based upon the ratio of activity on the 61-nt RNA to that on the 29-nt RNA (Fig. 2C). The approximately 2-fold reduction in the observed activity of 3C relative to 3CD reflects the use of equivalent concentrations of both proteins and the reduced apparent dissociation constant of 3CD relative to 3C (discussed below). Interestingly, footprint analysis by using iodine cleavage of phosphorothioate-substituted oriI and 3C shows protection of oriI (i.e., 3C binding) mainly in the upper portion of the RNA stem, corresponding to the truncation made for the 29-nt oriI (compare the protected residues, labeled explicitly in Fig. 2D, with the oriI truncations shown in Fig. 2A). Together, these data show that a 29-nt oriI fragment and 3C are sufficient for uridylylation in vitro, consistent with genetic studies performed previously (32).
The Active Uridylylation Complex Contains One 3Dpol and Two 3C(D) Molecules per 29-nt RNA-Evaluation of the VPg uridylylation reaction by using the 29-nt oriI fragment in the presence of Mn2+ permits interrogation of the optimal stoichiometry of the components in the reaction. Intuitively, it is easy to imagine that one molecule of each component (RNA, polymerase, 3C(D), and VPg) would be required for assembly of an active complex. In this experiment, each component was titrated under conditions in which 3Dpol, 3CD, and/or oriI were present at a final concentration of 1 µM; VPg was present. Titrations of 3CD, 3D, and the 29-nt oriI element are shown in Fig. 3, A-C, respectively. Under the conditions employed, stoichiometric rather than hyperbolic binding was observed, suggesting that the dissociation constant for these components in the active complex is less than 0.2 µM. The maximum VPg-pU(pU) formation observed in the 3CD titration (3.1 µM) was higher than that observed for 3D (2.1 µM) or oriI (2.2 µM), suggesting that more 3CD was required in the active uridylylation complex than either 3D or oriI (Fig. 3, A-C). The points of intersection were 1.1 µM for 3CD, 0.62 µM for 3D, and 0.56 µM for oriI (Fig. 3, A-C, indicated by the arrows). These data are consistent with a 3CD/3D/oriI stoichiometry of 2:1:1. The stability of VPg within this complex was weak, since the K0.5 value was 5.4 ± 0.9 µM (Fig. 3D). Titration of 3C showed hyperbolic binding, yielding a K0.5 value of 1.4 ± 0.2 µM (Fig. 3E). Therefore, 3C binding to oriI is at least 7-fold weaker than that of 3CD. Importantly, the extrapolated end point of the 3C reaction (3.2 ± 0.1 µM) was equivalent to that observed for 3CD in Fig. 3A. This result argues that the 3D domain of 3CD acts indirectly. If the 3D domain of 3CD participated directly, then the end point of the reaction containing 3C should never reach that of a reaction containing 3CD. Because the end point for 3C is equivalent to that for 3CD, it is likely that two molecules of 3C are present in the active 3C-3Dpol-oriI complex. PV 3C Can Form Dimers-Whether or not 3C has the capacity to multimerize is not clear. Yeast two-hybrid data and cross-linking experiments performed in the past suggested that 3C does not multimerize (33). However, structural data for PV 3C reveal a very specific interface that predicts the formation of dimers in solution (34). In order to evaluate this possibility, we performed glutaraldehyde cross-linking experiments with purified PV 3C protein in vitro. The initial experiments employed a His-tagged derivative. Monomers, dimers, and multimeric forms of 3C were observed (Fig. 4A). In order to confirm that the observed cross-linking was not due to the His tag, authentic 3C protein was prepared, and the capacity for this protein to multimerize was evaluated and compared directly with that of the His-tagged 3C protein. As shown in Fig. 4B, authentic 3C formed clear dimers and multimers as well. 3C Binds Single- and Double-stranded RNA-The footprint of 3C on oriI showed greater protection in the double-stranded stem than in the single-stranded loop region of oriI (Fig. 2D). Because iodine cleaves phosphorothioates in single- and double-stranded RNA equivalently, this technique does not permit conclusions to be reached regarding changes in nucleic acid structure induced by 3C binding. To date, only qualitative data have been obtained regarding 3C(D) binding to oriI (32).
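Under stoichiometric (tight-binding) conditions, the equivalence point is read from the intersection of the initial linear rise with the plateau, as done for the titrations in Fig. 3, A-C. The sketch below illustrates this analysis with invented data points; the intersection values quoted in the text (e.g. ~1.1 µM for 3CD versus ~0.56 µM for oriI) are what support the 2:1:1 stoichiometry.

# Sketch: locating the equivalence point of a stoichiometric titration by
# intersecting a fitted linear rise with the plateau.  The data points below
# are invented for illustration, not measurements from the paper.
import numpy as np

conc = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0, 3.0])      # titrant, uM
prod = np.array([0.0, 0.70, 1.45, 2.15, 2.85, 3.05, 3.10, 3.10]) # product, uM

rise = np.polyfit(conc[:5], prod[:5], 1)   # [slope, intercept] of the rise
plateau = prod[-3:].mean()                 # plateau level

equiv = (plateau - rise[1]) / rise[0]      # intersection of rise and plateau
print(f"equivalence point ~ {equiv:.2f} uM titrant")
# Comparing equivalence points across component titrations (~1.1 uM for 3CD
# vs. ~0.56 uM for oriI in the text) is what gives the 2:1:1 stoichiometry.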
As a starting point, we performed fluorescence polarization experiments to determine whether 3C could bind to single-stranded RNA. We determined the Kd values for PV 3C binding to 3′-fluorescein-labeled oligo(rU) RNAs of varying length (rU5, rU7, rU8, rU9, rU10, rU12, rU15, and rU20). The data for rU10 in the presence of 0.1 or 10 mM NaCl are shown in Fig. 5A. PV 3C bound to all single-stranded RNAs evaluated; the minimal site size was in the 9-10-nt range (Table 2). 3C binding to rU10 was quite salt-sensitive, since the Kd value increased 10-fold in going from 0.1 to 10 mM NaCl (Fig. 5A and Table 2). The salt sensitivity was not unique to oligo(rU) RNA, since a GC-containing 9-nt RNA showed this effect as well (Table 2). 3C also bound to double-stranded RNA (Table 2). In this case, 3C binding to double-stranded RNA was on the order of 10-fold tighter than binding to single-stranded RNA of equivalent length (Table 2). 3C binding to double-stranded RNA was also salt-sensitive (Table 2). The capacity of 3C to bind to both single- and double-stranded RNA prevented us from reaching conclusions about binding to cis-acting elements like oriI (Fig. 5B) or stem-loop D (SLd) (Fig. 5C), both of which contain single- and double-stranded RNA. SLd is an extensively studied stem-loop structure residing in the 5′ cloverleaf and serves as a positive control (27, 35-42). Binding to oriI was salt-sensitive (Table 2). In the presence of 10 mM NaCl, the observed Kd value was 1.4 ± 0.2 µM (Table 2), a value identical to that inferred from the 3C concentration dependence of the VPg uridylylation reaction (Fig. 3E). However, this Kd value was 2-fold higher than that observed for a random sequence of double-stranded RNA (Table 2). We separated the two strands of the oriI stem into the single-stranded components (indicated as "left stem" and "right stem" in Fig. 5B and Table 2). Interestingly, binding of 3C to each of these single-stranded RNAs was tighter than binding to the intact RNA (Fig. 5B and Table 2), suggesting that 3C may unwind the stem in some post-binding isomerization step. In contrast, 3C binding to SLd was exceptionally tight, even in the presence of salt, consistent with the capacity of investigators to observe this interaction by gel mobility shift analysis among other methods (Fig. 5C and Table 2) (27, 35). Functional Interaction between Picornavirus 3C and 3D Proteins-We suggested previously that an interaction between 3C and 3Dpol is required for VPg uridylylation (19). This conclusion was based on a study in which Arg residues 455 and 456 of 3Dpol were changed simultaneously to Ala, Ser, or Asp. VPg uridylylation catalyzed by these 3Dpol derivatives was not stimulated by 3CD to the levels observed for WT 3Dpol. Trivial explanations for the observed uridylylation phenotype, such as differences in the linearity of reactions and K0.5 values for UTP, were ruled out (19). Importantly, none of these mutants supported PV replication in tissue culture (19). In order to obtain additional evidence for the existence of a 3C-3Dpol interaction and to probe the universality of this interaction among picornaviruses, we evaluated the 3Dpol interaction surface of all of the picornaviruses to identify viruses that encoded polymerases with changes at or near positions 455 and 456.
If an interaction exists between 3C and 3D, then 3C proteins from viruses with a 3Dpol thumb substantially different from that of PV 3Dpol would not be expected to stimulate PV 3Dpol-catalyzed VPg uridylylation as well as PV 3C. As shown in Fig. 6, the 3Dpol sequences from CVB3 and HRV14 and -16 show clear differences relative to PV in the region of the thumb spanning residues 453-457. We established in vitro uridylylation systems for CVB3 and HRV14. The sequences for VPg and oriI and the purified proteins employed are shown in Fig. 7. In all cases, maximal VPg uridylylation activity of the polymerase was observed when the cognate 3C was utilized, regardless of the origin of the VPg and oriI (Fig. 8). For example, CVB3 3D was stimulated by CVB3 3C to a greater extent than by any of the other 3C proteins (Fig. 8B, row 5), even when the oriI and VPg were from PV (Fig. 8A, row 2) or from HRV14 (Fig. 8C, row 8). Similar observations were made for the HRV14 3D and 3C pair (Fig. 8, rows 3, 6, and 9) and the PV 3D and 3C pair (Fig. 8, rows 1, 4, and 7). We predicted that the CVB3 and HRV14 3C proteins would substitute for PV 3C better than HRV16 3C, because the 3Dpol enzymes from CVB3 and HRV14 contained only one substantial difference relative to PV 3Dpol, whereas HRV16 3Dpol contained three differences. This prediction was supported by the data (Fig. 8). These data are all consistent with our previous finding that an important functional link exists between 3Dpol and 3C, perhaps a physical interaction. Physical Interaction between Picornavirus 3C and 3D Proteins-In order to demonstrate a physical interaction between the PV 3C and 3Dpol proteins, we performed glutaraldehyde cross-linking experiments. After cross-linking, cross-linked species were resolved by SDS-PAGE and identified by Western blotting. Inter- and intramolecular cross-links were observed for purified PV 3Dpol (Fig. 9, Ai, lane 1). Cross-linking of PV 3Dpol (52 kDa) in the presence of PV 3C (21 kDa) produced two unique cross-linked bands that were recognized by the PV 3Dpol antibody and migrated at positions consistent with the molecular weights of cross-linked 3C-3Dpol and 3C-3C-3Dpol (Fig. 9, Ai, lane 2). By performing the analogous experiment using a PV 3C antibody (Fig. 9, Aii, lane 2), we demonstrated that these cross-linked species contained 3C. The cross-linking efficiency of PV 3Dpol to 3C proteins correlated with the capacity of the 3C proteins to stimulate VPg uridylylation (compare Fig. 9, Bi, with Fig. 8A). The 3C-3Dpol interaction observed for PV was also observed for CVB3 (Fig. 9, Bii, lane 1) and HRV14 (Fig. 9, Biii, lane 1). In addition, the efficiency of cross-linking for these proteins correlated with uridylylation efficiency (compare Fig. 9B, ii and iii, with Fig. 8, B and C). Together, these results provide compelling evidence for a physical interaction between a 3Dpol monomer and a 3C dimer that is required for VPg uridylylation.

FIGURE 4 (legend fragment). 3C containing a carboxyl-terminal His tag (1 µM) was incubated in reaction buffer at 30 °C for 15 min. Glutaraldehyde (1 mM) was added, and the reaction was incubated at 37 °C for 1 min, quenched with 2× SDS-PAGE dye, and analyzed by using 12.5% SDS-PAGE followed by Western blot analysis. B, cross-linking of "authentic" 3C. The experiment was performed as described above. An 8% gel was employed to resolve the multimers. Lane 1, 3C-His; lane 2, "authentic" 3C.

FIGURE 5. RNA binding by PV 3C measured by using fluorescence polarization. A, 3C binds single-stranded RNA and is sensitive to salt concentration. 3′-Fluorescein-labeled rU10 (FL-rU10) RNA was titrated with 3C in the presence of 0.1 and 10 mM NaCl. Binding was detected as a change in fluorescence polarization (mP) in a Beacon instrument. The data were fit to a hyperbola. B, 3C binds to single-stranded oriI RNA fragments better than to intact oriI. The intact oriI employed is shown. The rectangles on the left and right define the RNA sequences of the "stemL" and "stemR" single-stranded RNAs, respectively. All RNAs were labeled at the 3′-end with fluorescein. Experiments were performed as described above in the presence of 10 mM NaCl. C, 3C binding to SLd is tight. The 5′-end of PV RNA contains a cis-acting element (oriL) that contains a high-affinity binding site for 3CD termed SLd. The sequence and secondary structure are shown on the left. SLd RNA was labeled at the 3′-end with fluorescein. Experiments were performed as described above in the presence of 0.1 and 10 mM NaCl.
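The hyperbolic fits used for the fluorescence polarization titrations (Fig. 5 and Table 2) can be sketched as follows; the data here are synthetic, seeded with a Kd of 1.4 µM to echo the value reported for oriI, and the fitting routine is a generic nonlinear least-squares call rather than the authors' actual analysis software.

# Sketch: extracting an apparent Kd from fluorescence polarization data by
# fitting a hyperbola.  The data below are synthetic, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(c, p0, dp, kd):
    """Polarization (mP) as a function of protein concentration c (uM)."""
    return p0 + dp * c / (kd + c)

c = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])          # [3C], uM
rng = np.random.default_rng(0)
mp = hyperbola(c, 60, 120, 1.4) + rng.normal(0, 2, c.size)     # synthetic mP

(p0, dp, kd), _ = curve_fit(hyperbola, c, mp, p0=(50, 100, 1))
print(f"fitted Kd ~ {kd:.2f} uM")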
DISCUSSION Picornaviruses are positive-stranded RNA viruses with a single-stranded genome. This family of viruses contains numerous human and veterinary pathogens. Picornavirus genomic (plus-strand) and antigenomic (minus-strand) RNAs contain a protein (VPg) linked to the 5′-end. Attachment is thought to occur by using uridylylated VPg (VPg-pUpU) as primer for the virus-encoded RNA-dependent RNA polymerase (3Dpol), an enzyme that is quite inefficient at initiation of RNA synthesis de novo (30). VPg-pUpU can be produced by the polymerase by using a genome-specific RNA stem-loop structure (oriI) (3, 4) or a poly(rA) tail (5). oriI-templated uridylylation of VPg is much more efficient than that observed for poly(rA)-templated uridylylation reactions (3, 4). Substantial genetic and biochemical evidence exists supporting the use of oriI for VPg uridylylation during picornavirus genome replication (3, 4, 6, 8, 11, 12). However, there is some debate regarding the use of VPg-pUpU produced by using oriI for both plus- and minus-strand synthesis (12, 16-18).

FIGURE 7. VPg uridylylation components employed in this study. A, VPg peptides. Only five of the 22-23 amino acids are strictly conserved. One of these is Tyr-3, the nucleophile employed for the first nucleotidyl transfer reaction. B, oriIs. Although structural differences are apparent among the three stem-loop RNAs, the oriIs of enteroviruses and rhinoviruses have been shown to be functionally exchangeable (3, 8, 10, 32), suggesting that structure is not the primary determinant for recognition by the initiation complex. The templating "A" located in the loop is in an enlarged font. C, purified 3C and 3Dpol proteins. SDS-PAGE analysis of 1.5 µg of each 3C and 1 µg of each 3D is shown (left and right polyacrylamide gels, respectively).

Model for Assembly and Organization of the VPg Uridylylation Ribonucleoprotein Complex-The VPg uridylylation reaction has been reconstituted in vitro from purified components (3, 4). However, the structural organization, the corresponding assembly mechanism, and the functional roles of some factors are largely unknown (2). We provided a glimpse into the structural organization several years ago by revealing an interaction between protein 3C(D) and 3Dpol that was required for VPg uridylylation, leading to the model proposed by Pathak et al. (19). The experiments reported here are consistent with the model for VPg uridylylation presented in Fig. 10. A dimer of 3C(D) binds to the stem of oriI in the vicinity of the loop, directed by 3C binding to the intact, double-stranded stem.
This complex isomerizes to separate the strands of the oriI stem, opening the loop to facilitate binding of 3Dpol. 3Dpol is then recruited to and retained on the "open" loop by a protein-protein interaction with one or both subunits of the 3C subdomain of 3CD. Association of VPg with this complex is weak; VPg could either be prebound to the open loop, enter with 3Dpol, or enter afterwards. Specificity Determinants of oriI-oriI has three domains: a loop and upper stem (solid lines in Fig. 10) and a lower stem (dotted lines in Fig. 10). The loop contains adenylate residues critical for templating uridylylation and facilitating processive uridylylation by using a slide-back mechanism (6, 9). The upper stem contains determinants for uridylylation, since mutations in this region reduce uridylylation efficiency (Fig. 2B) (8, 32). Loss- and gain-of-function mutations for uridylylation map to this region (10). An oriI chimera that contains the upper stem from HRV-14 and the lower stem from PV functions in the context of the HRV-14 genome (10). The reciprocal chimera does not support replication (10). These data are consistent with the upper stem providing the specificity for binding to the proteins required for uridylylation and the lower stem serving a more structural role. For example, the number of mispairings leading to misfolded oriI structures would be reduced by adding the lower stem. In addition, the presentation of the stem in the context of the structured genome may be enhanced. 3C(D) Binding to oriI-Our capacity to establish a minimal system for uridylylation that was quite robust by employing only a 29-nt oriI and 3C (Fig. 2B) suggested that 3C interacted with the upper stem. This interaction could be demonstrated directly by footprinting (Fig. 2D). These data are consistent with filter binding data that correlated the integrity of the upper stem with 3CD binding efficiency (32). Importantly, binding of 3C to the stem does not have a substantial effect on the accessibility of the loop (Fig. 2D). On minimal templates, 3C and 3CD function essentially equivalently (Fig. 1C); however, on longer templates, 3CD is superior (Fig. 1C). This difference probably reflects the capacity of 3CD to bind more tightly to oriI than 3C (Fig. 3). We conclude that 3C determines specificity and 3D enhances affinity. Consistent with this observation is the finding that mutations in 3C can change oriI specificity (10). Titration experiments suggested that two molecules of 3CD were required per molecule of oriI and 3Dpol for maximal uridylylation (Fig. 3). Dimers of 3C could be observed in solution (Fig. 4). 3CD was not evaluated in this way, because it is well known that the 3D domain promotes multimerization in solution, confounding the interpretation of any observed cross-linking (43). In support of 3C dimerization is the observation that PV 3C crystallizes as a dimer (34). Why the 3C dimer could not be scored by using the yeast two-hybrid system or cross-linking in the past is unclear (33). We conclude that a dimer of 3C(D) binds to the upper stem of oriI.

FIGURE 9. Evidence for a 3C-3Dpol interaction: physical interaction between 3C-3Dpol pairs determined by cross-linking correlates with VPg uridylylation efficiency. Physical interaction between 3C and 3Dpol proteins was probed by glutaraldehyde cross-linking experiments as described under "Experimental Procedures." A, interaction between PV 3C and 3Dpol. i, cross-linking reactions contained 3Dpol alone (lane 1) or in the presence of 3C-His (lane 2). Cross-linked products were resolved by SDS-PAGE and identified by Western blotting using an antibody against PV 3D. ii, exactly as described in i using an antibody against PV 3C for detection. B, 3C-3Dpol cross-linking efficiency correlates with VPg uridylylation efficiency. A 3C-3Dpol heterodimer can be trapped by cross-linking for PV (i), CVB3 (ii), and HRV14 (iii). Experiments were performed as described above, and the cross-linked product was detected by using the indicated 3Dpol antibody. Shown is the band corresponding to the 3C-3Dpol cross-linked species. Lane 1, cognate 3C-His; lane 2, HRV16 3C-His; lane 3, no 3C. In each case, the use of HRV16 3C-His shows reduced cross-linking, consistent with the inability of this 3C protein to support robust uridylylation for these 3Dpol proteins (Fig. 8).
FIGURE 10 (legend fragment). Step 1, two 3CD molecules bind to oriI with the 3C domains contacting the upper stem (solid lines) and the 3D domains contacting the lower stem (dashed lines). Step 2, the 3C dimer opens the RNA stem by forming a more stable interaction with the single strands forming the stem. Step 3, 3Dpol is recruited to and retained in this complex by a physical interaction between the back of the thumb subdomain of 3Dpol and a surface of one or both 3C subdomains of 3CD.
9,724.4
2007-06-01T00:00:00.000
[ "Biology" ]
Mutations in the melanocortin-1 receptor (MC1R) gene have no influence on the distinct patterns of melanic plumage found in the manakins of the genus Antilophia (Aves: Pipridae) The melanocortin-1 receptor gene is the most widely-used marker for the investigation of the genetic determination of melanic plumage patterns. Studies of a number of wild bird species have shown an association between non-synonymous mutations of the MC1R gene and the presence of melanic variants. The genus Antilophia (Pipridae) includes only two manakin species (A. galeata and A. bokermanni), which are distinguished primarily by the differences in the pattern of melanic coloration of the plumage of the mantle in the adult males. In A. galeata, this plumage is black, while in A. bokermanni, it is predominantly white. This study investigates the possible association between mutations of the MC1R marker and the variation in plumage coloration observed in the two species. The MC1R sequences of the two species were analyzed, and the observed nucleotide variation was compared. Six polymorphic sites were identified, representing seven distinct genotypes. Five of these polymorphic mutations were non-synonymous, but were not related to the different phenotypes. Neutral evolution and the absence of any systematic association between the variants of the MC1R and plumage coloration in the Antilophia species indicate that alternative mechanisms regulate the expression of the coloration of the plumage in the adult males. INTRODUCTION The evolution of plumage coloration in birds may be driven by a range of factors related to both natural and sexual selection, constituting an important speciation mechanism in many taxonomic groups (Dunn et al. 2015). Part of the variation in plumage color within and between species is known to have a strong genetic component (Mundy 2005, Roulin and Ducrest 2013), although the genetic mechanisms involved in this process are still poorly understood, especially in wild species. The melanocortin-1 receptor (MC1R) gene is the most popular genetic marker for the analysis of melanic patterns in bird plumage (Uy et al. 2009, Bourgeois et al. 2012). In most cases, mutations of the MC1R gene that cause an increase in its receptor activity are dominant and are associated with black coloration (Bourgeois et al. 2012, Roulin and Ducrest 2013). However, a growing number of studies have shown that variation in the MC1R sequence may not always be associated with distinct patterns of coloration in birds (Haas et al. 2009, Bourgeois et al. 2012). The genus Antilophia is formed by two species of manakin (Pipridae, Passeriformes). The principal difference between the two species, Antilophia galeata and Antilophia bokermanni, is observed in the coloration of the plumage of the adult males. In A. galeata, the cover feathers are black, whereas in A. bokermanni these feathers are white (Coelho and Silva 1998), contrasting with the black tertial and rectrice flight feathers (Figure 1). Recent studies have shown that the genetic profiles of the two species are virtually indistinguishable (Rêgo et al. 2010, Luna et al. 2017), and that the mixture of genetic features observed in the two species is likely the result of the incomplete separation of the two lineages following a very recent process of diversification (Luna et al. 2017). In this context, a single mutation in the MC1R gene might have resulted in a change in the melanin levels of the feathers (Uy et al. 2009), resulting in a distinct phenotype, which would be fixed rapidly in small populations, such as that of A. bokermanni.
In the present study, we investigated whether possible mutations in the coding region of the MC1R gene are associated with the process that determines the difference in the principal phenotypic characteristic that distinguishes the two Antilophia species. To this end, we attempted to resolve two principal questions: 1) Is the inter- and intra-specific variation in the MC1R sequences associated directly with the different melanic patterns observed in the plumage of the two Antilophia species? and 2) Does the variation in the MC1R nucleotide sequences reflect selection pressures driving the fixation of the non-synonymous mutations that determine the phenotypes of the two species? SAMPLING AND GENETIC SEQUENCING Samples of the tissue of A. bokermanni (n = 16) were obtained from specimens captured in the northeastern portion of the Chapada do Araripe, in the state of Ceará, Brazil (Figure 2). Samples of the sister species A. galeata were obtained from researchers at the Federal University of São Carlos (UFSCar, n = 9) and the University of Brasília (UnB, n = 5). Genetic material was extracted using the Wizard® Genomic DNA Purification kit (Promega) according to the manufacturer's protocol. Part of the MC1R of each specimen was amplified by PCR using primers described by Cheviron et al. (2006). We used standard methods described elsewhere (Carneiro et al. 2012, Luna et al. 2017) to amplify and obtain sequences for MC1R in an ABI 3500 XL automatic sequencer (Applied Biosystems). The alignment and possible correction of the coding of the observed insertions and deletions were run in the BioEdit sequence editor (Hall 1999). All the sequences generated in the present study were deposited in GenBank (www.ncbi.nlm.nih.gov) under accession numbers MG601527 through MG601534 (Table I). GENETIC ANALYSIS The MC1R sequences obtained were aligned with those available in GenBank for the species Lepidothrix coronata (accession numbers DQ388308 and DQ388310), L. serena (DQ388331), and Gallus gallus (NM_001031462). This comparative analysis permitted the identification of variation at nucleotide sites known to be associated with melanic plumage in birds. The genotype network was prepared using the maximum likelihood method in Haploviewer (Salzburger et al. 2011). Fu's Fs test (Fu 1997) was applied to evaluate the distribution of the genotype frequencies in comparison with that expected under a neutral model, while Tajima's D (Tajima 1989) was used to assess the difference between the number of segregating sites and the mean number of nucleotide differences; both were calculated in DnaSP 5.1 (Librado and Rozas 2009). The McDonald-Kreitman test (McDonald and Kreitman 1991) was used to examine the possible occurrence of positive selection, and to identify putative sites of functional importance in the two species (these analyses were also run in DnaSP).
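For orientation, Tajima's D compares the mean pairwise difference (π) with the Watterson estimate S/a1. The paper computed it in DnaSP, so the following compact reimplementation of the Tajima (1989) formula is for illustration only, and the example input values are hypothetical rather than the Antilophia data.

# Sketch: Tajima's D from the number of segregating sites (S) and the mean
# pairwise difference (pi), following Tajima (1989).
import math

def tajimas_d(n, S, pi):
    """n = number of sequences, S = segregating sites, pi = mean pairwise difference."""
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i**2 for i in range(1, n))
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n * n + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1 = c1 / a1
    e2 = c2 / (a1**2 + a2)
    return (pi - S / a1) / math.sqrt(e1 * S + e2 * S * (S - 1))

# Hypothetical example values (not the Antilophia data):
print(tajimas_d(n=16, S=10, pi=1.2))   # negative D: excess of rare variants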
RESULTS AND DISCUSSION No evidence was found of selection or of an association between mutations of the MC1R gene and the melanic variation in Antilophia. This reinforces the findings of previous studies, which found no association between phenotypic differences and variation in the MC1R gene (Cheviron et al. 2006, Haas et al. 2009, Bourgeois et al. 2012), and rejects our first hypothesis, that changes in this coding region may determine the differences in the melanic coloration of A. galeata and A. bokermanni. In the present study, 702 base pairs, corresponding to positions 156-858 of the 945 base pairs of the MC1R sequence of G. gallus (Kerje et al. 2003), were sequenced. Ten sites were polymorphic, providing a total of six MC1R alleles in Antilophia. All the variable MC1R sites known to be associated with melanic patterns were conserved in the Antilophia specimens analyzed here (see the gray-shaded column in Table I; the numbering corresponds to that of the MC1R sequence of G. gallus: Kerje et al. 2003). While non-synonymous mutations were recorded in some specimens, they were not shared by other individuals of the same species. In addition, the fact that the H_1 genotype was found at high frequencies in both species (Figure 3) confirms the lack of any systematic influence of the MC1R on the distinct plumage patterns. These findings indicate that the changes in the MC1R gene may not have resulted in functional changes in the expression of the melanic coloration found in Antilophia. A similar situation was observed by Uy et al. (2009) in the Chestnut-bellied monarch, Monarcha castaneiventris, from the Solomon Islands. In the case of Antilophia, this alternative mechanism may have arisen from changes in the interaction of the MC1R gene with associated regulatory pathways. Alterations in these mechanisms may have resulted in the differential expression of the coloration pattern in the closely-related Antilophia species (Cone et al. 1996, Steiner et al. 2007, Manceau et al. 2011). In the specific case of A. bokermanni, this would account for the restriction of the black pigment to the flight feathers (Figure 1). Due to the intense sexual selection that is characteristic of the manakins, the coloration of the male plumage tends to evolve rapidly and independently, with little convergence among taxa (Prum 1997). In the context of the very recent divergence of the two Antilophia species (Luna et al. 2017), a small population, such as that of A. bokermanni, would be subject to intense selection pressures due to its isolation, which may have contributed to the rapid fixation of a new plumage coloration pattern. Achromatic variation and variation in the brilliance of the plumage may be among the most important attributes determining female mate choice and competition between males (Doucet et al. 2005, Stein and Uy 2006, Laczi et al. 2011). However, it is still necessary to determine the role of these plumage characteristics in the reproductive success of these species, in particular in A. bokermanni. Even so, changes in the pattern of melanic coloration may be associated with a number of different selection pressures. In the present study, the results of the neutrality tests for the MC1R sequences did not contradict the hypothesis that the gene may be evolving in a neutral fashion. The values of Fu's Fs and Tajima's D, calculated for each species, found no evidence of non-neutral variation in either A. bokermanni (D: -1.6965 and Fs: -0.5109; p > 0.05) or A. galeata (D: -1.2782 and Fs: -0.5109; p > 0.05). The result of the McDonald-Kreitman (MK) test did not indicate any evidence of evolutionary adaptation, given that none of the mutations were fixed, even though the majority were non-synonymous with respect to the amino acids encoded. This reinforces the conclusion that the variability found in the MC1R gene was not the result of functional changes, but rather that there has been some differential modification in the regulatory mechanism of its interaction with the melanin receptors it controls in the two Antilophia species. [Table I legend, fragment] ... numbered according to their position in the MC1R sequence of the chicken (G. gallus, accession number NM_001031462).
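The McDonald-Kreitman comparison reduces to a 2×2 table of synonymous versus non-synonymous changes that are fixed versus polymorphic, typically assessed with Fisher's exact test. The sketch below uses hypothetical counts, chosen to mimic the situation reported here in which no mutations were fixed; the paper itself ran the test in DnaSP.

# Sketch: the McDonald-Kreitman framework as a 2x2 contingency test.
# Counts are hypothetical, not the Antilophia data.
from scipy.stats import fisher_exact

#                fixed  polymorphic
table = [[0,     5],    # non-synonymous (no fixed differences, as reported)
         [0,     5]]    # synonymous
odds, p = fisher_exact(table)
print(p)  # with no fixed differences the test cannot detect adaptive evolution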
Figure 2. Geographic distribution of A. bokermanni and A. galeata. The geographic ranges of the two species were adapted from NatureServe.
Figure 3. Genotype network generated from the MC1R sequences obtained in the present study for A. bokermanni and A. galeata. The genotypes are represented by circles, the size of which is proportional to their frequency. White shading represents the samples of A. bokermanni and black shading the samples of A. galeata.
2,465.4
2018-07-01T00:00:00.000
[ "Biology" ]
Time-Resolved 2PPE and Time-Resolved PEEM as a Probe of LSPs in Silver Nanoparticles The time-resolved two-photon photoemission technique (TR-2PPE) has been applied to study static and dynamic properties of localized surface plasmons (LSP) in silver nanoparticles. Laterally integrated measurements show the difference between LSP excitation and nonresonant single electron-hole pair creation. Studies below the optical diffraction limit were performed with the detection method of time-resolved photoemission electron microscopy (TR-PEEM). This microscopy technique, with a resolution down to 40 nm, enables a systematic study of retardation effects across single nanoparticles. In addition, as will be shown in this paper, it is a highly sensitive sensor for coupling effects between nanoparticles. INTRODUCTION The interaction of light with metal nanoparticles has attracted considerable attention for many centuries. One of the most impressive examples of their striking colourful properties is the artistic adoption of light scattering and absorption by small noble-metal particles in the church windows of Marc Chagall. In modern science, the optical properties of nanostructures have become a topic of high interest for fundamental physics as well as for technical applications. Silver and gold nanoparticles with typical sizes of 5 nm to 150 nm can exhibit particularly strong optical extinction in the visible spectral range due to resonantly driven electron plasma oscillations, termed localized surface plasmons (LSP). The resonance energy of the LSP depends critically on the size and shape of the particle as well as on the materials of the particle and the embedding environment [1][2][3]. This enables spectral tuning of the resonance, an ability which is of considerable interest in the context of future electronic and optical device applications. Due to the rapid advances in the fabrication of small particles [4] and nanowires [5], their optical properties are now used in a wide range of applications, including biosensors [6, 7], near-field microscopy [8], and new optical devices [9][10][11]. Furthermore, since plasmons are associated with large electromagnetic fields near the particle surface, they play an important role in surface-enhanced Raman scattering (SERS) [12], second harmonic generation [13, 14], and multiphoton photoemission [15][16][17][18][19]. The limiting factor for applications is the energy loss of the collective electron oscillation due to the damping of the LSP, which is manifested in the plasmon decay time τp1 [20]. The fundamental microscopic mechanisms of collective electron excitations in small particles, as well as their decay, are still far from being completely understood. As a pioneer in this field, Gustav Mie developed a first theory based on Maxwell's equations to explain the optical properties of spherical nanoparticles. Mie's theory readily describes the red shifts and the lifetime broadening of the dipole plasmon resonance as the particle size is increased. It also explains the appearance of resonance contributions of higher multipolar order [1]. However, this theory is strictly valid only for single particles with a spherical geometry. Therefore, during the last decades, many theoretical studies have focused on the properties of LSP in nanostructures of different shapes in order to gain insight, for example, into their optical response, the field distribution of the resonant modes, the relevant decay channels, and the coupling between neighbouring particles [21][22][23].
A simple oscillator model describing the interaction of a light field and a nanoparticle can be discussed as follows [24, 26]. The light field couples occupied and unoccupied single electron states which are separated by the photon energy hν. The induced polarizations of these different, coherently coupled transitions superpose to a macroscopic polarization which represents the collective response of the electronic system. This induced polarization field adds to the incident light field and causes a modification of the particle internal field (Figure 1). The relation between internal field and light field is described by the frequency-dependent field enhancement factor f(ω) [15]. Figure 2 displays the phase shift φ(ω) of the induced polarization field with respect to the light field and the amplitude of the field enhancement factor f(ω). For frequencies below the resonance peak, the internal field is small because the π-shifted polarization field adds destructively to the light field. While passing through the resonance frequency, the polarization response undergoes a phase shift by π. The extraordinary field enhancement at λ pl (corresponding to the LSP resonance) is determined by the resonant response of the electron collective to the light field, which adds up to an extremely large polarization field. Finally, in the short wavelength regime, the amplitude of the polarization field decreases, as the electron collective is now too inert to follow the oscillating light field. The damping of the plasmon excitation is basically governed by two different decay channels (see Figure 3). First, the plasmon energy can be returned coherently to the external radiation field (radiation damping), as the oscillating polarization field must emit electromagnetic radiation. Within optical far field theory, this decay channel corresponds to the elastic scattering of the incident exciting light. The signals exploited by purely optical far field detection techniques, such as second harmonic generation [13,14] and extinction spectroscopy [25], are due to the coupling to this radiation damping channel. Furthermore, the decay of a plasmon is possible by the creation of electron-hole pairs and a subsequent transfer of energy to the internal degrees of freedom inside the particle (internal damping). This process results in a complete loss of coherence with the exciting light field. In the far field, this damping channel is recognized as absorption. The involvement of single electron excitations in this process suggests that electron emission techniques such as photoemission may also be useful as probes of plasmonic properties. In this paper, we demonstrate that two-photon photoemission (2PPE) in particular is highly sensitive to plasmon excitations in metallic nanoparticles. A striking example is the study of particle shape characteristics of the plasmon damping in elliptical nanoparticles as probed by means of time-resolved 2PPE. Furthermore, we show that 2PPE in combination with the photoemission electron microscopy technique (PEEM) allows mapping of local near field variations associated with plasmonic excitations with subdiffraction (<40 nm) resolution. In contrast to Cinchetti et al. [27], who first investigated LSP excitation in special moon-like tapered silver nanoparticles (around 400 nm) on silicon, we concentrate on arrays of smaller particles of different shape. Observed effects that will be discussed are the field retardation in large nanoparticles and the plasmon-governed coupling of neighbouring nanoparticles.
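This driven-oscillator picture is straightforward to reproduce numerically. The following minimal sketch, assuming a Lorentzian response with made-up resonance and damping values rather than parameters fitted to silver, evaluates the complex enhancement factor f(ω) and shows the π change of the polarization phase across the resonance discussed above.

```python
import numpy as np

# Minimal driven-oscillator sketch of the particle response (values illustrative).
# The internal field is E_int = f(w) * E_0 with a Lorentzian enhancement factor.
w = np.linspace(0.5, 6.0, 1000)        # driving frequency (arbitrary units, ~eV)
w0, gamma, amp = 3.1, 0.3, 1.0         # resonance, damping, coupling (assumed)

f = amp * w0**2 / (w0**2 - w**2 - 1j * gamma * w)  # complex enhancement factor f(w)
enhancement = np.abs(f)                # |f(w)|, peaks at the LSP resonance
phase = np.angle(f)                    # sweeps by pi while crossing the resonance

# The pi phase change across the resonance is what produces constructive addition
# of light and polarization fields on one side and destructive on the other
# (cf. Figure 2), with the inertia-limited response in the short-wavelength regime.
print(f"max |f| = {enhancement.max():.2f} at w = {w[np.argmax(enhancement)]:.2f}")
```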
EXPERIMENTAL Figure 4 shows the basic scheme of the time-resolved two-photon photoemission technique applied to a metal surface [28]. A first ultrashort laser pulse (pump) interacting with the electronic subsystem at a given time t 0 populates an intermediate excited electron state E I below the vacuum level. A second laser pulse (probe) incident at the time t 1 couples this excited state population to a detection state above the vacuum level, where it is addressed by an electron-sensitive detector such as an electron energy analyzer or a photoemission electron microscope. A successive and controlled increment of the time delay between both pulses enables recording of the time evolution of the depopulation of the intermediate state. For the electron gas of a metal, the depopulation is governed by inelastic electron-electron scattering processes and is characterized by the inelastic lifetime τ ee . A typical experimental time-resolved 2PPE trace as a function of the time delay between cross-polarized pump and probe pulses is shown in Figure 5. The shape of this cross-correlation trace is a convolution of the two laser pulses and the exponential decay of the probed intermediate state E I determined by the inelastic lifetime τ ee . A deconvolution of τ ee can be performed by fitting simulated correlation traces to this data set. For bulk electron excitations, these simulations are performed within a rate equation model which corresponds to the solution of the Liouville-von Neumann equations of a three-level system within the density matrix formalism in the limit of rapid dephasing [29][30][31]. For qualitative statements on τ ee and for comparative studies it is, however, often sufficient to analyse the broadening (ΔFWHM) of the cross-correlation trace, which increases linearly with the lifetime of the intermediate state τ ee (ΔFWHM ≈ a·τ ee , with a ≈ 1.13). So far, only single electron states have been considered for the description of the 2PPE process. In the following, we will discuss to what extent the 2PPE process is also sensitive to the collective electron excitations in nanoparticles (LSP). 2PPE is a second-order process and, therefore, the measured electron yield is proportional to the fourth power of the electric field (j 2PPE (r) ∝ |E int (r)|⁴) acting on the electrons. In the case of plasmon resonant excitation of a nanosized particle, this (particle internal) field is determined by the local field enhancement f(ω) as governed by the properties of the LSP. It is this relation which makes two-photon photoemission a versatile tool in the investigation of plasmonic excitations. Later, we will see that besides a high field enhancement, an efficient transfer of energy from the LSP resonance to the single electron excitation spectrum is a necessary condition for generating a high photoemission signal.
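To see why the FWHM of the correlation trace is a usable proxy for τ ee, the rate-equation limit can be simulated directly. The sketch below is not the authors' fitting code; the 20 fs pulse width matches the setup described later, all other values are illustrative. It builds the population of the intermediate state as a convolution of the pump pulse with an exponential decay, lets the probe sample it, and reads off the broadening of the symmetrized trace.

```python
import numpy as np

# Sketch (illustrative values): broadening of a 2PPE cross-correlation by the
# intermediate-state lifetime tau_ee in the rapid-dephasing (rate-equation) limit.
dt = 0.05
t = np.arange(-200.0, 200.0, dt)                    # time axis in fs
sigma = 20.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # 20 fs FWHM pulses
pulse = np.exp(-t**2 / (2.0 * sigma**2))            # intensity envelope of one pulse

def trace_fwhm(tau_ee):
    """FWHM (fs) of the symmetrized pump-probe correlation for a given lifetime."""
    decay = np.where(t >= 0.0, np.exp(-t / tau_ee), 0.0)
    pop = np.convolve(pulse, decay, mode="same") * dt   # excited-state population
    trace = np.correlate(pop, pulse, mode="same")       # probe scans the population
    trace = trace + trace[::-1]                         # pump and probe interchangeable
    above = np.nonzero(trace >= 0.5 * trace.max())[0]
    return (above[-1] - above[0]) * dt

f0 = trace_fwhm(0.01)                                   # near-zero-lifetime reference
for tau in (2.0, 5.0, 10.0):
    print(f"tau_ee = {tau:4.1f} fs -> Delta FWHM = {trace_fwhm(tau) - f0:5.2f} fs")
```

For lifetimes short compared to the pulse duration, the broadening comes out nearly linear in τ ee with a slope of order unity, consistent with the ΔFWHM ≈ a·τ ee relation quoted above.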
The participation of the plasmon excitation in the 2PPE process also has direct consequences for the shape of the cross-correlation trace obtained from the time-resolved experiments. Besides τ ee , the inelastic lifetime of the single electron excitation, the LSP lifetime τ LSP also has to be considered for a correct deconvolution. A correct quantitative deconvolution of these two quantities from the cross-correlation trace is a rather complex task, as has been shown, for instance, in reference [24]. However, as we will see in the following, careful use of the different experimental degrees of freedom already enables interesting qualitative statements about LSP and single electron dynamics, again by considering the FWHM of the cross-correlation only. The setup of the time-resolved 2PPE experiments used for our studies is shown in Figure 6. Pump and probe laser pulses are delivered from the frequency-doubled output (photon energy of 3.1 eV) of a femtosecond Ti:Sapphire laser system (repetition frequency 80 MHz, temporal pulse width 20 femtoseconds). A Mach-Zehnder interferometer allows adjustment of the difference in optical path between pump and probe pulse with an accuracy of better than 100 nm, corresponding to a timing accuracy of <0.3 femtosecond. The collinear pulse pair is then focussed onto the sample surface and excites the electrons in a 2PPE process; the electrons are subsequently detected by a suitable detection unit. For our studies, two different types of electron detectors have been used. Spectroscopic measurements have been performed with a cylindrical sector electron energy analyzer (Focus CSA 200) with an energy resolution of better than 100 meV. For the plasmonic systems under consideration, it enables us to investigate the energy dependence of electron-hole pair excitation subsequent to an LSP decay. However, this detector does not provide any spatial resolution. The second detector used for our studies is an electrostatic photoemission electron microscope (Focus IS-PEEM), which is described in detail elsewhere [32,33]. The lateral resolution of this instrument is better than 40 nm and enables us to focus even on a single nanoparticle. For this experiment, a mercury vapour UV source (high energy cutoff 4.9 eV) is available in addition to the femtosecond laser. It allows imaging of the surface by linear photoemission near the work function threshold, which is located at about 4.5 eV for silver.
The investigated samples have been prepared by electron beam lithography in a lift-off process. This technique enables a controlled and flexible design of metal nanoparticles. The shape and size of the particles used in our studies were tuned such that their LSP resonance frequencies match the experimental setup conditions of the laser system. Figure 7 shows SEM images of the different silver nanostructures deposited on ITO-covered glass substrates as they are used in this study. The dimensions of the elliptically shaped silver nanoparticles in Figure 7(a) are 140 nm (long axis), 60 nm (short axis), and 50 nm (height). They constitute a versatile sample for the investigation of the dependence of the LSP lifetime on resonant or off-resonant excitation. The silver nanodot array (Figure 7(b); diameter: 200 nm, height: 50 nm) as well as the silver nanowire array (Figure 7(c); length: 1.6 μm, width: 60 nm, height: 50 nm) will be used in the time-resolved PEEM experiments to map retardation effects associated with a plasmon excitation at nanometer resolution. Studies of the plasmon-induced particle-particle coupling are possible with nanodot pairs of varying centre-to-centre spacing. Figure 7(d) shows an example of 50 nm dimers (height: 40 nm) at an interparticle spacing of 130 nm (grating constant: 740 nm). RESULTS AND DISCUSSION Figure 8 shows measured (black line) and calculated extinction spectra of the array of elliptically shaped silver nanoparticles (Figure 7(a)). The experiments were performed at normal light incidence using unpolarized light. The calculations are based on a numerical model described in [34]. We identify three different resonances at 431 nm, 450 nm, and 795 nm corresponding to plasmon excitations along the z-axis, the in-plane short axis, and the in-plane long axis, respectively (see Figure 8 for details). However, the experimental configuration (perpendicular light incidence) allows coupling to the in-plane resonances only. The resonance energies of these two modes are almost perfectly reproduced by the calculations, whereas the broadening of the resonances is somewhat underestimated. This indicates the presence of damping mechanisms in the nanoparticles which are not taken into account in the simulation, for example, the interaction between particle and substrate and an enhanced internal damping due to a finite defect density in the particle itself [16]. The 400 nm laser light used for the TR-2PPE experiment couples almost resonantly to the in-plane short-axis mode of the particle. In contrast, far off-resonant conditions are given for an excitation of the long-axis mode. At perpendicular incidence of the laser light, a rotation of the polarization vector of the light enables switching between these resonant and off-resonant excitation conditions. Off-resonant excitation occurs for a polarization vector (electric field) oriented along the long axis, since that resonance lies at 795 nm; resonant excitation occurs for an orientation along the in-plane short axis.
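The expected polarization dependence of the yield can be estimated with a toy model. In the sketch below, the incident field is projected onto the two in-plane axes and weighted with assumed enhancement factors (f_long and f_short are illustrative, not fitted to the sample); the yield then follows the fourth power of the internal field.

```python
import numpy as np

# Illustrative model of the polarization dependence of the 2PPE yield from an
# elliptical particle: project the incident field onto the two in-plane axes,
# weight each axis with its own (assumed) enhancement, take |E_int|^4.
theta = np.radians(np.arange(0, 181, 5))   # polarization angle vs the long axis
f_long, f_short = 1.5, 6.0                 # off-resonant vs resonant factors (assumed)

E_int_sq = (f_long * np.cos(theta))**2 + (f_short * np.sin(theta))**2
yield_2ppe = E_int_sq**2                   # j_2PPE is proportional to |E_int|^4

contrast = yield_2ppe.max() / yield_2ppe.min()
print(f"yield maximum along the short axis; max/min contrast ~ {contrast:.0f}")
```

This reproduces the yield maxima along the short axis seen in Figure 9; it does not, of course, capture the unexplained 30° phase shift between yield and FWHM discussed next.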
The difference between off-resonant and resonant excitation becomes evident from the polarization dependence of the photoemission current, which is a sensitive probe of the field enhancement as discussed above. The black line in Figure 9 shows the measured 2PPE yield as a function of the polarization angle of the incident laser light. We observe a clear variation in the yield, where the yield maxima and minima correspond to orientations of the electric field vector along the short and long axis, respectively. The same periodicity is observed if time-resolved 2PPE data are considered. The red line in Figure 9 displays the variation of the FWHM of the cross-correlation traces as obtained from sech² fits as a function of polarization angle. As discussed above, differences in the FWHM of the correlation trace are an indirect measure of the lifetime of the LSP. For resonant (short-axis) excitation, we measure a maximum in the LSP lifetime; for completely off-resonant excitation, the LSP lifetime is minimal. Interestingly, the variations of yield and FWHM with polarization angle show a fixed phase shift of about 30°. This observation indicates that the excitation conditions resulting in the highest photoemission yield do not coincide with the conditions resulting in the maximum LSP lifetimes. The reason for this unexpected mismatch is not yet clear. TR-2PPE experiments at varying intermediate state energies E − E F of the single electron excitations allow the different origins of τ ee and τ LSP to be highlighted. For metals, τ ee exhibits a characteristic energy dependence, as has been shown in the past in several theoretical and experimental works [24,[35][36][37]. As an example, Figure 10(b) shows lifetime results obtained by TR-2PPE of single electron excitations in a polycrystalline silver sample. τ ee increases monotonically as the intermediate state energy decreases. In contrast, the lifetime of the plasmon τ LSP should be independent of E − E F . Figure 10(a) shows the FWHM of the cross-correlation curves at varying intermediate state energies E − E F measured for the elliptical nanoparticles under resonant and off-resonant conditions. Both curves exhibit an energy dependence characteristic of the single electron decay τ ee in silver. At the same time, the resonant and off-resonant curves keep a constant displacement along the abscissa (time axis) representing the energy-independent broadening caused by the LSP decay. The offset is about 3 femtoseconds and is of the same order of magnitude as the plasmon decay time determined from a line width analysis of the optical extinction spectrum (1/Γ ≈ 2 femtoseconds). In order to study local variations in the electron dynamics on nanometer scales, a technique capable of high lateral resolution is required, such as photoemission electron microscopy. By combining the high temporal resolution of the time-resolved 2PPE technique and the high lateral resolution of the PEEM, we succeeded in mapping local variations in the LSP dynamics even within a single nanoparticle. Figures 11(a) and 11(b) show PEEM images of a 2D array of silver nanodots (diameter: 200 nm, height: 50 nm, grating constant: 650 nm) recorded with a mercury vapour lamp in 1PPE and with the second harmonic of the laser in 2PPE, respectively.
The homogeneous response of the nanoparticle array to the UV excitation, as visible in Figure 11(a), testifies to the accuracy of the lithography process. In contrast, the 2P-PEEM image (Figure 11(b)) exhibits a distinct brightness variation among the particles, pointing in the first instance to considerable variations in the LSP excitation conditions. However, a detailed analysis of the data and the comparison of images taken at different excitation wavelengths show that these inhomogeneities are caused by the internal defect structure of the different particles rather than by differences in the collective electron response [16]. For further information about the fundamental behaviour of the LSP, we plan to perform energy resolved measurements as, for example, done by Cinchetti et al. by combining the time-of-flight technique with PEEM [38]. Figure 11(c) shows the result from a time-resolved PEEM scan of the identical area of the sample. In this depiction, the colour-coded FWHM value of the cross-correlation trace of every image pixel is plotted as a measure of the local femtosecond dynamics. This lifetime map allows, in an intuitive way, the identification of local variations in the ultrafast response between the particles as well as within a single particle itself. The quantitative analysis of a selected particle out of the array is displayed in Figure 12. The open squares correspond to the measured FWHM of the TR-2PPE correlation curve along a section through the centre of the particle. As a guide to the eye, the corresponding 2PPE yield is plotted, roughly reproducing the particle profile. Surprisingly, FWHM trace and profile do not match each other; the FWHM trace is shifted to the left with respect to the 2PPE yield trace. It seems as if the LSP lifetime varies considerably across the particle. This view is confirmed by the data plotted in Figure 12(b). Here, sections of three cross-correlation traces from the particle are displayed together with an exponential fit on a logarithmic scale. For reference, the corresponding locations within the particle are shown in the inset. Quantitatively, we find in this area a systematic local variation in the ultrafast electron dynamics of more than 10 femtoseconds. This observation runs against the intuition that the LSP, as a collective excitation of the electron gas, is a global and characteristic property of the entire nanoparticle. In locally probing the LSP excitation with the external field, however, we create a situation of laterally varying interference conditions between external and internal fields, quite similar to the explanation given by Meyer zu Heringdorf et al. [39] for the observation of stationary emission maxima along self-assembled nanowires.
In their study, they image silver wires of varying length by 2PPE induced by the second harmonic of a Ti:Sa oscillator in PEEM, using a setup very similar to our experiment. In the PEEM images, emission maxima occur along the wire whose positions do not change with wire length. The authors explain the occurrence as well as the position of the emission maxima by taking into account interference between a surface plasmon wave inside the wire and the external light field, which is incident at an oblique angle of 74° with respect to the surface normal. Since the surface plasmon wave inside the wire propagates at a lower velocity than the light field, the phase between the two fields changes along the wire, forming a stationary beating pattern of alternating constructive and destructive interference. In our case, the situation is similar to the extent that we observe photoemission from local fields which result from the superposition of the external light field and the polarization field within the nanoparticle. Since our particles are much smaller than the wires discussed above, we do not see a considerable variation in the photoemission yield: the second emission maximum would occur only about 2.5 μm away, which is already far outside the particle. What we do observe, however, are traces of a locally varying phase between light and plasmon fields. At very high temporal resolution, one finds a lateral variation of the fringe patterns in phase-resolved 2PPE correlation traces which clearly shows the varying phase difference between the two fields across the particle [17]. The data presented here were collected with a less elaborate setup which is not capable of resolving the fringe patterns. However, the overall shape of the phase-averaged correlation traces is influenced by the varying phase between light and plasmon field and changes systematically across the particle, which in turn results in a systematic variation of the best-fit sech² FWHM parameter. The colour-coded display of the FWHM in the rightmost image of Figure 11 shows that it is indeed a systematic effect, since each of the imaged nanoparticles shows a distinct red-blue contrast from left to right. To put it differently, what we observe here is a result of electromagnetic retardation. Since the particles are not far smaller than the wavelength of the exciting light (in fact, they are about half a wavelength wide), not every position inside the particle is excited with the same phase. Such retardation effects are expected to be even more pronounced for structures elongated along the direction of incidence of the laser light, as was already shown in the abovementioned study by Meyer zu Heringdorf et al. [39]. Figure 13 shows 1P-PEEM and 2P-PEEM images and the corresponding lifetime map of the nanowire array introduced in Figure 7(c).
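The length scale of this beating pattern follows from a one-line phase-matching argument: the surface wave and the projected light field accumulate a 2π phase difference over one beating period. The sketch below evaluates it for illustrative numbers, the 74° incidence of the study cited above and an assumed effective plasmon index chosen to land near the ~2.5 μm scale quoted in the text.

```python
import numpy as np

# Back-of-the-envelope estimate (assumed effective index) of the plasmon-light
# beating period: k_sp = n_eff * k0 for the surface wave, k_x = k0 * sin(theta)
# for the light field projected onto the surface.
lam0 = 0.4               # excitation wavelength in um (400 nm)
theta = np.radians(74)   # oblique incidence, as in Meyer zu Heringdorf et al.
n_eff = 1.12             # assumed effective index of the surface plasmon wave

k0 = 2 * np.pi / lam0
period = 2 * np.pi / abs(n_eff * k0 - k0 * np.sin(theta))
print(f"beating period ~ {period:.1f} um")   # ~2.5 um for these assumed values
```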
The nanowires as mapped in the 1P-PEEM image show clear internal intensity variations, indicating a structural inhomogeneity of the nanowires. The 2PPE image also shows distinct brightness variations along the wires. Note that for the 2PPE measurements, the nanowires have been aligned perpendicular to the direction of incidence of the exciting light. Strikingly, the endings of the wires always exhibit the highest photoemission yield. In addition, most of the wires show three further emission maxima. Local variations are also reproduced by the lifetime map of the nanowire array (Figure 13(c)). The FWHMs of the cross-correlation traces vary along the wire on a scale of about 100 nm, a value which corresponds roughly to the width of the wires. Perpendicular to the long wire axis, the FWHM variations are very similar to those observed for the nanodot array in Figure 11. We conclude that the present sample does not actually consist of homogeneous nanowires. Instead, during the preparation process, the wires obviously decomposed into conglomerates of small silver particles. The brightness variations originate from LSP excitations in the individual small islands as well as from variations in the defect density along the wire. This result is in good agreement with the findings of Kubo et al. [40] and Chelaru et al. [41]. In this context, the observed FWHM characteristics can be interpreted in terms of the model introduced before. These findings are corroborated by SNOM measurements of Ditlbacher et al. [42], who could show that propagating modes leading to standing-wave patterns are only supported in single crystalline nanowires. Finally, we would like to discuss the coherent coupling of LSP modes excited in two neighbouring particles by the same external laser field. The local interference conditions determining this coupling depend on the relative phase of the contributing LSP fields and, hence, on the interparticle distance and the angle of light incidence. Here, we investigate how secondary fields from neighbouring particles in an array of particle dimers modulate the local 2PPE signal. Figure 14 shows 1P-PEEM (Figure 14(a)) and 2P-PEEM (Figure 14(b)) images of silver particle pairs (diameter: 50 nm, height: 40 nm, grating constant: 740 nm) with a centre-to-centre distance of 130 nm. The 2PPE data were collected under resonance conditions with respect to the particle LSP at a photon energy of 3.1 eV (400 nm, p-polarized light along the dimer axis).
In the 1P-PEEM image, the individual particles are clearly resolved. For the 2P-PEEM image, we again observe a random local variation of the photoemission signal, as observed before for the nanodot sample. We assign these inhomogeneities to particle-to-particle variations in the defect density (cf. Figure 11(b)). To still be able to deduce the systematic local variations arising from the particle-particle coupling, we performed a statistical analysis of the photoemission yield from the particle dimers. The average brightness values in a 7 × 7 pixel region of interest centred on the left and on the right particle within each pair are extracted from the image. In a further step, we calculate the relative count rate difference between left and right particle, ΔI rel = (I r − I l )/I l . Figure 15(a) shows the frequency distribution of ΔI rel over the sample area. A positive value of ΔI rel corresponds to the situation that the right particle is brighter than the left particle. The histogram gives evidence that, on average, the right particle, which is located towards the direction of light incidence, shows a 28% yield enhancement in comparison to the left particle. Calculations based on a dipole model can qualitatively reproduce this observation. Further measurements have been conducted for particle dimers at varying intrapair distances between 100 nm and 140 nm (Figure 15). For all particle pairs, we observe an asymmetry in the photoemission yield exhibiting a positive ΔI value. However, as a function of the particle distance, we also observe a variation in ΔI with a flat local maximum at about 120 nm. In general, for excitation at a fixed wavelength (λ = 400 nm), a periodic distance dependence is expected, since the conditions for constructive and destructive interference of the particle fields at the location of the right particle will change alternately with increasing distance. The periodicity should be of the order of the wavelength. At sufficiently large distances, the differences in the particle yields should disappear due to the decrease of the particle field strength with distance. The experimental data shown in Figure 15(b) display a section out of the periodic coupling modulation close to an oscillation maximum. It is noteworthy that this coupling effect could be mapped on the basis of a statistical analysis of the particle array. This approach obviously delivers very reproducible and significant results, even though the photoemission signal is considerably blurred by sample inhomogeneities. The 2PPE yield analysis gives direct evidence for the dipole-induced coupling between neighbouring particles. CONCLUSIONS Time-resolved 2PPE is a well-established method to investigate the relaxation dynamics of optically excited electrons. In contrast to purely optical methods, 2PPE directly addresses the electronic system and is therefore well suited to investigate the complex interplay between collective and single electron excitations on a microscopic level. Using specially designed elliptically shaped nanoparticles, switching between resonant and off-resonant excitation conditions is possible by rotating the polarization vector of the perpendicularly incident light. The presented laterally integrated time-resolved 2PPE results allow extraction of the lifetime of the collective electron oscillation. The results presented in this context confirm the model developed by Merschdorf et al. [24], which treats the plasmon resonance as a modification of the internal electric field with respect to amplitude, phase, and temporal structure.
The combination of TR-2PPE and a photoemission electron microscope permits a spatial resolution far below the optical diffraction limit as well as a femtosecond time resolution. The direct imaging gives access to the spatiotemporal dynamics of the plasmon resonance-enhanced electric fields in and around metal nanostructures. In comparison to other microscopy techniques such as SNOM, the PEEM does not need to scan the sample surface, enabling parallel data acquisition. The presented data underline the possibilities of TR-PEEM in visualising the ultrafast dynamics of energy flow through nanoscopic devices. Investigations of the near fields around single particles, nanowires, and particle pairs were discussed. In the case of large single particles, direct observation of the phase propagation of a plasmon mode through an extended nanoparticle was demonstrated. In polycrystalline nanowires, the method was used to identify the observed patterning as caused by structural defects. Finally, an LSP field energy distribution influenced by the coupling in particle pair structures with a centre-to-centre distance of 130 nm was demonstrated. This shows the possibility of channelling light through resonant metallic nanostructures into areas well below the diffraction limit. In conclusion, we would like to stress that TR-PEEM as a characterization tool for metal nanostructures shows the potential to become a key technique in the field of nano-optics. Figure 1: Influence of a LSP resonance on the amplitude and duration of the laser pulse. Figure 8: Measured and calculated extinction spectra of the investigated array of elliptical nanoparticles (see Figure 7). Figure 9: TR-2PPE measurement of the 2D nanoparticle array with perpendicular light incidence, showing the 2PPE count rate and the FWHM of the corresponding cross-correlation trace with respect to the polarization angle (measured relative to the direction of the long axis). Figure 10: (a) Measured FWHM on a 2D silver nanoparticle array of elliptically shaped particles; (b) inelastic lifetime τ ee of excited electrons measured on a polycrystalline silver bulk crystal. Figure 11: PEEM images of the investigated silver nanodot array (diameter: 200 nm, height: 50 nm, grating constant: 650 nm): (a) PEEM image taken with the mercury vapour lamp (4.9 eV); (b) 2P-PEEM image of the same sample excited with a photon energy of 3.1 eV; (c) corresponding lifetime map. Figure 12: (a) 2PPE yield and FWHM (calculated with a sech² fit) across a single particle; (b) correlation traces at three different points at the right edge of the particle with a time constant fitted to the exponential decay. Figure 13: PEEM images of the investigated silver nanowires with a length of 1.6 μm: (a) 1P-PEEM image taken with the mercury vapour lamp; (b) 2P-PEEM image excited with a femtosecond laser; (c) corresponding TR-PEEM lifetime map. Figure 15: (a) Distribution of the relative count rate difference between two neighbouring particles with a distance of 130 nm; (b) dependence of the relative count rate difference on the distance between the particles of a pair.
7,068.6
2008-09-28T00:00:00.000
[ "Materials Science", "Physics" ]
Identification of the functional pathways altered by placental cell exposure to high glucose: lessons from the transcript and metabolite interactome The specific consequences of hyperglycaemia on placental metabolism and function are incompletely understood but likely contribute to the poor pregnancy outcomes associated with diabetes mellitus (DM). This study aimed to identify the functional biochemical pathways perturbed by placental exposure to high glucose levels through integrative analysis of the trophoblast transcriptome and metabolome. The human trophoblast cell line, BeWo, was cultured in 5 or 25 mM glucose, as a model of the placenta in DM. Transcriptomic analysis using microarrays demonstrated 5632 differentially expressed gene transcripts (fold change (FC) ≥ ±1.3) following exposure to high glucose. These genes were used to generate interactome models of the transcript response using BioGRID (non-inferred network: 2500 nodes (genes) and 10541 protein-protein interactions). Ultra-performance liquid chromatography-mass spectrometry (MS) and gas chromatography-MS analysis of intracellular extracts and culture medium were used to assess the response of metabolite profiles to high glucose concentration. The interactions of altered genes and metabolites were assessed using the MetScape interactome database, resulting in an integrated model of the systemic transcriptome (2969 genes) and metabolome (41 metabolites) response within placental cells exposed to high glucose. The functional pathways which demonstrated significant change in response to high glucose included fatty acid β-oxidation, phospholipid metabolism, and phosphatidylinositol phosphate signalling. Supplementary Tables. Suppl. Table 1: Gene changes identified by microarray analysis. BeWo cells were cultured in 5 mM glucose for 24 h before being switched to culture in either 5 mM or 25 mM glucose for 48 h (n=6). RNA was pooled from BeWo cultured in 5 mM or 25 mM glucose and analysed using Affymetrix exon arrays. Analysis was performed in Bioconductor. The signal intensity of each transcript for each glucose condition is shown. The ratio and fold change in signal intensity of each transcript in BeWo cells cultured in 25 mM compared to 5 mM glucose are displayed. Suppl. Table 2: Biological functions of modules generated using the ModuLand algorithm on interactome networks of the BeWo cell response to culture in 25 mM compared to 5 mM glucose. The numbers of significant modules generated from both inferred and non-inferred interactome networks are shown. Genes from each of the significant modules were entered into Reactome to identify possible biological functions associated with these genes. The biological functions of the four modules with the most significant p value from each interactome network are shown. Suppl. Table 3: Biological functions of modules generated using the ClusterOne algorithm on the interactome networks of the BeWo cell response to culture in 25 mM compared to 5 mM glucose. The numbers of significant modules generated from both inferred and non-inferred interactome networks are shown. Genes from each of the significant modules were entered into Reactome to identify possible biological functions associated with these genes. The biological functions of the four modules with the most significant p value from each interactome network are shown. BeWo cells were cultured overnight and then switched to 5 mM or 25 mM glucose conditions, and culture media were collected following a further 48 h.
Samples were analysed using UPLC-MS, and putatively named metabolite features that were significantly differentially present in BeWo cell conditioned media following culture in 25 mM compared to 5 mM glucose are shown. Samples were analysed using UPLC-MS, and putatively named metabolite features that were significantly differentially present in BeWo cells cultured in 25 mM compared to 5 mM D-glucose are shown (Kruskal-Wallis; p ≤ 0.01). The chromatography system was coupled to an electrospray LTQ-Orbitrap hybrid mass spectrometer (ThermoFisher Scientific, Hemel Hempstead, UK). A Hypersil GOLD column (100 x 2.1 mm, 1.9 µm; ThermoFisher Scientific, Runcorn, UK) with a column temperature of 50°C was used. The mass spectrometer was used in negative electrospray (ES-) and positive (ES+) ion modes separately; thus each sample was analysed twice, once in each ion mode. Two solvents were applied to the samples during the run in each ion mode. The solvents (A and B) contained 0.1% formic acid in water (vol/vol) and 0.1% formic acid in methanol (vol/vol), respectively. Both solvents were applied at a flow rate of 400 µL/min. Solvent A was held at 100% for 0.5 min, followed by a linear increase to 100% solvent B, which was then held at 100% for another 15.5 min. At 20.5 min a step change to 100% solvent A was performed and held for 1.5 min to equilibrate. Column eluent from the first 90 seconds was transferred to waste, after which it was transferred to the mass spectrometer. The Orbitrap mass analyser (mass resolution 30,000 at m/z = 400) was applied to collect a full scan
1,099
2018-03-27T00:00:00.000
[ "Biology", "Medicine" ]
Inside the Shark Nursery: The Evolution of Live Birth in Cartilaginous Fish A new study in Genome Biology and Evolution reveals that egg yolk proteins may have been co-opted to provide maternal nutrition in live-bearing sharks and their relatives. While giving birth to live young is a trait that most people associate with mammals, this reproductive mode, also known as viviparity, has evolved over 150 separate times among vertebrates, including over 100 independent origins in reptiles, 13 in bony fishes, 9 in cartilaginous fishes, 8 in amphibians, and 1 in mammals. Hence, understanding the evolution of this reproductive mode requires the study of viviparity in multiple lineages. Among cartilaginous fishes, a group including sharks, skates, and rays, up to 70% of species give birth to live young (fig. 1); however, viviparity in these animals remains poorly understood due to their elusiveness, low fecundity, and large and repetitive genomes. In a recent article published in Genome Biology and Evolution, a team of researchers led by Shigehiro Kuraku, previously Team Leader at the Laboratory for Phyloinformatics at the RIKEN Center for Biosystems Dynamics Research in Japan, set out to address this gap. Their study identified egg yolk proteins that were lost in mammals after the switch to viviparity but retained in viviparous sharks and rays (Ohishi et al. 2023). Their results suggest that these proteins may have evolved a new role in providing nutrition to the developing embryo in cartilaginous fishes. According to Kuraku, who is now Professor in the Molecular Life History Laboratory at the National Institute of Genetics in Mishima, investigators have long wanted to learn more about the evolution of viviparity in sharks and their relatives. "Reproduction is one of the most fascinating features of cartilaginous fishes because they show a broad spectrum of reproductive modes." Among viviparous species, this includes a range of mechanisms for providing nutrients to the developing embryo, from relying solely on nutrients present in the embryo's yolk sac, to feeding the embryo unfertilized eggs, secreting nutrients from the uterus ("uterine milk"), or transferring nutrients via a placenta. To better understand these various mechanisms, the authors searched genomic and transcriptomic data from 12 cartilaginous fishes for homologs of vitellogenin (VTG), a major egg yolk protein synthesized in the female liver in egg-laying species. Regardless of their reproductive mode, all cartilaginous fish species had at least two copies of VTG, while all copies of VTG have been lost from mammals (although the authors did identify a copy in the Tasmanian devil, a marsupial, which was not previously known to harbor a VTG gene). Next, the authors searched for homologs of the VTG receptor; while mammals retain a single copy of this receptor, Kuraku and his colleagues identified two ancient tandem duplications giving rise to three copies of the receptor in cartilaginous fishes. The authors note that this finding was unexpected. "We predicted the retention of egg yolk protein genes in the shark genomes because live-bearing sharks rely partly on nutrition supply from the egg yolk," says Kuraku. "What surprised us the most was that cartilaginous fish including sharks have more copies of the egg yolk protein receptor genes."
This suggested that these proteins may provide a novel function in this viviparous lineage. To shed light on the functions of VTG and its receptor in these species, the authors compared tissue-by-tissue transcriptome data from one egg-laying shark (the cloudy catshark) and two viviparous sharks. The frilled shark is a viviparous species that provides no maternal nutrients to the developing embryo, while the spotless smooth-hound has a placenta. In the egg-laying cloudy catshark, VTG is primarily expressed in the liver, and its receptors are primarily expressed in the ovary. In contrast, in the two viviparous sharks, VTG was expressed not only in the liver but also in the uterus. Interestingly, the VTG receptor was also expressed in the uterus in these species. This suggests that VTG proteins may not only function as yolk nutrients but may also be transported into the uterus, where they may play a role in providing maternal-based nutrition in some cartilaginous fishes. As noted by the authors, this intriguing possibility remains to be confirmed through functional studies. They also hope to expand this analysis to a genome-wide survey of factors associated with the various reproductive modes of cartilaginous fishes. Unfortunately, such experiments are difficult in these species given the challenge of obtaining biological samples. Kuraku and his collaborators, however, hope to change this. "This study was enabled by networking among individuals with various types of expertise who recognize the biological potential of cartilaginous fishes," says Kuraku. "It also led to the launch and development of the Squalomix consortium," an initiative launched in 2020 to promote genomic and molecular approaches specifically targeting shark and ray species (Nishimura et al. 2022). The consortium aims to make its resources publicly available, including a cell culture technique that may help enable functional assays of molecules (Uno et al. 2020), facilitating future research into the reproductive modes of these elusive and fascinating creatures. Fig. 1. A developing embryo of the frilled shark, which has a unique mode of live-bearing and is thought to exhibit a gestation time of no less than three years. Photograph by the Frilled Shark Research Project.
1,248.8
2023-03-01T00:00:00.000
[ "Biology" ]
Calculation of Seven Photovoltaic Cells Parameters Using Parallelized Successive Discretization Algorithm Some parameters must be calculated with very good accuracy for the purpose of designing, simulating, and evaluating the performance of a photovoltaic system. The seven parameters of photovoltaic cells and panels for the two-diode model are determined using a parallelized metaheuristic algorithm based on successive discretization. The parameters obtained for a photovoltaic cell and four panels using the proposed algorithm are compared with the ones calculated through over twenty methods from the recent research literature. The root mean square error (RMSE) is used to prove the superiority of the Parallelized Successive Discretization Algorithm (PSDA). The smallest values of RMSE in both cases, photovoltaic cell and panels, are obtained for the algorithm presented in this paper. The seven parameters for three panels known in the specialised literature, Kyocera KC200GT, Leibold Solar Module LSM 20, and Leybold Solar Module STE 4/100, are determined for the first time using PSDA. Introduction Fossil fuel is inevitably being exhausted, and its price has continually increased in the last decades. There is also an increasing concern about environmental pollution and climate change related to the use of fossil fuel. It is strongly believed that renewable energy is the clean alternative solution for today and for the future. Many researchers have focused on renewable solar energy in general and, in particular, on the generation of electric power using photovoltaic cells, which is expected to become one of the most important energy sources in the near future. Nowadays, there are many types of photovoltaic cells, some of them relatively new, such as perovskite or multijunction (triple- or four-junction) cells, which must be analyzed and characterized to optimize their efficiency. Very good photovoltaic panels are achieved if the photovoltaic cells used are "twins." This can be realized if the photovoltaic cells are characterized before the fabrication process of the photovoltaic panels and selected so that their parameters have the same values. The characterization process must be very fast so as to increase productivity. Some parameters must be calculated with very good precision for the purpose of design, simulation, and evaluation of the performance of a photovoltaic system. Nowadays, the estimation of these parameters is an important research topic, for which researchers develop new methods and algorithms. The parameters of photovoltaic cells and panels can be determined using the current-voltage characteristic, called the I-V characteristic in the following, the equivalent circuit, and the mathematical model [1,2]. The one-diode model is generally used to determine the parameters of photovoltaic cells, due to its simplicity and the good agreement of the resulting parameters [3]. Lately, researchers have been using the two-diode model to determine the photovoltaic cell parameters, especially for monocrystalline and polycrystalline silicon photovoltaic cells and panels. In this case, both mechanisms, diffusion as well as generation and recombination, are taken into account. Using the two-diode model leads to increased accuracy in determining the parameters of the photovoltaic cells. The parameters of photovoltaic cells and panels were previously determined using graphical analysis and numerical analysis with different approximations [1].
The metaheuristic algorithms based on natural phenomena have lately been used with success because they are suitable for nonlinear, multimodal problems such as the one posed by photovoltaic devices. The parameters of photovoltaic cells and panels calculated using metaheuristic algorithms have high accuracy, because the approximation errors made in other methods are avoided, and the time necessary for their determination is continuously being reduced through optimization of the algorithms and the use of hybrid algorithms [3]. We shall briefly present below the best currently known algorithms for determining solar cell parameters. They are taken into consideration to compare their performance against the Parallelized Successive Discretization Algorithm (PSDA) presented in this paper. Most of these algorithms are metaheuristic methods inspired by the behavior of populations of different groups of people or by the behavior of swarms of animals or insects. Population classification evolution algorithm (PCE) is an evolutionary algorithm with a fast convergence speed and a very good accuracy [4]. Simplified TLBO (STLBO) [5] and generalized oppositional TLBO (GOTLBO) [6] are improved versions of the teaching-learning-based optimization (TLBO) algorithm, which is a population-based method. Improved shuffled complex evolution (ISCE) is an improved version of the shuffled complex evolution strategy (SCE), which was applied with success to calculate the parameters of the one-diode and two-diode models for a photovoltaic cell and a panel [7]. Rcr-IJADE has better performance than other classic or adaptive differential evolution algorithms [8]. MABC is a modified version of the artificial bee colony (ABC) algorithm with better results than the regular ABC [9]. EHA-NMS and NM-MPSO are hybrid algorithms based on Nelder-Mead and MABC [10] and, respectively, particle swarm optimization (PSO) [9]. Cat swarm optimization (CSO) was developed by analyzing the behaviors of cats. The performance of CSO surpasses that of PSO [11]. Bird mating optimizer (BMO) tries to imitate the mating strategies of bird species, without premature convergence, and thus its solution is close to the global one [12]. Chaotic whale optimization algorithm (CWOA) is an algorithm developed from the hunting mechanism of humpback whales, and it can refine complex and multimodal objective functions [13]. Artificial bee swarm optimization (ABSO) is based on the intelligent behaviors of honey bees, such as the collection and processing of nectar [12]. Innovative global harmony search (IGHS) and grouping-based global harmony search (GGHS) are improved versions of the harmony search algorithm, which tries to imitate the improvisation process of musicians [14]. Simulated annealing (SA) is a single-searcher algorithm with less chance of finding the global minimum [15]. Guaranteed convergence particle swarm optimization (GCPSO) [16] is an improved version of the PSO algorithm; it has the ability to avoid premature convergence and to determine the parameters of photovoltaic panels quickly and accurately. Time-varying acceleration coefficients particle swarm optimization (TVACPSO) is an improved PSO. WDOWOAPSO is a hybrid method based on PSO and the wind-driven optimization (WDO) technique [17]. Artificial Bee Colony-Differential Evolution (ABC-DE) resulted from ABC, and its performance is improved for finding the global optimum and for convergence speed [18].
Flexible particle swarm optimization (FPSO) improves the PSO algorithm by increasing the ability of global search [19]. In this paper, a metaheuristic algorithm based on the discretization of functions, which systematically tries to improve the approximate solution, is presented. Discretization is defined in mathematics as the transfer of continuous functions, variables, models, and equations into discrete counterparts. It usually represents the first stage of achieving proper values for numerical evaluation or implementation on digital computers. Such techniques are implemented in order to solve miscellaneous problems [20], among them optimization problems [21]. When continuous data are discretized, a discretization error of varying magnitude appears. The aim is to decrease this error down to a value which is negligible for the modeling purpose. The idea of the successive discretization algorithm (SDA) applied for parameter estimation of PV cells is to compute a set of approximate solutions selected from values obtained by a discretization process and having the minimum root mean square error. Around each of these selected values, a more refined discretization is performed and a new set of approximate solutions is selected, and so on. The successive discretization process is performed for a preset number of iterations or until no better solution is found than the best solution of the previous iteration. In this paper, we adapt and implement SDA in parallel for the two-diode model, and we compare the obtained solution with the solutions given by the best currently known methods. The novelty and the contributions of the paper are: (i) the new algorithm PSDA is presented in the paper; (ii) the results obtained for the RMSE for all devices under analysis are the best in comparison with the other algorithms from the specialised literature; (iii) the time needed to extract the parameters of the photovoltaic cell and panels for the two-diode model increases, and the PSDA algorithm is the proper tool to reduce it; the duration is reduced 6 times using PSDA in comparison with the SDA algorithm; (iv) three datasets are analyzed for the first time using the two-diode model: Kyocera KC200GT, Leibold Solar Module LSM 20, and Leybold Solar Module STE 4/100. Method 2.1. Models for Photovoltaic Cell and Panel. The equivalent circuit of the photovoltaic panel is presented in Figure 1, and the mathematical model is given by Eq. (1): I = I_ph − I_od[exp((V + I·R_s)/(n_d·N_s·V_T)) − 1] − I_or[exp((V + I·R_s)/(n_r·N_s·V_T)) − 1] − (V + I·R_s)/R_sh, where V_T = kT/q is the thermal voltage. The following seven parameters can be determined: the photogenerated current I_ph; the reverse saturation current I_od and diode ideality factor n_d, which correspond to the diffusion mechanism; the reverse saturation current I_or and diode ideality factor n_r, which correspond to the generation and recombination mechanism; and the parasitic resistances, i.e., the series resistance R_s and the shunt resistance R_sh [1]. N_s is the number of photovoltaic cells connected in series; N_s is one in the case of a single photovoltaic cell. 2.2. Parallelized Successive Discretization Algorithm. The SDA algorithm was successfully applied to the one-diode model in [3]. It was compared with other methods used to determine the parameters of photovoltaic cells and panels, and SDA proved to be the best among the methods known at the moment the article was published. In this paper, we adapt SDA to solve the more complex problem of estimating the seven parameters for the two-diode model.
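As a concrete illustration of the objective being minimized, the following sketch (not the authors' implementation; bracket, defaults, and temperature are assumptions chosen for a small cell such as RTC at 33°C) solves the implicit two-diode equation of Eq. (1) for the current at each voltage and evaluates the RMSE of Eq. (4):

```python
import numpy as np
from scipy.optimize import brentq

# Sketch of the two-diode model and the RMSE objective that PSDA minimizes.
K_B, Q = 1.380649e-23, 1.602176634e-19

def model_current(V, I_ph, I_od, n_d, I_or, n_r, R_s, R_sh, Ns=1, T=306.15):
    """Solve the implicit two-diode equation (Eq. (1)) for I at terminal voltage V."""
    Vt = K_B * T / Q                     # thermal voltage (~26 mV near room temperature)
    def residual(I):
        Vd = V + I * R_s
        return (I_ph
                - I_od * np.expm1(Vd / (n_d * Ns * Vt))
                - I_or * np.expm1(Vd / (n_r * Ns * Vt))
                - Vd / R_sh
                - I)
    return brentq(residual, -2.0, 2.0)   # bracket chosen for cell-scale currents

def rmse(params, V_meas, I_meas, Ns=1, T=306.15):
    """Root mean square error between measured and modelled currents (Eq. (4))."""
    I_calc = np.array([model_current(v, *params, Ns=Ns, T=T) for v in V_meas])
    return np.sqrt(np.mean((I_meas - I_calc) ** 2))
```

Because Eq. (1) is implicit in I, each evaluation requires a root solve; a bracketing method is used here for robustness, with the bracket widened as needed for panel-scale currents.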
Since the amount of calculation is considerably higher than for the one-diode model, a direct adaptation of SDA for the two-diode model leads to a very slow algorithm or to an algorithm that finds a low-quality solution. Therefore, a parallel implementation of SDA is needed, fast enough and able to compute very good solutions. In the production process of photovoltaic panels, it is very important to estimate the cell parameters accurately and quickly. Discretization Process. The PSDA algorithm is based on discretization, which transfers continuous functions and equations into discrete counterparts. When the continuous data are discretized, some errors can appear, and the target is to reduce them [3]. We denote by I_calc(V) the current computed from the model; using Equations (1) and (2), an implicit equation, Eq. (3), relating the seven parameters to the measured (I, V) pairs is obtained. The root mean square error (RMSE), given by Equation (4), has to be minimized over the photovoltaic cell and panel parameters I_ph, I_od, n_d, I_or, n_r, R_s, and R_sh, calculated for the given pairs of current and voltage (I, V): RMSE = sqrt((1/p)·Σ_{k=1..p}(I_k − I_calc(V_k))²), where p represents the number of the given (I, V) pairs. The intervals J_i = [a_i, b_i] (i = 1, 2, …, 7) define the domains of the seven parameters enumerated before; their Cartesian product is a 7D interval which gives the boundaries for the searched solutions. For each interval J_i, the values v_i^j (j = 1, …, d_i) are calculated using Equation (6) in order to obtain a good uniform distribution of the points in the interval J_i: v_i^j = a_i + (j − 1)·l_i, where l_i is calculated using the relation l_i = (b_i − a_i)/(d_i − 1) (Eq. (7)). Using Equations (6) and (7), the endpoints v_i^1 = a_i and v_i^{d_i} = b_i are obtained. For the photovoltaic cell or panel, the set G of septets of parameters formed from these discretization points is considered. An approximate solution of Eq. (3) can be improved using successive discretization that refines the initial solution. The best s ≥ 1 septets g_j (j = 1, …, s) in the set G, namely those having the smallest RMSE errors, are retained ("s" is established at the beginning of the algorithm) [3]. For each septet g_j, the solutions of Equation (3) are refined in a neighborhood of g_j given by narrower intervals centred on its components (Eq. (11)). A total of s septets are calculated for each interval from Equation (11) (j = 1, …, s). From these s² values, the best s solutions are selected, and the refining discretization is applied again. A predefined number of refining iterations is performed and, in the end, the best overall septet, i.e., the one having the smallest RMSE error, is considered the final solution of Eq. (3). PSDA for Double-Diode Solar Cell Model. The idea of the algorithm is as follows; a minimal sketch of the refinement loop is given after this description. Discretizations are performed on given 7D intervals (see (11) and (12)). At the end of each iteration of the algorithm, "s" good solutions are constructed. In the vicinity of each of these solutions, a new discretization is applied to refine that solution. These discretizations can be performed in parallel on different machines or on different parallel threads on the same machine. The flowchart of the parallel implementation of the algorithm is presented in Figure 2. The t discretization calls are executed on parallel threads. In the first iteration, there is only one thread, but starting from the second iteration, there are s parallel threads. Before starting the threads, all the values in the vector "finished" are set to false. Each discretization gets a 7D interval I_j, constructs s solutions, and returns them in the set S_j. When the j-th discretization is finished, the j-th component of the vector "finished" is set to true.
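The sketch below assumes the rmse() objective from the previous snippet; the grid density, the fixed shrink factor, and the use of threads are illustrative simplifications of the flowchart in Figure 2, not the authors' code:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Illustrative successive-discretization loop (slow at this grid density;
# parameter bounds below are placeholders, not the limits used in the paper).
BOUNDS = [(0.0, 1.0), (1e-9, 1e-6), (1.0, 2.0),    # I_ph, I_od, n_d
          (1e-9, 1e-6), (1.0, 5.0), (0.0, 0.5),    # I_or, n_r, R_s
          (1.0, 100.0)]                             # R_sh

def discretize(bounds, V, I, d=4, s=10):
    """Grid the 7D box with d points per axis; return the s best septets."""
    axes = [np.linspace(a, b, d) for a, b in bounds]
    grid = np.array(np.meshgrid(*axes)).reshape(7, -1).T
    errs = [rmse(p, V, I) for p in grid]
    return [grid[k] for k in np.argsort(errs)[:s]]

def shrink(p, bounds, factor=4.0):
    """Narrower box centred on septet p, clipped to the original bounds."""
    return [(max(a, x - (b - a) / factor), min(b, x + (b - a) / factor))
            for x, (a, b) in zip(p, bounds)]

def psda(V, I, iterations=4, s=10):
    boxes = [BOUNDS]                                 # one box in the first iteration
    for _ in range(iterations):
        with ThreadPoolExecutor() as pool:           # refine all boxes in parallel
            results = pool.map(lambda b: discretize(b, V, I, s=s), boxes)
        septets = [p for sols in results for p in sols]
        septets.sort(key=lambda p: rmse(p, V, I))
        septets = septets[:s]                        # keep the s best overall
        boxes = [shrink(p, BOUNDS) for p in septets]
    return septets[0]                                # septet with the smallest RMSE
```

In CPython, threads mainly pay off when the objective releases the GIL; a production version would typically use processes or a vectorized objective, and would shrink the boxes the way Eq. (11) prescribes rather than by a fixed factor.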
Once all the values in the vector "finished" are true, all the discretization threads have terminated, and all the sets S_j (j = 1, …, t) are constructed. On an 8-thread i7 processor, performing the discretization calls in parallel makes the algorithm finish more than 6 times faster than the nonparallel implementation, which computes the discretizations on one thread. This implementation of the algorithm proved to bring the processor utilization across all cores of the computer almost to the maximum (over 98%). Results and Discussion The PSDA algorithm is applied to five datasets: one for the RTC photovoltaic cell and the others for four photovoltaic panels: PWP201 [22], Kyocera KC200GT [23], Leibold Solar Module LSM 20 [24], and STE 4/100 [24]. The seven parameters for three out of these four panels are determined for the first time in this paper. The current-voltage (I-V) characteristic of the RTC cell was measured at 1000 W/m² with a cell temperature of 33°C; the PWP201 panel was measured at 1000 W/m² with a panel temperature of 45°C; the Kyocera KC200GT panel was measured at 1000 W/m² irradiance and 25°C temperature; the LSM 20 panel was measured at 360 W/m² with a panel temperature of 24°C; and the STE 4/100 panel was measured at 900 W/m² irradiance and 22°C temperature. The limit intervals for the seven parameters of the photovoltaic cell and panels are chosen to be similar to the ones used in the other algorithms [7] for the RTC photovoltaic cell and the PWP201 panel. For the Kyocera KC200GT, Leibold Solar Module LSM 20, and STE 4/100 panels, the two-diode model is applied for the first time. 3.1. RTC Photovoltaic Cell. The measured points of the RTC photovoltaic cell and the current calculated using the PSDA algorithm are shown in Table S1. The seven parameters of the RTC photovoltaic cell and the root mean square error determined by the PSDA algorithm, together with the values calculated by other algorithms, are presented in Table 1. The I-V pairs of the RTC photovoltaic cell, measured and calculated with the PSDA algorithm, are compared in Figure 3. A very good match can be observed, also proven by the lowest value of the root mean square error (see Table 1). The absolute current error for the RTC photovoltaic cell obtained with two algorithms, PSDA and ISCE, is presented in Figure 4. The absolute current error of PSDA is lower than that of ISCE at all measurement points, although the errors are not uniformly distributed. The highest values of the absolute error are around the open circuit voltage of the cell. The same distribution of the absolute current error is reported in [25]. The absolute current errors for the RTC photovoltaic cell calculated with the two-diode model (seven parameters) and with the one-diode model (five parameters) are presented in Figures 5 and 6. It can be observed that the absolute current error calculated with the two-diode model is lower than the one determined with the one-diode model. This shows that the best solution for the photovoltaic cell parameters is obtained using the two-diode model. 3.2. PWP201 Photovoltaic Panel. The measured points of the PWP201 photovoltaic panel and the current calculated using the PSDA algorithm are shown in Table S2. The seven parameters of the PWP201 photovoltaic panel and the root mean square error determined through the PSDA algorithm, together with the values calculated through other algorithms, are presented in Table 2.
PSDA gives similar solutions to the other algorithms but with the lowest root mean square error. By comparing the five algorithms taken into account for the PWP201 photovoltaic panel, it can be observed that all seven parameters vary as a function of the applied algorithm. These variations are higher than the ones for the RTC photovoltaic cell. The number of I-V pairs is the same for the two photovoltaic devices, but the voltage interval is much wider for the panel than for the photovoltaic cell, so the calculations rely on fewer points per voltage interval. The I-V characteristics of the PWP201 photovoltaic panel measured and calculated with the PSDA algorithm are compared (see Figure 7), and the matching is very good. The absolute current error for the PWP201 photovoltaic panel calculated with two algorithms, PSDA and GCPSO, is presented in Figure 8. The PSDA absolute current error is lower than the GCPSO one for the majority of the measurements.

The seven parameters for the next three panels are determined for the first time using PSDA. In the literature, there are no other values for the two-diode model to compare with, and so the solutions are compared with the best ones obtained for the one-diode model.

3.3. Kyocera KC200GT Photovoltaic Panel. The measured points of the Kyocera KC200GT photovoltaic panel and the current calculated using the PSDA algorithm are shown in Table S3. The seven parameters of the Kyocera KC200GT photovoltaic panel and the root mean square error determined through the PSDA algorithm are presented in Table 3. The I-V characteristic of the Kyocera KC200GT photovoltaic panel, which consists of 54 multicrystalline silicon photovoltaic cells connected in series, calculated using the PSDA algorithm matches the measured I-V points [23] (Figure 9). The comparison between the absolute errors of the current for the one- and two-diode models shows that the latter gives better results for the majority of the I-V points (Figure 10). The two-diode model has to be used when the accuracy in the calculation of the photovoltaic panel parameters is crucial.

3.4. Leybold Solar Module LSM 20. The measured points of the LSM 20 photovoltaic panel and the current calculated using the PSDA algorithm are shown in Table S4. The seven parameters of the LSM 20 photovoltaic panel and the RMSE determined through the PSDA algorithm are presented in Table 4. The I-V characteristics of the Leybold Solar Module LSM 20 measured and calculated with the PSDA algorithm are compared (see Figure 11). The matching between the measured and calculated points is very good; this is confirmed by the root mean square error, which is very small (Table 4). Figure 12 illustrates the absolute error for the current, obtained for the LSM 20 photovoltaic panel, using the one- and two-diode models. The RMSE obtained with the two-diode model is smaller than the one obtained with the one-diode model, which is 17.64E-4. This is confirmed by the absolute error obtained through both models. The higher values of the absolute error are split between the two models used.

3.5. Leybold Solar Module STE 4/100. The measured points of the STE 4/100 photovoltaic panel and the current calculated using the PSDA algorithm are shown in Table S5. The seven parameters of the STE 4/100 photovoltaic panel and the root mean square error determined through the PSDA algorithm are presented in Table 5. The I-V pairs of the Leybold Solar Module STE 4/100 measured and calculated with the PSDA algorithm are compared (see Figure 13). A very good matching can be observed, also proven by the lowest value of the root mean square error (see Table 5).
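Several of the comparisons above (Figures 8, 10, and 12) reduce to the same computation: the per-point absolute current error of two fits against the measured data. The small helper below makes that computation explicit; it is an illustrative sketch, and solve_a and solve_b are assumed names for any two solvers (algorithms or models) with their fitted parameters already bound, such as the two-diode solver sketched earlier.

```python
def compare_fits(pairs, solve_a, solve_b):
    # Absolute current error |I_measured - I_calculated| per (I, V) point for
    # two fitted models or algorithms, plus the fraction of points where the
    # second fit is strictly closer to the measurement.
    ea = [abs(I - solve_a(V)) for I, V in pairs]
    eb = [abs(I - solve_b(V)) for I, V in pairs]
    frac_b_better = sum(b < a for a, b in zip(ea, eb)) / len(pairs)
    return ea, eb, frac_b_better
```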
The absolute current error for the STE 4/100 photovoltaic panel calculated with the two-diode model and with the one-diode model is presented (Figure 5). It can be observed that, for the majority of points, the absolute current error calculated with the two-diode model is lower than the one determined with the one-diode model. The RMSE obtained with the one-diode model is 4.0853E-4 (Figure 6).

The PSDA algorithm was executed with s = 10 and a total of 4 iterations on an 8-thread Intel i7 processor at 1.9 GHz. It finished in 46 seconds. The nonparallel implementation took almost 5 minutes to finish (more than 6 times slower).

Conclusions

The estimation of the parameters of solar cells and panels is very important for the design, simulation, and evaluation of the performance of photovoltaic systems. This problem is currently being studied intensively by many researchers. In this paper, the Parallelized Successive Discretization Algorithm is developed and presented to deal with this problem. It is a metaheuristic approach that proved to be a powerful tool for approximating the solution of the implicit transcendental equation problem of finding the seven parameters of the two-diode model. The problem is transformed into a multidimensional optimization problem of minimizing the root mean square error and then solved using multiple, successively refined discretizations. The parallel implementation helped to speed up the algorithm, since the amount of computation is high and complex. PSDA proves to be fast and reliable, giving the best estimation of parameters among the currently known algorithms used for the two-diode model.

In this paper, the results given by PSDA for one photovoltaic cell and for four panels were presented for the two-diode model. The parameters were obtained for the first time for three panels for which only the parameters of the one-diode model had been determined in the literature. The sensitivity analysis of the PSDA algorithm is one of the future research directions; another is to use the PSDA algorithm to extract the parameters of other photovoltaic cells and panels, for example organic photovoltaic cells, using a proper equivalent circuit and the corresponding mathematical model.

Data Availability

The data are available in the supplementary information files.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This publication was funded by the University Transilvania of Brasov.

Supplementary Materials

Table S1: the current calculated using the PSDA algorithm and the relative error, calculated by subtracting the calculated from the measured current values, for the RTC photovoltaic cell. Table S2: the current calculated using the PSDA algorithm and the relative error for the PWP201 photovoltaic panel. Table S3: the current calculated using the PSDA algorithm and the relative error for the Kyocera KC200GT photovoltaic panel. Table S4: the current calculated using the PSDA algorithm and the relative error for the LSM 20 photovoltaic panel. Table S5: the current calculated using the PSDA algorithm and the relative error for the STE 4/100 photovoltaic panel. (Supplementary Materials)
BIOPOLITICS IN THE BRIEF AND FRIGHTENING REIGN OF PHIL, BY GEORGE SAUNDERS

In this article, I analyze George Saunders's novella The Brief and Frightening Reign of Phil (2005) from the perspective of biopolitics, following the theoretical frameworks established by Michel Foucault, Giorgio Agamben, and Frédéric Lordon. I focus on two aspects of biopolitics: one linked to the extermination of the enemy, and the other to the manipulation of the passions of the governed. Regarding the first aspect, I discuss the strategies that Phil, the sovereign dictator in the story, uses to kill or displace the Other. In the second aspect, I study the use of desire and affect as biopolitical technologies that contribute to maintaining and reinforcing power. Finally, I examine the critical position of various characters in relation to Phil's biopolitical regime and the political dimension of the novella itself.

1. INTRODUCTION

George Saunders (Amarillo, Texas, 1958) explores various technologies of power in his literary works. These technologies and the resulting sense of alienation constitute one of the major topics of his oeuvre 1. Here I propose interpreting his writing as a literary approach to the multiple ways in which biopolitical power operates in contemporary US society 2. By presenting in his work "dystopic scenarios that are both deeply unsettling and alarmingly familiar" (Huebert 106), Saunders makes visible the theories of power that Foucault, Agamben, Lordon and others have explored in their writings. The purpose of this article is to analyze the presence and function of biopolitical technologies of power in The Brief and Frightening Reign of Phil (2005).

It may seem challenging to justify a biopolitical interpretation of this novella for various reasons. First, the novella is considered to be "speculative, experimental fiction" ("Kirkus Reviews") and, hence, some may think, disconnected from factual reality; second, it is a humorous "mind-bending work" ("Kirkus Reviews") and, therefore, some may believe, should not be taken seriously. Third, Saunders himself has claimed that the main purpose of his fiction is "to provide a wild ride for the reader." As for the second concern, Saunders has recalled how, in his experience, humor and truth are deeply connected:

Some of the funniest things in South Chicago were also the most deeply true: these sort of over-the-line, rude utterances that were right on the money and undeniable. Their truth had rendered them inappropriate; they were not classically shaped, not polite, and they responded to the urgency of the moment. (Sacks)

Then, as Hayes-Brady explains, for Saunders "humor is not only compatible with difficult reality, but integrally connected to it, a medium that allows the expression of truths that are perhaps otherwise inexpressible" (32). It follows that it is only natural that satire and humor inform the tone of The Brief and Frightening Reign of Phil, as this novella deals with complex issues such as oppression, exploitation, and genocide.
As regards the third concern mentioned above, it is true that Saunders has repeatedly declared that the primary objective of his fiction is to entertain the reader 5.

5 When discussing his writing process and the impact he tends to have on the reader, Saunders has stated the following: "Another aspect of my approach is simply to allow that my goal is to compel a reader to keep reading. And/or to wildly entertain/amaze/outrage/engage the reader. I see myself, very much, as an entertainer. I am trying to entertain you by talking about something important and urgent that we have in common. By the above method, I am trying to find out, for that particular story, what that common thing we share is going to be. I don't know at the outset and try not to know -well, ever. I am trying to design a thrilling roller coaster for you without knowing the exact effect riding it will have on you. My mind is on the thrill. I know that the story will have an effect, and I know where it will" ("Office Hours").

In an interview with Deborah Treisman, he claimed that his stories are the result of an intuitive creative process, and that their political or moral implications are simply an unintended byproduct of the writing process ("This Week"). However, Saunders still believes that literature has the potential to transform readers, and that they must be altered by the stories they read. Discussing the impact of Kurt Vonnegut's oeuvre on his own work, Saunders suggests that the function of short stories must be precisely to change their audience:

[Before reading Vonnegut,] I'd understood the function of art to be primarily descriptive: a book was a kind of scale model of life, intended to make the reader feel and hear and taste and think just what the writer had. Now [after reading Vonnegut] I began to understand art as a kind of black box the reader enters. He enters in one state of mind and exits in another […] What's important is that something undeniable and non-trivial happens to the reader between entry and exit. ("Mr. Vonnegut" 78. Italics my own)

The fact that a story has the power to transform its reader suggests that it has some political potential. Therefore, Saunders acknowledges that his stories are not just for entertainment purposes, but rather that they should be meaningful and relevant to the readers' lives, in addition to any "wacky invention" they may include:

especially with a story like this one [he refers to his short story "Ghoul" (2020)] -a speculative story, built around a comedic, over-the-top premise- I think the reader is always asking, somewhere in the back of her mind, "But what does this have to
do with me? With now? With my real life?" As I'm writing, I'm continually asking the same thing -trying to get the story to swerve in a direction that takes it away from being just a wacky invention and toward being a wacky relevant invention. (And that imperative might be even more pressing when the real world seems to be falling apart.) (Treisman, "George Saunders on Surprising Himself")

It is obvious to me that The Brief and Frightening Reign of Phil is not just a "wacky invention," but a "wacky relevant invention." The novella offers a privileged insight into the forms of biopolitical power that permeate contemporary Western societies. By this, I mean both its racist and genocidal dimension and the way passions are captured and made productive by political power. The interpretation that follows is a close analysis of how these aspects are portrayed in the story. On the one hand, I explain the genocidal logic that is at play. On the other, I study the ways power seizes the desires and emotions of the governed.

2. BIOPOLITICAL POWER

In Society Must Be Defended, Michel Foucault explains that the emergence of "biopolitical power" took place in the 18th Century. According to him, this is a new "technology of power" focused on the "man-as-species," that is to say, on the "population" (242). Therefore, from the end of the 18th Century, Western societies have two technologies of power which were established at different times and which were superimposed.

One technique is disciplinary; it centers on the body, produces individualizing effects, and manipulates the body as a source of forces that have to be rendered both useful and docile. And we also have a second technology which is centered not upon the body but upon life: a technology which brings together the mass effects characteristic of a population, which tries to control the series of random events that can occur in a living mass (249)

This new form of power, then, in Foucault's words, took "control of both the body and life, or […], if you like, [took] control of life in general -with the body as one pole and the population as the other" (253). If modern power, as we see, is primarily concerned with life, Foucault asks himself how the right to death (which was an attribute of sovereign power in the Classical period) is exercised after the 18th Century. He answers that "the emergence of […] biopower" "inscribes [racism] in the mechanisms of the State," with the result that "the modern State can scarcely function without becoming involved with racism at some point" (254). In modern societies, "killing, or the imperative to kill," is acceptable only if it results not in a victory over political adversaries, but in the elimination of the political threat and the improvement of the species or race. There is a direct connection between the two. In a normalizing society, race or racism is the precondition that makes killing acceptable.

When you have a normalizing society, you have a power which is, at least superficially, in the first instance, or in the first line a biopower, and racism is the indispensable precondition that allows someone to be killed, that allows others to be killed. Once the State functions in the biopower mode, racism alone can justify the murderous function of the State. (256)

Giorgio Agamben argues that the implementation of the "state of exception" enables modern biopolitical regimes to enact the racism examined by Foucault. Agamben thinks that the "state of exception," that is, "that moment -which is supposed to
be provisional- in which the legal order is suspended, has become during the 20th century a permanent and paradigmatic form of government" (Costa, "Introducción" 5. My translation). In this way, as Flavia Costa explains, the "state of exception"

is actually the original structure that founds -gives origin and foundation to- modern biopolitics: that is, the politics that includes natural life [zoé] within the calculations of state power. By including the living being, as bare life, within the law through its exclusion (to the extent that someone is a citizen, he is no longer a mere living being; but at the same time, to be a citizen he puts his natural life, his bare life, at the disposal of political power), politics becomes biopolitics. And the state of exception, insofar as it creates the legal conditions for power to dispose of citizens as bare lives, is a biopolitical device of the first order. ("Introducción" 7. My translation)

George Saunders has also reflected on biopolitical power, and he believes that the "genocidal impulse" is a natural human tendency. He has explored this impulse in his fiction and often refers to two historical moments where the human inclination to reject, enslave or kill the Other had disastrous consequences in real life: Nazi Germany and the Southern United States before the Civil War. For him, the idea of a group of people taking over and abusing others has always been a matter of reflection and concern. As he stated in 2005:

I am interested in this strange time of tranquility in between great genocides that we are currently living in. I try to understand where the genocidal impulse hides itself when there is no actual genocide going on. I actually doubt if humans have radically changed since 1944 and, of course, Bosnia or Rwanda make clear that they have not. I am curious for the US (or British or Spanish) features, now silent, that can reach their heads and make one of these sensible and pacific countries to start a killing. That is to say, I am suspicious that the hate that is necessary to start a genocide is latent and that there could be revealing signals, in the language and in the behaviors, that alert us about their imminence and about their origin. (Saunders qtd. by Rendueles)

Saunders argues that the propensity to discriminate against those who are different is a constant tendency in the United States. This tendency may remain dormant most of the time, but it is always present and could emerge at any moment. According to Saunders, this inclination is manifested by those citizens who are "suspicious" of "the Other" and want to "dominate it, deport it, exploit it, enslave it" or "kill it as needed" ("Who Are All?"). In the following chapter, I explore how this "genocidal impulse" is depicted in the novella.

3. PHIL AND THE "GENOCIDAL IMPULSE"

The Brief and Frightening Reign of Phil is an account of the rise of a "sovereign dictatorship" led by Phil, where the provisional legal order is based on a logic of "association" and "dissociation" 6 ("Carl Schmitt"). This means that citizenship is based on a biopolitical distinction between two groups of people, the Outer Hornerites and the Inner Hornerites. Phil, the dictator in the story, invents the enemy for personal reasons 7, out of sheer hatred, and transforms the border area into a thanatopolitical space where the only possible fate for the Other -the Inner Hornerites- is to be annihilated.
To justify and legally carry out his actions, Phil calls a state of exception under the name of "Border Area Improvement Initiative." He determines that the Inner Hornerites represent an external threat that has to be purged to ensure that the Outer Hornerites' lives and political existence remain appropriate and healthy. The Inner Hornerites represent a social excess that has to be eliminated; at the same time, they are the enemy that allows the Outer Hornerites to form, by contrast, their own identity.

6 "A sovereign dictator is a dictator who does not defend an already existing constitution but attempts to create a new one and who does so not by his own authority but in the name of the people" ("Carl Schmitt"). Phil, the sovereign dictator in the story, puts aside the previous legal order represented by the old President and creates a "novel positive legal or constitutional order, together with a situation of social normality that fits it" ("Carl Schmitt"). In order to create this new situation of normality, the sovereign dictator assumes that "it is possible to speak of the existence of a people in advance of the creation of any positive constitutional framework" ("Carl Schmitt"). A people can exist before a specific constitutional order because of the nature of the political. As Schmitt says, "the specific political distinction […] is that between friend and enemy," and this distinction refers to the "utmost degree of intensity […] of an association or dissociation": "the utmost degree of association is the willingness to fight and die for and together with other members of one's group, and the ultimate degree of dissociation is the willingness to kill others for the simple reason that they are members of a hostile group" ("Carl Schmitt").

7 Phil had been in love with and rejected by Carol, one of the Inner Hornerites. Also, Phil's father had been humiliated by a guard of Inner Horner called Smitty. Nayebpour and Varghaiyan (852) have pointed out Phil's personal motivation to become a dictator.

Phil has to convince the Outer Hornerites that the Inner Hornerites are their opponents to make them participate in the genocide. The first strategy that Phil uses is to explain the difference between the Inner Hornerites and the Outer Hornerites as the result of divine will, in a logic similar to that of the US Puritan tradition that conceives earthly rewards as a result of God's generosity:

I've been thinking about our beautiful country! Who gave it to us? I've been thinking about how God the Almighty gave us this beautiful sprawling land as a reward for how wonderful we are. We're big, we're energetic, we're generous, which is reflected in all our myths, which are so very populated with large high energy folks who give away all they have! If we have a National Virtue, it is that we are generous, if we have a National Defect, it is that we are too generous! Is it our fault that these little jerks have such a small crappy land? I think not! God Almighty gave them that small crappy land for reasons of His own. It is not my place to start cross-examining God Almighty, asking why He gave them such a small crappy land, my place is to simply enjoy and protect the big bountiful land God Almighty gave us!
(9-10)

Even if the differences between the Inner Hornerites and the Outer Hornerites in body shape and habits are the result of their spatial and social contexts, Phil essentializes them and uses them to justify the Outer Hornerites' racial superiority. As he tells his followers: "Friends, take a look at these losers! If they are as good as us, why do they look so much worse than us? Look how they look! Do they look valorous and noble and huge like us, or do they look sad and weak and puny?" (10). Similarly, when the Outer Horner Militia finds the Inner Hornerites piled one on top of the other to avoid paying taxes, the Militia does not understand that the reaction is a consequence of the difficult situation the Inner Hornerites have been forced into. Instead, they essentialize their behavior and consider them "animals":

"My God, look at those people," said Melvin. "Animals," said Melvin. "How do they live with themselves?" "I mean, look at us," said Freeda. "You don't see us piling on like that." "They seem sort of imprisoned by their own dark urges," said Larry. Everyone looked at Larry, impressed. "No wonder we treat them so unfairly," said Melvin, trying to counter Larry. "Not that we treat them unfairly, Melvin," said Phil a little sternly. "Oh, we treat them fairly," said Melvin. "I'm just saying, you know, think how fairly we'd treat them if they didn't behave like uncouth animals imprisoned by their dark surges." "Urges," corrected Larry. (36)

Despite the ongoing genocide, Phil blames the Inner Hornerites for the mistreatment they are subject to. Moreover, he thinks they should be grateful for the immense generosity of the Outer Hornerites, who are lending them the tiny "Short-Term Residency Area":

"You people," Phil shouted in the stentorian voice, "via shiftlessness and inertia, have forced us, a normally gentle constituency, into the position of extracting water from the recalcitrant stone of your stubbornness, by positing us as aggressors, when in fact we are selflessly lending you precious territory, which years ago was hewed by our ancestors from a hostile forbidding wilderness!" (61)

The goal of Phil and the Outer Horner Militia is to make Inner Horner and the Inner Hornerites completely disappear. When he explains Phase III of the "Border Area Improvement Initiative," it is clear that he wants to establish a "permanent peace" by eliminating them (107-108). According to this plan, women and kids must perish too, as the biopolitical logic makes invisible any distinctions based on gender or age: the only operative division is that between friends and enemies, between us and them. This has also been noted by Nayebpour and Varghaiyan, who say that "Phil's discourse functions based on a fixed, uncompromising duality -a we/they (Outer/Inner) dichotomy- fueled by an ideology of racism" (853). When Phil and his Special Friends are implementing Phase III of the Initiative to destroy Inner Horner, Phil explains it this way:

"All of them, sir?" Jimmy the Special Friend was saying at that very moment. "Even the ladies?" "Even the kid?" said Vance. "I do not see any lady or kids!" shouted Phil. "I only see some curvier Inner Hornerites with longer hair, and one smaller Inner Hornerite with two freakish brains! With Inner Hornerites there is no lady, there is no kid, there are only evil, which must be dealt with harsh, before it spread [sic]! Hurry, boys! Seize all remaining national asset, lift said national asses out from the Peace-Encouraging Enclosure plonto [sic]!"
(117)

Phil uses medical vocabulary to refer to the enemy, who is considered to be an infection. The "evil" that the Inner Hornerites represent must be "dealt with" "before it spread [sic]." In another intervention, Phil claims that he wants to keep the democratic habits of the Inner Hornerites out of Outer Horner to avoid the possibility of a contagion. Phil's discourse frequently involves racism and eugenics because he uses them to justify his genocidal agenda. To the citizens of Outer Horner, it seems completely natural that, to keep their population safe and healthy, the Other must be eradicated. The use of health vocabulary in Phil's speeches and the consideration of the Inner Hornerites as animals represent that moment in modern biopolitics when, as in Nazi Germany, "the care of health [of the body, of the nation] and the fight against the enemy become absolutely indistinguishable" (Agamben, Homo Sacer 85).

From what I have explained, it is evident that Phil's rule is based on a biopolitical distinction according to which the lives of the Outer Hornerites are worth living because those of the Inner Hornerites must be eliminated. This discrimination is thanatopolitical because the self-perception of the Outer Hornerites is determined by the qualities and character of the opposite group and because the annihilation of the enemy and its habits is a matter of national survival. As Foucault explained in relation to modern biopolitical regimes: "the death of the other, the death of the bad race, of the inferior race (or the degenerate, or the abnormal), is something that will make life in general healthier: healthier and purer" (255).

4. THE DESTRUCTION OF THE BODIES

In his book Discipline and Punish, Michel Foucault highlighted the centrality of the body in contemporary punitive systems:

we can surely accept the general proposition that, in our societies, the systems of punishment are to be situated in a certain 'political economy' of the body: even if they do not make use of violent or bloody punishment, even when they use 'lenient' methods involving confinement or correction, it is always the body that is at issue -the body and its forces, their utility and their docility, their distribution and their submission. (25)

For this reason, he proposed to analyze the body, the imprisoned body, as a "microphysics of power" and to consider "penal practices less as a consequence of legal theories than as a chapter of political anatomy" (Discipline 28). In the context of modern biopolitics, the body is the preferred place for power to exert its force and therefore a privileged space to understand its logic.

The abuse and subjection of the human body are present in many of Saunders's short stories. As David Huebert has explained, "George Saunders's fiction finds astonishing ways to manipulate and subdue the human body": "These spectacles of physiological confinement, coupled with Saunders's recurring interest in prosthetics and physical deformity, present a consistently unsettling vision of the human and its persistently pathological relationship to the synthetic environments in which it is housed" (105). Huebert specifically refers to "Escape from Spiderhead," "My Flamboyant Grandson" and "The Semplica Girls Diaries," but the presence of the body in The Brief and Frightening Reign of Phil must also be underscored and analyzed.
In this novella, the destruction of the bodies of the Inner Hornerites and of "dissidents" like Freeda is Phil's preferred method to discipline and punish the governed. The fact that in Outer Horner people are killed by dismantling their bodies has to do with the continuity between body and bare life that is constitutive of modern biopolitical power. According to Agamben, the first register of bare life in the constitution of the modern political subject is implicit in the Writ of Habeas Corpus (1679):

[The new subject of politics] is not the free man and his statutes and prerogatives, nor even simply homo, but rather corpus that is the new subject of politics. And democracy is born precisely as the assertion and presentation of this "body": habeas corpus ad subjiciendum, "you will have to have a body to show." (Agamben, Homo Sacer 73)

From then onwards, the body becomes the foundation of the political, and the material that the sovereign will have to shape through biopolitical technologies of power. The fact that the bodies of the populations presented in the story are artificial and constructed with different pieces points precisely at the biopolitical control of the body that is part of political modernity. In the novella, the disassembling of the bodies represents the destruction of bare life and its political potential. This conception of the body is evident, for example, when Phil proposes to the Inner Hornerites to sell to him a member of their group, Carol, for twelve "smolokas" -an amount that would make up for three days' taxes. As a response to this proposal, Cal (Carol's husband) crosses the border and hits Phil. Cal asks the other Inner Hornerites to start a violent revolt, but his compatriots do not move because they are fearful of Phil's Special Friends. Cal's body parts are disassembled by the Outer Hornerites "for the good of the nation, in the interest of preventing further violence" (63). Phil then asks Leon, the guard, "to incarcerate the various parts of Cal at several discrete locations across the length and breadth of Outer Horner, in the interest of national security" (64), but Cal's blue dot, the upper part of his body, is made visible to the remaining Inner Hornerites as a way to discipline them: "as per Phil, [Cal's blue dot] was placed in a glass case a few hundred feet from Inner Horner, as a warning and a reminder to the other Inner Hornerites, who all night long, from the Short-Term Residency Zone, watched the sad blue dot that had formerly been Cal's torso expand and contract, as if hyperventilating, or sobbing" (64). Also, Freeda, one of the members of the Outer Horner Militia, has her body disassembled to prevent further disloyalties to Phil or to his initiative at the border. She is taken to pieces precisely because she is "not sure" about Phil's plan to make Inner Horner disappear. Phil interprets that Freeda is being "Disloyal" (108) to the "Certificate of Total Approval" (88) that she had previously signed, and her body is disassembled to discourage any betrayals and to prevent the spread of any democratic tendencies in Outer Horner:

"What a sad thing, that Freeda should prove to be a traitor!"
Phil said. "Well, let this be a lesson to all! A lesson that the disgusting traits that make those Inner Hornerites so disgusting, such as Disloyalty, such as undermining one's leaders via constant questioning, can even take root in us Outer Hornerites. I wouldn't be surprised if some of us didn't start getting smaller and doing mathematical proofs. We'll have to watch for that. We'll have to be vigilant. Jimmy, Vance, please help Freeda remind us to be vigilant, by constructing an attractive yet sobering display of the components of Freeda, so people can witness Freeda's components, and thus learn from them! What a wonderful thing for Freeda, to be so very educational! In this way, her life will not have been a total waste!" (110. Italics my own)

Before having her body disassembled, Freeda has a dream that involves the body of her daughter. This dream reveals her fears about what is going on in the Border Area. In it, Freeda is a dog and her daughter, Gertrude, is a tall vase that Phil holds looking for flaws until he shatters it:

"Put her down, put her down," Freeda barked at him. "Why do you want to be so bad?" "I am not bad," said Phil. "I am totally good. What I do, benefits all." Then Phil found a flaw and threw Gertrude against the wall, breaking her into a thousand pieces. Freeda woke and rushed to Gertrude's room. Relieved to find that Gertrude was not a broken vase and that her pink shelving was still intact, she gave Gertrude a kiss on the middle of her three rosy cheeks. (68. Italics my own)

As we see, the destruction of Gertrude's body in Freeda's dream is a consequence of an imperfection found by Phil. It is clear then that Phil's destruction of the bodies refers to the racist logic that, according to Foucault, is characteristic of modern biopolitical power. As Phil says in the passage just quoted: "I am totally good. What I do, benefits all" (68. Italics my own). The bodies of Cal and Freeda were destroyed so that their political attitudes could not "take root" or "infect" the Outer Hornerites in any possible way. The eugenic logic of Phil's dictatorial state justifies the killings to remove any excesses that could menace the health and security of the nation. In the totalitarian regime led by Phil, the perfect body is the equivalent of the perfect nation, and any physical or political deviation must be removed. This is why, in his inaugural speech as President of Outer Horner, Phil defines the Outer Hornerites by saying that "everything about us is as it should be" (86). Phil's destruction of his enemies' bodies represents the annihilation of bare life, which Agamben considers to be the condition of the political. This annihilation is a consequence of Phil's efforts to keep the "beautiful animal" 8 (Rancière, "Le malentendu" 51. My translation) of the political order in agreement with itself.
5. DESIRE AND AFFECT AS BIOPOLITICAL TECHNOLOGIES

In Willing Slaves of Capital, Frédéric Lordon analyzes the role of affect and desire as technologies of power to achieve consent in contemporary societies. Lordon studies how the dominant subjects today -notably business owners but also masters in general- manage to align the desires of employees and subordinates with their own desires. Following Spinoza, Lordon argues that

human essence, which is the power of activity -but generic and, as such, intransitive, a pure force of desire but as yet aimless- only becomes a directed activity due to the effect of a prior affection -something that happens to it and modifies it. It is the affection that points the desire in a particular direction and gives it an object for its concrete exertion. From this follows a radical reversal of the ordinary understanding of desire as the pull of a preexisting, desirable object. It is rather the push of the conatus that invests things and institutes them as objects of desire. And these investments are entirely determined by the interplay of affects. (14-15)

This implies that individual subjects are not in control of themselves or of their own will. On the contrary, they are alienated because "the real chains are those of our affects and desires," and these are generated externally (Lordon 17). Neoliberal corporations and other institutions hold that the gap between the desire of the master and that of his or her subjects is always too large. For this reason, they try to achieve the maximum possible alignment between the desires of the subordinate and those of the master. Following a logic that can be considered totalitarian, corporations and other institutions do this by totally mobilizing individuals to their service. This opens the possibility for the "unlimited commitment" of the subjects, who are pushed "to enter a regime of total vocation" (Lordon 38).

8 According to Rancière, both politics and aesthetics imply a certain "distribution of the sensitive [partage du sensible]" because they both intervene in the "distribution and redistribution of spaces and times, of places and identities, of words and of noise, of what is visible and what is invisible" ("Politique" 12. My translation). Based on this notion, Rancière explains that, in the French Classical period, the perfect political order was conceived as "a beautiful animal, constituted as a harmony between members and functions in an organic totality" ("Le malentendu" 51. My translation). As he says: "This model of the beautiful animal is also a paradigm of proportion between bodies and significations, a paradigm of correspondence and saturation: there must not exist in the community bodies in excess, bodies that circulate in excess of the real bodies; there must not be floating and supernumerary names, susceptible of constructing new fictions capable of dividing the whole or of unmaking its form and its fictionality" ("Le malentendu" 51. My translation).
Phil, the dictator of Outer Horner, manages the affects and the desires of the governed subjects to align them with his goals. In one of his speeches to the Inner Hornerites, Phil explains that he and his people will achieve political control through war and love: "But shoulder that musket we must, that musket of subduing you, and this we will, using our usual indomitable methodology and excellent creativity and spirit of love" (61). Therefore, at the same time that he exterminates the Inner Hornerites, Phil gains adepts to his totalitarian regime by manipulating the passions of his population.

In the story, Phil uses language as a means of manipulating the Outer Hornerites and maintaining his power over them. Phil utilizes language to enact his biopolitical agenda and to secure the support and compliance of those he governs. Saunders follows a Foucauldian concept of discourse, which suggests that the language used by subjects conforms to a pre-existing "order of things" determined by power. Foucault thinks that "in every society the production of discourse is at once controlled, selected, organized and redistributed according to a certain number of procedures, whose role is to avert its powers and its dangers, to cope with chance events, to evade its ponderous, awesome materiality" (Foucault, Archaeology 216). In many of his short stories, George Saunders presents language as an ideological prison that reflects how power has become embedded in the identity of the characters, shaping it from within and leaving no possible exit. This is precisely what Hayes-Brady has noted in regard to Saunders's conception of language:

The ironic quasi-metafictional devices with which Saunders peppers his voices move away from being an artistic commentary, becoming a socio-political analysis of how language and linguistic control liberates and restricts society. […] By immersing the reader in the protagonists' restricted or incomplete vocabulary, by highlighting its absurdity, the reader is necessarily attuned to the absurdity of the struggle for linguistic independence, and by extension the struggle for independence from the commercial and political forces that govern and prescribe available vocabularies. (37. Italics my own)

In the same way, Saunders thinks that there is a correlation between the use of language, quality of thought, and high political standards -a view that he shares with George Orwell. In "Politics and the English Language" (1946), the British writer explained that "modern English, especially written English, is full of bad habits. […] If one gets rid of these habits one can think more clearly, and to think clearly is a necessary first step towards political regeneration" (3). For Saunders too, language has the power to found reality. As he explains in "The Braindead Megaphone," "we consider speech to be the result of thought (we have a thought, then select a sentence with which to express it), but thought also results from speech (as we grope, in words, towards meaning, we discover what we think)" (4). And he thinks that political and moral problems result from an inappropriate use of language. Therefore, the solution to these problems can be linguistically induced: "The world, I started to see, was a different world, depending on what you said about it, and how you said it. By honing the sentences you used to describe the world, you changed the inflection of your mind, which changed your perceptions" ("Esther Forbes" 62). Considering Saunders's belief in the precedence of language over
thought, it is no surprise that Phil's control of Outer Horner comes with a corrupt and untruthful language that he uses to manipulate his followers. As Hayes-Brady says, Phil "alters the political landscape of the imaginary Horner by manipulating the vocabulary of the other inhabitants" (25). Phil's words do not aim to represent the truth of a given situation. Instead, the situation is presented in such a way as to serve his political interests and strengthen his totalitarian regime. This is clear, for example, when Phil describes the fall of a pile of Inner Hornerites inside Outer Horner as an "Invasion." This fall could never constitute an invasion, as the Inner Hornerites were unarmed and they only happened to be inside Outer Horner because the tower built to escape from taxation had collapsed (37). Even though this is clear to Phil, he calls it an "Invasion" and names the day when it happened "Dark Dark Thursday" to manipulate the emotions of his acolytes. Similarly, the next day (when the Outer Hornerites had to collect the overdue taxes from Inner Horner) is called the "Memorable Friday of Total Triumphant Retribution" by Phil (54).

In the story, Phil employs language numerous times to alter reality to his advantage. One such instance is his assault on the Inner Hornerites, which he portrays as an act of kindness, thus reversing the actuality of the situation. When Cal attacks him, Phil tells him that the Outer Hornerites should not be considered aggressors "when in fact we [the Outer Hornerites] are selflessly lending you [the Inner Hornerites] precious territory" (61). In the same way, when Phil is called to talk to the President about the fact that he disassembled Cal and scattered his severed parts throughout the country, his words -in a fragment that qualifies as the type of rhetoric despised by Orwell in the essay cited above- do not reveal, but hide the truth:

"Yes, Mr. President," said Phil. "You called me here for a report on the situation at the border. And I'm happy to report that I was recently able to gracefully quell a disturbing outbreak of violence at the border by enacting certain physical rearrangements designed to prevent further outbreaks of violence, thus rendering the instigator of the violence incapable of instigating further violence, via separating the instigator's component parts and relocating them in discrete physical locations." (74)

Phase I of the "Border Area Improvement Initiative" (88) consists of building a fence around the "Short-Term Residency Area." However, instead of using the words "fence" or "cage," Phil invents the term "Peace-Encouraging Enclosure" (102). The dialogue that follows is characteristic of Phil's linguistic manipulation:

"What, we're in jail?" said Elmer. "You're putting us in jail now?" said Wanda. "How typical of the Inner Horner mindset!" said Phil.
"To be unable to distinguish a jail from a Peace-Encouraging Enclosure.Safe inside the Peace-Encouraging Enclosure, you will be protected from your innate violent tendencies, and we will be protected from you.It is a real win/win."(102) The manipulation of language is evident whenever the author capitalizes words that should normally be written in small letters.The capitals mark that the term contains a certain ideological nuance, for example: "Border Area," "Peace-Encouraging Enclosure," "Special Friends," "Outer Horner Militia," "Expeditionary Force," "Invasion in Progress," "Outer Horner Border Guard," etc.The capitalization of words -which is ultimately decided by the author himself (and not by the character)-indicates that they have been shaped by power and that they carry an ideological weight that exceeds their ordinary meaning.The author marks these terms with a capital letter as an indication that they represent the language of power and that Phil uses them to pursue his agenda.As Kasia Boddy explained, these terms constitute a sort of "euphemism" that, on the one hand, "provide an endlessly rich vein for satirical mining" (2), but, on the other, show how language is an effect of power and ideology 9 . Apart from the calculated use of language, Phil specifically targets the emotions and desires of his followers to make sure that they remain aligned with his own goals.Phil praises his acolytes and makes them feel important as long as they act as he wants, as we see in this example: Next morning, Phil and the Outer Horner Militia (Freeda, Melvin, and Larry) arrived at the border before dawn and stood watching the Inner Hornerites sleep while standing up."Snooze, snooze, snooze," said Phil. "Sort of lazy, aren't they?" "Whereas us," said Larry, "we're up before dawn, diligently working.""That's right Larry," said Phil. "Good observation.""Doing our diligent work of collecting taxes," said Melvin."Super, Melvin," said Phil. "We really are a diligent people.""Diligently collecting taxes to protect the security of our nation," said Freeda. 9 Referring to the short-story "Pastoralia," Hayes-Brady has explained that "the capitalization of the terms situates them in a corporate discourse, reflecting the emphatic capitalization of marketing languages, and, along with the recurrence of the phrase, suggests that it is a corporate mantra of sorts, used to manage employee behaviors.The same repetition of mantra-like language is visible everywhere in Saunders's work" (31). PAPER ACCEPTED "You know what?" said Phil. 
"After spending some time with you folks, I am tempted, in terms of our most important National Virtue, to replace 'Generosity' with 'Remarkable Intelligence.'" Larry,Melvin, Phil's "Special Friends", Jimmy and Vance, are manipulated similarly.However, in this case, the manipulation is requested by the workers themselves in an attitude that directly points to the "neoliberal quest for full alignment" explained by Lordon (51).When the "Special Friends" start to work for Phil, he agrees to pay them one "smoloka" per day.This makes them very happy, but, before signing the contract, Vance has an additional request: The "Special Friends" want to be praised and feel valued for their work, and the psychological reward is far more important to them than the monetary incentive.This emotional recompense has to do with the will of the subordinates to make their master happy.As Lordon explains, in today's society employees no longer work only for a salary but, among other things, "to be identified by the master as the cause of his or her joy, so that the master will love [him or her]" (71).Later on in the story, when Phil's "Inaugural Party" takes place, the Special Friends are sitting in a corner "wearing headphones, listening to personalized Tapes of Praise, made for them by Phil": "Oh jeez," said Jimmy, too loudly."He just said I have great biceps!" "He just said, about me?" said Vance, also too loudly."That he loves the focused look I get on my face when following an order.""He likes the way my lats flare when I pick someone up!" shouted Jimmy. "I work well with others!" shouted Vance."There's a deep intelligence in me that others rarely see!" shouted Jimmy.(85)(86) This attitude is related to the "commercialization of human feeling," a motif that, according to Kasia Boddy, Saunders's stories explore in a variety of ways: "most directly, in stories set in theme parks, where employees are required to dress up and perform roles, but also in businesses that offer carefully edited experiences and PAPER ACCEPTED feelings […].Those who work in these places have strict 'feeling rules' to follow for it is 'negative' attitudes (rather than low productivity) that threaten their jobs" (5).In the case of the Special Friends, this "commercialization of feeling" appears in a sarcastic and inverted way, as it is the employees themselves who demand love and praise as a reward for their attitude at work. 
In regards to the political manipulation of affect, I must also allude here to the exercise of biopolitical power in the nation of Greater Keller, whose "Expeditionary Force" liberates the Inner Hornerites from the genocidal project led by Phil. Greater Keller "ran like a six-inch-wide circular strip of ribbon around Outer Horner," and, because the country "was so thin it was almost nonexistent, it was rarely visited much less invaded, and was therefore very prosperous" (95). The nine citizens that form the nation of Greater Keller have their power relations, their body shapes and their habits determined by the particular spatial configuration of their country:

The nine Greater Kellerites spent their days walking behind their President single file, carefully placing one foot in front of the other, happy and cordial, engaged in endless energetic conversation about the appearance of the portion of Outer Horner they happened to be walking around, the nuances of the cup of coffee they were currently enjoying, and/or the enjoyable impression being made on them by the way the person in front of them looked when viewed from behind. (95)

In a physical disposition that represents a perfect alignment between government and the governed, the Greater Kellerites are numbered from one to nine, with President Rick, the First Lady, and the First Daughter taking the first three positions in the hierarchy of the country and in the walking line. President Rick and the rest of the Greater Kellerites give huge importance to the well-being of the nation and make "enjoyment" their main political goal. In this way, Greater Keller represents a "society of mass hedonism and consumerism" that is, together with totalitarianism, the main form that biopolitical power took in the 20th Century (Agamben, Homo Sacer 13).

The importance given to enjoyment in the nation's life explains why Cliff, the "National Enjoyment Assessor," is the fourth person in the country's hierarchy. He is in charge of the "National Life Enjoyment Index Score" (114), a score from 0 to 10 that determines at any given moment the level of enjoyment of the nation and helps President Rick make his political decisions. The main habits of the Greater Kellerites -conversing and drinking coffee- have to do with the biopolitical need to keep this level as high as possible. When Dale, Citizen #9, reports to the group what is going on in the "Border Area" under Phil's command, the "National Life Enjoyment Index Score" in Greater Keller drops to 3 out of 10 because of the anxiety produced by the situation in Outer Horner. President Rick then sends an "Expeditionary Force" (114) to save the Inner Hornerites from extinction, but also, and maybe more importantly, to increase the "National Enjoyment Level" of his population.

When the "Expeditionary Force" of Greater Keller arrives at the Border Area, Phil's Special Friends, who "had never in their lives seen anyone bigger than themselves" (118), suddenly feel nostalgic for their previous job and abandon Phil. After Phil's death, President Rick frees the people of Inner Horner from the cage (the "Peace-Encouraging Enclosure") because, as he says, "that doesn't look very Enjoyable" (120), and gives this advice to the population he had just liberated: "Our advice, to all of you people, is Enjoy!"
said President Rick. "Life is full of beauty. Why fight? Why hate? Learn to Enjoy, and you will have no need to fight, and no desire to! Love life, walk in a circle, learn to enjoy coffee! Will you do that? Will you promise to try that?" (120). As a consequence of the liberation of Inner Horner, the "National Enjoyment Level" of Greater Keller reaches 9.8 out of 10, "due to their pride in their recent heroism and their anticipation of the many days of Enjoyable storytelling that lay ahead" (121).

Throughout the novella, Saunders highlights how political control is achieved by manipulating the emotions and desires of the governed. The story portrays language, especially the language of power, as a tool that shapes social reality. As a result, it becomes a significant instrument of biopolitical governance, alongside violence. Phil alters the meaning of words to enforce his agenda and to achieve the alignment of his followers. In addition, he praises his supporters to cultivate loyalty, manipulating their emotions to ensure their allegiance to his regime. In the nation of Greater Keller, enjoyment is the technology of power that allows President Rick to achieve the perfect alignment of his subjects. It is through enjoyment that the desires of the citizens perfectly correspond to those of the ruling family. The "National Enjoyment Level" serves as the index to gauge the political approval of the population and minimize the chances of dissent or rebellion. As discussed above, Phil uses fear and terror to consolidate his rule. He punishes even the slightest hint of disloyalty or rebellion with the destruction and display of bodies. Phil hates his enemies and seeks to annihilate them using racist and eugenic measures. This terrorizes the Inner Hornerites and Phil's followers, thereby reinforcing his power.

6. JUSTICE BEYOND BIOPOLITICS?

The totalitarian regime presented in the story is implemented through fear and love. As explained above, physical violence -the Outer Horner Militia and the Special Friends- plays a significant role in the consolidation of Phil's rule. The threat of assassination is ultimately what makes the Outer Hornerites and the Inner Hornerites obey the dictator. It should come as no surprise, then, that the liberation from Phil's oppression is a result of violence. It is after being hit by an object thrown from Inner Horner that Phil starts to lose his linguistic ability and, therefore, his power, until he dies 10.

10 Clare Hayes-Brady has noted this about the relationship between language and power in Phil's leadership: "the political heft of language […] is a way for Phil to cope with his embittered personal circumstances and to deflect attention from his ugliness and rejection […]. This connection of the personal and the political resonate throughout Saunders's work, and Phil's final fall from grace further strengthens the link, turning him into a pitiable figure […]. Phil's demise, interestingly, is linguistically figured, with his indirect monologue descending into nonsense in the absence of his brain" (28).

In addition to that, the arrival of the "Expeditionary Force" to
the Border Area scares Phil's Special Friends, who abandon Phil and allow the Greater Kellerites to liberate the Inner Hornerites from oppression. Before Phil's downfall, some Inner Hornerites unsuccessfully revolted against his abuses. They engaged in democratic discussions, but were unable to protect themselves. When the Outer Horner Militia removed the apple tree, the stream and the dirt, the Inner Hornerites held "a whispered frantic national referendum" (17), but they were incapable of agreeing on what the "primary issue" (18) was, or on where they should start the debate. Their respect for each other and their democratic conventions proves to be impractical and, ultimately, negative for themselves. For this reason, at some point, Curtis proposes to act: "I say enough talking," said Curtis. "I say it's time we did something" (20). Their first solution is to send a letter to the President of Outer Horner asking him to discuss with them the unfair taxation system that Phil has imposed upon them. This peaceful solution does not work out well, as the President ends up validating Phil's actions and appointing him as "Special Border Activities Coordinator" (32). Later on in the story, when the clothes of the Inner Hornerites are removed by Phil's unfair taxation system, Curtis again proposes to act and to resist: "Oh, this is crazy," Curtis said. "How long are we going to take this? We've got to do something. We've got to start resisting" (57). However, when Cal is about to be disassembled and asks his compatriots to join him in the fight against the Outer Hornerites, they do not dare to do so, as they are afraid of Phil's Special Friends. After Cal is disassembled, all the Inner Hornerites feel guilty, and Curtis justifies his cowardice by claiming that he was not advocating for actual violent resistance but only for "conversational resistance" (100). Here Saunders seems to criticize democratic forms of deliberation when they lead to weakness and inaction. He seems to acknowledge that words and good intentions alone are not enough to fight a totalitarian regime that is ultimately enforced through violence. It seems, then, that violence is the only possible way to fight back against the abuses of power. Phil's regime only falls when he is hit by an object thrown from Inner Horner. The blow to the back of Phil's head leads to the loss of his linguistic ability and later on to his death. Thereby, the story makes evident the strong relationship between language and power, as Phil's political clout is coextensive with his control of discourse. This is how the text presents this moment:

A great high-pitched wailing now sounded from Inner Horner. It may have been this that caused Phil's rack to spasm. Oh shoot, wow, Phil thought, that really hurts. He had only got this same spasming sensational [sic] once before in his life, and that had been the worse, due to, just after that, his speech would began suffering. Darn, Phil thought. It are [sic] happening now, somewhat slight [sic]. He'd better hurry, get this Phase III wropped [sic] up, so he could go homer and find that stupid brawn, and remont [sic] it (111)

In Outer Horner, all the characters are blind followers of Phil except for Freeda. She is kind and empathizes with the Inner Hornerites. For this reason, she keeps a distance from Phil's actions and is ultimately accused of disloyalty and disassembled. When Cal's body is broken up, Freeda is deeply concerned, and she writes a letter to the President of Outer Horner denouncing Phil's criminal practices. In doing
so, she acknowledges the radical equality of the Outer Hornerites and the Inner Hornerites, even at the price of putting her own life at risk. This means that she firmly rejects the distinction between political existence and bare life that is imposed by the sovereign dictator. In doing so, Freeda represents "kindness" in the story, a virtue that Saunders encourages even in "the worst environments" 11 (Boddy 9). Freeda's in-betweenness represents a path to revolutionary politics that is explained by Flavia Costa (regarding Agamben) in these terms:

In general, in our culture, the human being has always been thought of as the articulation and conjunction of two opposed principles: soul and body, language and life, and in this case a political element and a living element. On the contrary, we must learn to think the human being as that which results from the disconnection of these two elements and to research not the metaphysical mystery of conjunction, but the practical and political mystery of the separation. ("Entrevista" 18. My translation)

According to Agamben, modern biopolitics constructs political subjects from bare life, bios from zoé. Political emancipation should come, then, from the disconnection of these two sides of biopolitical power. Recognizing the equality of Outer Hornerites and Inner Hornerites implies the suppression of the biopolitical bond and is a way to achieve actual justice, which, as Alain Badiou explains, "means examining any situation from the point of view of an egalitarian norm vindicated as universal" (20).

11 Saunders's commencement address at Syracuse University in 2013 was on the topic of kindness. It was published as Congratulations, by the Way: Some Thoughts on Kindness (2014). Kasia Boddy has noted: "For Saunders, being kind is both a personal and a political imperative, but, most importantly, […] it is something that requires work: 'go after these things,' he urged graduates" (8).

Apart from the political dimension of these characters (Cal and Freeda), the novella proposes the intervention of the "Creator" (123) as a way to overcome the biopolitical distinction between Outer Hornerites and Inner Hornerites. The massive hands of the Creator appear above the Border Area to stop the fighting, and they create a new people out of the parts of the old inhabitants. The Creator is a kind of supernatural deity (different from the "God Almighty" that Phil refers to throughout the story) with the capacity to disassemble and assemble the bodies of the subjects and to form a new society:

The Outer Hornerites and Inner Hornerites had all thought about the Creator, and talked about the Creator, and some of them had even prayed to the Creator, but none of them had ever dreamed the Creator was so big. The fighting stopped, the dust cloud settled, the nations of Inner and Outer Horner stared up, wide-eyed and open-mouthed. Then a second hand descended, with a vegetable garden running across the wrist, and two mechanical fingers, and a frozen lake in its palm, holding a spray can, and the Creator's left hand sprayed the Border Area, and the Outer and Inner Hornerites fell instantly asleep. The two hands, working together, gently disassembled the Outer Hornerites. Then they gently disassembled the Inner
Hornerites. Using the Inner and Outer Horner parts, they rapidly constructed fifteen entirely new little people. (123-124)

As we see, the Creator seems to be a sort of social engineer (or surgeon) who puts the old society to sleep with a spray and "gently" forms the bodies of the new subjects. The intervention of the Creator and the fact that the subjects' bodies in the novella are artificial seem to imply that there is no natural existence, but only a biopolitical one, mediated or constructed by power, in this case, divine power. By constructing the new subjects, the Creator aims to found a universal subject and a perfect society. However, these efforts do not succeed, as the tendency "to turn our enemies into objects" (Saunders, qtd. by Nayebpour and Varghaiyan 848) is soon going to reappear in New Horner.

The only part that the Creator does not use to form the New Hornerites is Phil's brain, which ends up eaten by fish, and Phil's body, which is mounted on a black platform with a plaque on it that reads "PHIL MONSTER" (126). The exhibition of Phil's body refers again to the centrality of the body in modern biopolitical regimes, as explained by Foucault and Agamben. Phil's torso becomes the bare life of the new world created by the Creator, that is to say, an excess (and a threat) that founds the life of the city through its very exclusion:

As the months went by, the New Hornerites took to avoiding The Phil. Although nobody could exactly say why, The Phil gave them the creeps. Soon the path bowed out around it, weeds overtook it, and all that could be seen of The Phil was the tip of Phil's rack, which stuck out of the weeds like a bad flagpole. Animals burrowed in The Phil, birds nested there, balls accumulated there because the New Horner kids were too scared to retrieve them. And that is where Phil is today: hidden in a thicket of weeds, not loved, not hated, just forgotten, rusting/rotting, with even the sign that proclaims his name fading away. (129)

Once the New Hornerites are created, the massive hands of the Creator lift the new people and convey to them a message of love and peace:

Then the massive hands lifted the new people up to a pair of giant indescribable lips and whispered, in a fundamentally untranslatable Creator-language, something that meant, approximately: THIS TIME, BE KIND TO ONE ANOTHER. REMEMBER: EACH OF YOU WANTS TO BE HAPPY. AND I WANT YOU TO. EACH OF YOU WANTS TO LIVE FREE FROM FEAR. AND I WANT YOU TO. EACH OF YOU ARE SECRETLY AFRAID YOU ARE NOT GOOD ENOUGH. BUT YOU ARE, TRUST ME, YOU ARE. (127)

In this message, the Creator is answering these very significant questions: "what becomes of the law after its messianic fulfillment?" or "what becomes of the law in a society without classes?" (Agamben, "State" 63). With the mandate to be kind, to be happy, to be free, and to love oneself, the Creator in the story proposes, like Walter Benjamin, a kind of justice "in which the world appears as a good that absolutely cannot be appropriated or made juridical" (qtd. by Agamben, "State" 64). Therefore, the path to justice proposed in the story does not imply the cancellation of the law, but a new use of it based on "its deactivation and inactivity" (Agamben, "State" 64).
Once the Creator delivers his message, the hands remove the strings that marked the Border Area and the "Peace-Enforcement Enclosure" and plant a sign that reads: "Welcome to New Horner" (127). Along with a new understanding of the law, the unification of the bodies of the Inner Hornerites and the Outer Hornerites conducted by the Creator aims to create a utopian society beyond biopolitical power. Referring to The Brief and Frightening Reign of Phil, Kasia Boddy has explained how "working together with one's neighbors is Saunders's happy ending, […] a counter to the human tendency to continually divide the world into dualities" (10). However, the ending of the story is not as happy as Boddy assumes, because the equality among the New Hornerites and the overcoming of biopolitical distinctions can only be temporary. 12

Phil's body now constitutes the political excess that founds the new nation, and this excess allows the New Hornerites to be united and live in peace. When the New Hornerites wake up, they find Phil's platform and comment on its meaning:

On the way to a nearby apple tree, they passed a hulking black mess on a platform. "What is that thing?" said Gil.

Leona looked at Gil.

"It's a Phil," said Clive. "What is a Phil?" said Sally. "A monster," said Leona. "Apparently," said Fritz. "Or maybe Monster was his last name?" said Gil. "You know: Phil Monster. Like: Hi, I'm Phil Monster? It's not entirely clear from the syntax." "Whatever," said Sally. "Let's go eat."

Syntax? What the heck kind of word was that? What was Gil, some kind of big-shot? She hated big-shots, she suddenly realized. She'd have to watch Gil. She'd talk to Sally about it. Sally didn't seem like a big-shot. Sally seemed sensible and moral and down to earth. Sally, like Leona, was compressed and ball-shaped, unlike the freakishly elongated Gil. (128)

This minor disagreement marks a distinction between friends and enemies, as it divides Leona and Sally from Gil, based on differences in body shape and language use. In the paragraph that marks the end of the story, the narrator explains that Leona has started to visit Phil's platform and that she dreams of a new world populated only by people like her:

Except sometimes Leona comes to visit. She does not find The Phil monstrous, but strangely beautiful, and sometimes sits in the thicket for hours, dreaming, for reasons she can't quite explain, of a better world, run by humble, compressed, ball-shaped people, like her and Sally, who speak, when they speak at all, in short sentences, of their simple heroic dreams. (129-130)

Although a new world beyond the law was created where justice seemed attainable at last, the novella suggests that new forms of biopolitical power are emerging and that the next genocide is already in the making. This seems to imply that any attempt to create a perfect society is doomed, because biopolitical racism, "the tendency to continuously divide the world into dualities" (Saunders, qtd. by Nayebpour and Varghaiyan 848), and the "genocidal impulse" (Saunders, qtd. by Rendueles) are intrinsic to human societies.

12 In his essay "The New Mecca," George Saunders reflects, ironically, on the benefits of globalization and of the mixing of races and cultures that comes with it: "in my tube at Wild Wadi [a waterpark in Dubai], I have a mini-epiphany: given enough time, I realize, statistically, despite what it may look like at any given moment, we will all be brothers. All differences will be bred out. There will be no pure Arab, no pure Jew, no pure American American [sic]. The old dividers -nation, race, religion- will be overpowered by crossbreeding and by our mass media, our world Culture o' Enjoyment. Look what happened here: hatred and tension were defused by Sudden Fun." (28)
CONCLUSIONS

In this article, I have explained how Phil creates a totalitarian regime by inventing an enemy that needs to be annihilated. Phil justifies the genocide he is leading through racist and eugenic arguments, as he considers the destruction of the Inner Hornerites a matter of national care and security. I have discussed Phil's destruction of the enemies' bodies in relation to this biopolitical goal. In the totalitarian state of Outer Horner, any political or aesthetic excess must be removed to avoid any deviations and to ensure a strict coincidence of the population with itself.

In addition, I have analyzed how the biopolitical power presented in the story aims to control the desires and affects of the governed. In this regard, I have examined Phil's manipulative use of language and how he praises his followers to ensure their alignment with his political objectives. I have studied the use of "enjoyment" by President Rick, of Greater Keller, to achieve the complete domination of his subjects. I have interpreted the political positions of the characters who revolt or speak out against Phil's totalitarian regime. Moreover, I have explained how the novella ends with the creation of a unified nation, New Horner, which at first seems to overcome the biopolitical distinctions of the past but soon hints at a new genocide.

The biopolitical strategies presented in The Brief and Frightening Reign of Phil constitute a literary transfiguration of the technologies of power that shape contemporary Western societies. If these technologies have existed in the West since the 18th century, they have been perfected over time. A significant update of these technologies occurred after the 9/11 attacks with the onset of the "War on Terror." These events were the main preoccupation of public discourse in the United States at the time of the publication of Saunders's novella and constitute its historical background. As Kimon Keramidas has mentioned: "released in 2005, the cultural context is clear; Saunders would never be accused of subtlety. The book was written during [the] aftermath of the US-Iraq war, and it clearly references the politics of the day." The creation of an external enemy, the obsession with the security and care of the population, the disciplinary destruction of the bodies, and the use of affect as a biopolitical technology may well be a literary transfiguration of these real-life events. As Saunders learned from one of his "literary heroes" (Huebert 116), direct representation may not be the most convenient or truthful way to represent historical facts in literature. A short story may carry the knowledge or experience of what happened in reality without the need to refer explicitly to the circumstances:

Your real story [what happened to you] may have nothing to do with your actual experience, Vonnegut seemed to be saying. In constructing your black box [your short story], feel free to shorthand those [real life] experiences, allude to them sideways, or omit them entirely. Joke about them, avoid directly exploiting them, shroud them in an over-story about aliens: you know what you know, and that knowledge will not be shaken out of your stories no matter how breezy or comic or minimalist your mode of expression, or how much you shun mimesis. ("Mr. Vonnegut" 78. Italics my own)

Despite probably being inspired by real-life events, The Brief and Frightening Reign of Phil has the potential to be interpreted beyond its historical context. This is because it showcases the technologies of power
that are inherent to political modernity. By presenting this gallery of biopolitical technologies to his readers, Saunders effectively aids our comprehension of contemporary power dynamics and urges us to fight for our independence "from the commercial and political forces" (Hayes-Brady 37) that govern contemporary societies.

"Vance, jeez!" whispered Jimmy. "Don't get all demanding! You'll screw it up!" "Jimmy, don't worry, I know what I'm doing," said Vance. "What I want, sir, to, uh, request, additionally? Is that, every now and then, you say something nice about us. If that's not too much. Like you could say something about how much potential we have, or how obedient we are, it doesn't even need to be true. Just something nice to us every day." "We didn't get much of that at home," said Jimmy. "Mostly it was just, you know, Jimmy you jerk, how did you get so dumb? That sort of thing." […] "I'll tell you what," said Phil. "Every day, in addition to your smoloka, I'll say something nice about each of you." (50. Italics my own)
15,194.2
2024-08-28T00:00:00.000
[ "Philosophy", "Political Science" ]
Magnetic-optical transitions induced by twisted light in quantum dots

It has been theoretically predicted that light carrying orbital angular momentum, or twisted light, can be tuned to have a strong magnetic-field component at optical frequencies. We here consider the interaction of these peculiar fields with a semiconductor quantum dot and show that the magnetic interaction results in new types of optical transitions. In particular, a single pulse of such twisted light can drive light-hole-to-conduction band transitions that are cumbersome to produce using conventional Gaussian beams or even twisted light with dominant electric fields.

Introduction

Twisted light (TL) is light having a helical wave front. Such light exhibits several interesting features: Due to the azimuthal phase dependence, a phase singularity occurs at the beam axis, leading to the name optical vortex. Furthermore, in addition to the spin angular momentum (SAM) associated with the handedness of circular polarization, such light fields carry orbital angular momentum (OAM). Depending on the combination of SAM and OAM, TL beams may have strong field components along the propagation direction or a strong magnetic field close to the beam axis. Research on TL nowadays spans several areas of fundamental and applied physics [1], including its potential for quantum communication [2]. Of particular interest is the interaction of a TL beam with matter, which opens up the possibility to address unusual transitions [3,4] and to excite typically dark modes in plasmonics [5]. In this paper we focus on the interaction of TL with a semiconductor quantum dot (QD). QDs are discussed for many applications in optoelectronics and spintronics [6], which, however, require precise control of the optically excited states. Here we will show that the interaction of a QD with TL having a strong magnetic component may be used for the excitation of specific light-hole (LH) excitons, which cannot easily be addressed by conventional Gaussian beams or even twisted light with dominant electric fields.

Twisted light with a strong magnetic component

TL can be characterized by the handedness of its circular polarization σ (SAM) and its OAM quantum number ℓ. We recently identified two distinct classes of TL [3,4], which we named the parallel and antiparallel classes, reflecting the relative orientation of SAM and OAM. Figure 1 depicts the electric field profiles for a beam with ℓ = 2 and σ = ∓1, showing the pronounced difference in the topology of the field lines. Starting from the vector potential for a Bessel beam with given values of ℓ and σ propagating in the z-direction, which is an exact solution of the vectorial Helmholtz equation, the behavior of the electric and magnetic field in a region close to the beam axis has been derived in [3]. Remarkably, it has been found that especially the antiparallel class is characterized by unconventional features, which become particularly strong in the case of tightly focused beams: (i) the electric field close to the beam axis is dominated by its longitudinal component; (ii) for |ℓ| ≥ 2 the beam in that region is dominated by the magnetic field. The virtue of employing the longitudinal electric field component of antiparallel beams with |ℓ| = 1 for the excitation of LH excitons has already been discussed in [7]. Here we complement this work by showing that the use of the dominant magnetic field present in an antiparallel beam with |ℓ| = 2 again opens up new excitation pathways which are usually inaccessible.
Close to the singularity, the field amplitudes take the form given in Table 1 of [3]. Here, q_r and q_z are the transverse and longitudinal wave vectors, where q_r is inversely proportional to the beam waist, and r_⊥ is the in-plane coordinate with |r_⊥| = r and azimuthal angle ϕ. We will consider the excitation by tightly focused TL, i.e., beams with the paraxial parameter q_r/q_z ≈ 1. Ẽ_0 and B̃_0 = Ẽ_0/c are the respective field amplitudes. When approaching the beam center, i.e., for r → 0, we find that the electric field E and the longitudinal magnetic field B_z vanish, while the transverse magnetic field B_⊥ remains finite. Therefore, the transverse magnetic field dominates the interaction with particles or nanostructures that are much smaller than the beam waist and are placed at or close to r = 0, which is in fact a surprising effect at optical frequencies.

Recently, we have developed the theory of light-nanostructure interaction for the electric and magnetic components of general antiparallel TL beams [3,4]. It turned out that the interaction can be written in a form similar to the well-known dipole approximation for electric and magnetic fields [4]. In particular, for TL with ℓ = 2 and σ = −1, in which the magnetic field dominates, the Hamiltonian can be written in the form given in [4], with q and m being the electron charge and mass, respectively, ω the frequency of the light, r_± = x ± iy, p_± = p_x ± ip_y, and c.c. denoting the complex conjugate.

Quantum dot light-hole states

We consider a semiconductor QD, where the conduction band states are s-type (|s⟩) and can be classified by the electron spin given by the quantum number s_z = ±1/2. For the valence band states (|p_x⟩, |p_y⟩, |p_z⟩), due to their p-type character, one finds states with total orbital angular momenta j = 3/2 and j = 1/2, where the j = 3/2 states are split into heavy holes with j_z = ±3/2 and LHs with j_z = ±1/2. Here, we focus on the LH states; the corresponding Bloch functions for electrons and holes are built from these orbitals and the spin states, where the arrow notation indicates the electron spin. Note that the sign of j_z refers to the angular momentum of the holes, which is opposite to that of valence band electrons. To describe the full wave function of electrons and holes in the QD, we further apply the envelope function approximation. Within this scheme, the Bloch functions are multiplied by the respective envelope functions for electrons and holes, which have been further separated into a radial part R_{n,m}^{e/h}(r), an envelope angular momentum part e^{imϕ}, and a part Z^{e/h}(z) in the z-direction. Assuming a flat QD, we restrict ourselves to the ground state in the z-direction, assuming wave functions Z^{e/h} with well-defined parity.

The interaction of twisted light with light holes

To calculate the matrix elements between LH valence and conduction band states, we follow the standard procedure in the envelope-function formalism: We split the integration into a sum over all lattice vectors R and an integral over the unit cell indicated by the coordinate r′ [8]. Finally, we replace the sum over lattice vectors by an integral over the whole system. Following this procedure, in the Hamiltonian (1) the products of coordinates and momenta separate into terms such as (R_+ + r′_+)(P_z + p′_z). Keeping in mind the orthogonality and parity of the Bloch functions under study, one finds that all matrix elements involving only intracell coordinates r′ or only envelope coordinates R vanish.
Furthermore, the integrals over the envelope functions including either z or P_z vanish, i.e.,

∫ dz Z^{e*}(z) z Z^{h}(z) = ∫ dz Z^{e*}(z) P_z Z^{h}(z) = 0,

due to the same parity of the ground state functions Z^{e/h}(z). The only remaining matrix element then takes the form of Eq. (3), in which we have introduced the microscopic matrix elements M_z and M_{p_z}. In Eq. (3), we find that, due to the optical selection rules, only electron-hole pairs having opposite angular momenta of the Bloch state are excited, because only the matrix elements M_z and M_{p_z} survive. In other words, only excitons with a total angular momentum of zero (i.e., J_z = s_z + j_z = 0) are excited. This is very different from the excitation with plane waves. Plane waves have an angular momentum of ±1 and thereby excite excitons with a total angular momentum J_z = ±1. To excite LH excitons in a QD with J_z = 0, one would otherwise have to apply an excitation from the side of the QD, which typically requires cleaving the sample.

We can now evaluate the macroscopic integrals over R for the two different parts. For the first term, we rewrite R_+ = R e^{iΦ}. For the second term, we write the momentum operator in cylindrical coordinates, P_+ = −i e^{iΦ} [∂_R + (i/R) ∂_Φ]. Thus, both terms contribute a factor ∼ e^{iΦ} to the angle integral. This shows that, by exciting the QD with a magnetic TL beam having ℓ = 2 and σ = −1, the envelope state changes by ∆m = 1, while the total band+spin angular momentum of the exciton is zero. Such a transition is forbidden for excitation with a plane wave due to dipole selection rules, and it is also not present in the case of excitation by a TL beam with a dominant electric field in the z-direction (i.e., an antiparallel TL beam with |ℓ| = 1) [7].

Figure 2. Optical transitions between light-hole and conduction band states in a QD induced by single pulses of twisted light having ℓ = 2 and σ = −1. For comparison, transitions due to beams with ℓ = 0 and ℓ = 1 are also included [7]. Note that, for clarity, hole states with m = ±1 are not depicted.

We thus find that, by using TL beams with |ℓ| ≤ 2, all possible exciton states in the lowest two shells with m = 0 and m = ±1 can be excited.

Conclusions

We have demonstrated that the optical-magnetic interaction in the case of excitation with a TL beam with OAM ℓ = 2 and SAM σ = −1 generates LH exciton states with no band+spin angular momentum (J_z = 0) but with envelope angular momentum m = 1, and thus produces states that are not accessible with plane waves or even with TL with a dominant electric interaction. Combining these findings with previous results for beams with other values of OAM and SAM, we find that TL beams are beneficial for obtaining full control of the exciton states in a QD.
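A quick angular-momentum bookkeeping check of this result (our own consistency argument; the balance is implied but not spelled out above): each absorbed photon of the antiparallel beam carries a net angular momentum of (ℓ + σ)ℏ along z, so the transfer must satisfy

(ℓ + σ) = ∆m + J_z  ⇒  2 + (−1) = 1 + 0.

The single unit of angular momentum is thus taken up entirely by the envelope (∆m = 1) while the band+spin part of the exciton stays at J_z = 0, in contrast to plane-wave excitation, where the photon's ±ℏ goes into the Bloch part (J_z = ±1) and the envelope is left unchanged.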
2,415
2017-08-16T00:00:00.000
[ "Physics" ]
How Does High-Speed Rail Affect Tourism? A Case Study of the Capital Region of China

The objective of this study is to analyze the tourism spatial interaction under two scenarios, i.e., the actual one with the current high-speed rail (HSR) network and a future one with an extension of the HSR network, considering as a case study the Capital region of China. The impact of HSR on the spatial distribution characteristics is investigated. The main outcome of this study is that the extension of the HSR network in the future scenario will significantly increase the total tourism spatial interaction and will reduce the spatial difference. What this paper adds to the current knowledge about HSR and tourism is that smaller cities, such as Tangshan, Zhangjiakou, and Chengde, connected via HSR to core cities will benefit the most from the HSR network's operation. Those cities should take the HSR network as a development opportunity to enhance their attractiveness and strengthen their marketing to achieve sustainable tourism competitiveness. The study found that effects can also be registered in larger cities, but they are smaller. So, larger cities, such as Beijing and Tianjin, should reassess their attractiveness to the tourist market and take corresponding countermeasures. The findings of this study can be used by tourism management authorities to develop short-term and long-term plans.

Introduction

High-speed rail (HSR) and tourism are closely related economic activities, because improved mobility is perceived to facilitate behavioral changes in tourists [1]. HSR has become a hot topic for researchers in related fields, such as tourism and transportation, since more and more research papers on the topics of HSR networks and tourism are being published. Previous studies have examined the impact of HSR on tourism development from different angles, such as the national spatial structure [2], medium-sized regional cities in France and Spain [3-5], metropolitan areas, including Madrid, Paris, and Rome [6-8], and the effect of medium- and long-distance HSR on the areas along the HSR corridor [9-12], demonstrating that HSR networks have a "corridor" effect on the tourism spatial distribution, help medium-sized cities to develop urban tourism, facilitate the development of small suburban cities as special subcenters of the metropolitan area, and have a significant effect on tourists' choice to visit other smaller cities close to a metropolitan area. Studies have shown that an HSR system has no influence on the choice of Madrid and Rome as tourist destinations, but does have an effect on the choice of Paris.

Delaplace [13] pointed out that a city's size appears to be an important determinant of the impacts of HSR on tourism. Testing this assertion requires a region with cities of different sizes as an analysis case. Case studies have been carried out in France, Spain, and China, as shown above; however, none of them examines the impact of an HSR network that connects cities of all different sizes in an area on the tourism spatial structure. An HSR network that connects multilevel cities in one area with several lines and nodes fosters cooperation and competition among them and makes the area a destination network. Such HSR and destination networks will be found in more countries in the future with the development of HSR projects. For this reason, understanding the impacts of HSR networks on a destination network with multilevel cities is very important, both theoretically and practically.
This paper contributes to the existing literature on HSR and tourism management by filling this gap and developing an analysis of multilevel cities in one area with an HSR network. Specifically, it investigates HSR's influence on the tourism spatial structure of the Capital area of China through the specification of a spatial interaction model. The Capital area of China has been chosen because of the different sizes of the cities, and because some of them are served by HSR and others are not. Specifically, this study attempts to understand the impact that HSR can have on cities of different sizes. The spatial interaction model was chosen for two main reasons. First, distance (the actual distance or travel time) is an important independent variable in the model. HSR will change the travel time in a region, and the model can examine the HSR network's influence. Secondly, a city's size is also an independent variable in this model and can be measured by population. This study's objective is to determine whether an HSR network exerts different influences on cities of different sizes in one region. We consider the differences in spatial interaction between two scenarios, i.e., one with the existing HSR network and the other with the planned HSR network. We also investigate the question of how destination management organizations in different cities should respond to HSR impacts.

Data were collected from the official statistical reports on HSR and tourism. The model developed by Wilson [14,15] was adapted and used to estimate the tourism spatial interaction (TSI). A Coefficient of Variation (CV) was used to compare the changes in tourism at the regional level in the two scenarios.

The paper is organized as follows. The second section reviews the international literature. The third section presents the TSI model. The fourth section describes the impacts of HSR networks in 2017 and 2022 for the case study of China. The fifth and final section reports the findings and conclusions.
Literature Review

The international literature has given an increasing amount of attention to the important role of HSR in tourism development. To date, studies have proved that HSR investment has a positive influence on tourism. The operation of an HSR network improves access to tourist destinations [4,9,10,16-19]. An HSR service modifies the link between tourists and accessibility [13-18] because of the decrease in traveling time. There is a general consensus that an increase in a tourist destination's accessibility usually leads to an increase in demand for tourism [20], and is a critical factor in the overall number of tourists a destination receives. A survey developed in China showed that passengers with a higher education and income level were more likely to choose high-speed rail (HSR) as their mode of long-distance tour transportation [21]. As a result, many studies report that one consequence of HSR is an increase in tourists [5,16,17,22-24]. The tourist market in France, Spain, China, and Japan has grown as a consequence of HSR [5,9,10,18,22]. The operation of the HSR network between Barcelona (Spain) and Perpignan (France) has led to an enlarging of the day-trip market of Barcelona, with the number of excursionists from Languedoc-Roussillon amounting to 723,200, and the main travel purposes of the excursionists being: shopping (60%), holidays (20%), and visits to family or friends (9%) [18]. A case study of China shows that HSR can increase the amount of international tourism [23,25]. A study from Spain demonstrates that, while ridership and length are not really significant for international tourism, HSR plays a clear and positive role in the increase in both the number of foreign tourist arrivals and revenues, and is highly significant with a positive sign, but neither ridership nor the length and existence of an HSR network are significant for domestic tourism [26]. HSR systems have a positive impact on the promotion of urban tourism and foster business tourism development. Bazin et al. [17] performed a qualitative analysis to assess the impact of HSR on urban and business tourism in French cities close to Paris. They showed that this kind of tourism might be fostered by HSR for reasons of avoiding driving fatigue, traffic congestion, and parking difficulties, cheaper travel, and a more comfortable travelling experience.
However, in some cases, the impact of HSR on the growth of tourism has not been as positive as expected [27,28]. Intercontinental tourists are less likely to be affected by HSR because they normally travel by plane, although HSR may increase international tourists' visits to nearby countries and cities once they reach their destination [8,9]. Research has found that HSR may cause a reduction in demand for overnight stays [29] and a reduction in the length of stay [16,17,30]. Survey data between 1980 and 1985 on the Paris-Lyon line, which opened in 1981, show a decrease in the length of stay in Dijon, Lyon, and Valence Perrache [29]. Additionally, a study of Le Mans in France showed that the average length of stay decreased from 2-3 days to 1.5 days a few years after the arrival of the TGV service. Among the different administrative districts connected by the same HSR line, some show a substantial correlation with the HSR line's opening and some do not [12]. The phenomenon of the "double-edged sword" is also important for the analysis of the impact of HSR on tourism development in Spain. The substitution effect of HSR on aviation has reduced the number of incoming air passengers in Madrid, which has indirectly brought about negative effects on inbound tourism. On the other hand, it has promoted domestic tourism; however, the effect is very weak. Therefore, the network effect of HSR is not positive for the tourism market [27], and the characteristics of tourists that choose HSR in different tourist cities are completely different [31]. From a regional tourism development perspective, the introduction of an HSR service has the potential to generate a number of effects, including: a structuring effect [18]; agglomeration [18]; a corridor effect [2]; and centralizing effects [32]. A "structuring effect" occurs where the introduction of a new transport system assists local actors to maximize the utility of pre-existing structures and relationships or encourages policy-makers to adopt complementary policies that utilize HSR as a change agent [18]. For example, the introduction of HSR may lead to a reduction in travel costs that may induce spatial changes as businesses, such as hotels, relocate to take advantage of the new HSR service. In this regard, Masson and Petiot [18] argue that, in instances where transport costs fall, an agglomeration effect can be observed where 'firms agglomerate at a larger scale into a smaller number of locations servicing an extensive hinterland'. Moreover, HSR may generate changes in the spatial distribution of an industry [33]. Plassard [32] observed that a centralizing effect occurred in France, where Paris has become the center of the star-shaped TGV network. In China's case, the opening of high-speed railways has improved the inter-regional accessibility balance and increased accessibility from high-speed railway sites to non-site cities, while the national transport accessibility level still forms a new pattern of "corridors" and "islands" centered on high-speed rail lines and sites, which might lead to a new spatial inequity [34].

In previous studies, there is evidence for both a positive and a negative influence from HSR on tourism. Gutiérrez et al.
[35] reported that the characteristics of tourists choosing HSR in different tourist cities are completely different. The literature shows that a city's size, a city's location in the HSR network, and the location of HSR stations might be the factors that determine the impacts of HSR on tourism. HSR may have a centralization effect, whereby economic activity is drawn towards the big city nodes of the rail network, usually at the expense of smaller cities and towns [36]. Ureña et al. (2009) argue that large intermediate cities served by HSR, such as Lille, Zaragoza, or even Cordoba, are likely to see a growth in urban and business tourism [5]. An analysis of the TGV's influence on the small- and medium-sized cities in Northern, Atlantic, and Eastern Europe showed that, even with the improvement in accessibility, the increase in tourists due to the TGV is minimal [31]. Research on the two medium-sized and intermediate cities of Cuenca and Toledo in Spain showed that infrastructure is necessary but not sufficient for tourism, and that its effectiveness depends on many accompanying conditions, such as the collective strategies of local stakeholders and HSR stations being located peripherally to cities [37]. Studies in France have shown that there is no tourism development growth in smaller cities [37]. A database of 124 municipalities covering the 2005-2012 period was built [28] to study the effects of the introduction of new HSR corridors on the number of visitors and their total and average stay at several end-line and intermediate cities, as compared to similar counterparts without such infrastructure, and it was found that these effects are generally extremely weak or restricted to larger cities [27]. Intermediate cities only benefit from HSR if they have well-developed tourism amenities [10].

A summary of studies related to cities' size and location in the HSR network is given in Table 1.

Research Context

The Capital area in China (named the Jingjinji Area, see Figure 1) has experienced rapid growth in recent decades, and is one of the most highly urbanized regions of China. The Jingjinji Area has 2 large cities (Beijing and Tianjin), 1 capital city (Shijiazhuang City in the Hebei Province), and 10 other cities (Chengde, Zhangjiakou, Qinhuangdao, Tangshan, Langfang, Baoding, Cangzhou, Hengshui, Xingtai, and Handan). This region has always been an important tourist attraction in China. There are 10 world heritage sites in this area, accounting for nearly 20% of the national total. In 2017, the area received a total of 8.9 million inbound tourists and 1.05 billion domestic tourists.
Because of the growth in economic activity in this area and the population density, traffic congestion has become a serious problem. At present, six HSR lines connect 11 cities in this area, and there will be 10 HSR lines by the end of 2021 that link all 13 cities. The HSR characteristics are shown in Table 2 and Figure 1. An HSR network with 10 lines and 13 nodes that covers a distance of 2272 km will be developed by the end of the year 2021. Two 0.5-h commuting circles around Beijing, Tianjin, Shijiazhuang, and Baoding will undoubtedly be able to foster tourism development. When major investments in the transportation system are carried out, the intensity and direction of the interaction of tourism between cities will also undergo major changes. Therefore, strategies should be adopted to support the regional tourism industry.

In order to provide a measure of the size of cities, population has been chosen as the indicator. In an HSR network, there are three types of city: core, intermediate, and end-line. Based on the map of the HSR network in the Capital Area of China, we define cities with over five lines as core cities, cities with more than two lines as intermediate cities, and cities with only one line as end-line cities. The size and category of each city are reported in Table 3.

Based on the literature, we adopted six hypotheses in the context of our case study:

Hypothesis 1 (H1). The HSR network has a positive effect on large and core cities.

Hypothesis 2 (H2). The HSR network has a positive effect on large and intermediate cities.

Hypothesis 3 (H3). The HSR network has a weak and insufficient effect on medium and intermediate cities.

Hypothesis 4 (H4). The HSR network has a weak and insufficient effect on medium and end-line cities.

Hypothesis 5 (H5). The HSR network has a weak and insufficient effect on small and intermediate cities.

Hypothesis 6 (H6). The HSR network has a weak and insufficient effect on small and end-line cities.
Research Design

Spatial interaction is the movement of people, freight, or information between an origin and a destination, and the gravity, potential, and retail models are the main models in the current literature on spatial interaction analysis [38]. Tourism spatial interaction (TSI) is a key feature of a tourism industry's development, where the tourism flow T_ij [39-44] between two regions (i and j) is a function of the characteristics of the regions themselves (i.e., of an origin i and a destination j); X_i and X_j are the attraction factors and/or the disutility factors, such as distance, where the disutility factors act as a proxy of the transportation costs. Gravity models are used to describe the TSI [45-48]. Marrocu and Paci [49] reported results from an econometric study of tourism flows for 107 Italian provinces based on an origin-destination (OD) spatial interaction model. Patuelli et al. [50] chose a gravity model to examine the effects of World Heritage Sites on domestic tourism in Italy. Morley et al. [51] proposed an improved gravity model for tourism flows that considers individual utility theory and showed the suitability of the gravity model in the context of tourism.

The progress in transportation technology that HSR represents has affected tourism interactions by improving the connections between regional tourist destinations and therefore their accessibility, which in turn has reshaped the spatial structure of regional tourism. The use of spatial interaction models to examine HSR's influence on tourism development has not yet been applied in the tourism literature. In this paper, we have chosen a revised gravity model to explain and measure the impacts of HSR on TSI. Li et al. [14] present a model based on Alan Wilson's model [15] with three important coefficients and results based on data from China. The model formulation used here is:

T_jk = K P_j C_j^α A_k exp(−β r_jk),  (1)

where T_jk is the tourism spatial interaction from origin j to destination k, and the accessibility from origin j to a set of destinations k is obtained by summing these interactions; P_j C_j^α is the tourism potential for destination k from origin j; P_j is the population and C_j is the average income in a given origin j; A_k is the attractiveness of destination k; r_jk is the distance (or travel time) between j and k; α is the income coefficient; β is a coefficient that represents the significance of distance or a travel cost; and K is a constant.

Since the paper's objective is to examine the influence of HSR on TSI, we would have preferred the increase in tourist trips to be an extra term on the P_j C_j factor, e.g., accessibility to tourist destinations raised to a power, and then to use an origin-constrained model. In our case, the model has to be unconstrained to allow for an increase in trips as a result of the reduction in r_jk. So, K, P, and C in Equation (1) are all assumed to be fixed between times. Li et al. [14] used historical data to estimate the three constants. The resulting value of α, the income coefficient in China, is 0.64. β was estimated using the "integral method on tourist amount" (IMTA) and was found to be equal to 0.0337. K was estimated using data from the whole country, with the result of 1.89.

Therefore, it follows that the equation for the Capital Area of China, using these results, is:

T_jk = 1.89 P_j C_j^0.64 A_k exp(−0.0337 r_jk).  (2)

Because K, P, C, and A are constant, the growth between time periods (1) and (2) is:

T_jk^(2) / T_jk^(1) = exp(−β r_jk^(2)) / exp(−β r_jk^(1)) = exp[−β (r_jk^(2) − r_jk^(1))].  (4)

In China, the Tourism Administration evaluates tourist attractions and ranks them using different levels. There are five levels of attraction: AAAAA, AAAA, AAA, AA, and A.
The more As an attraction has, the higher its level of tourism development. A_k has been measured using this rank. The population, income, and number of A-level Scenic Spots can be found in the Annual Statistical Bulletin of Economic and Social Development for each city. The shortest travel time was chosen to measure the distance between j and k, since physical distance has been replaced by temporal distance because of the progress in transportation technology [52] and 'the shrinking continent' [53]. The travel time between two cities can be found on the website of 12306, which is the official ticketing website of the Chinese railway.

A comparison was performed between the TSI indices in the two scenarios, i.e., the current one with the existing HSR network and the future one in which the extension of the HSR network is taken into account. Considering that the HSR network will be completed by the end of 2021, we have decided to choose the year 2022 for the future scenario. The predicted travel time that was proposed in the project has been considered. The population, the income, and the number of Scenic Spots cannot simply be linearly extrapolated to the year 2022; therefore, we have assumed that there will be no significant changes in the coming years.

The Coefficient of Variation (CV) is a standardized measure of the dispersion of a probability distribution or a frequency distribution, and is used in this paper to compare the changes in regional tourism levels in the two scenarios. This index is expressed as follows:

CV = σ / µ,  (5)

where σ is the standard deviation of the TSI and µ is the mean value. The larger the CV value is, the greater the difference between the TSIs of the cities, which also means that a greater gap exists in the whole area. Microsoft Excel 2013 was used to calculate the TSIs and CVs.

Decrease in Travel Time

The construction of the HSR network has reduced the overall travel time in the Capital Area. In 2017, the total travel time was 557 h and 21 min, and in 2022 it will be reduced to 337 h and 17 min, a reduction of nearly 40% (see Table 4). The travel time changes most significantly in Chengde and Zhangjiakou, where the decreases are 54.5 h and 46 h, respectively.

TSI Variation

With the extension of the HSR network, the TSI values across the region increase. In 2017, the total TSI was 151,246.63; in 2022, the total value will increase to 181,954.35, a change of 20.3%. The increase in TSI means that the tourism industry of the cities in this area will grow. Beijing, Tianjin, and Shijiazhuang are the top three cities in both the origin and destination rankings in both 2017 and 2022. From the origin perspective, Beijing will become the top origin city in the Capital Area in 2022 thanks to the extension of the HSR network, replacing Tianjin, the top origin in 2017. The ranks of Tangshan, Zhangjiakou, and Chengde will increase by 3, 3, and 2 positions, respectively, while those of Qinhuangdao and Hengshui will decrease by 2. From the destination perspective, the ranks of Tangshan, Zhangjiakou, and Chengde will increase by 3, 5, and 4 positions, respectively, while those of Xingtai, Qinhuangdao, and Cangzhou will decrease by 2. Hengshui will become the last city in both the origin and destination rankings (see Table 4).
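The TSI and CV figures reported here and in Table 4 follow directly from Equations (2) and (5). The following is a minimal sketch of that computation; the city names, populations, incomes, attraction scores, and travel times are illustrative placeholders, not the paper's actual dataset.

```python
import math

# Constants of the revised gravity model as estimated by Li et al. [14]:
# K (scale), alpha (income exponent), beta (travel-time decay).
K, ALPHA, BETA = 1.89, 0.64, 0.0337

def tsi(pop_j, income_j, attract_k, time_jk):
    """Tourism spatial interaction from origin j to destination k, Eq. (2)."""
    return K * pop_j * income_j ** ALPHA * attract_k * math.exp(-BETA * time_jk)

def cv(values):
    """Coefficient of Variation, Eq. (5): standard deviation over mean."""
    mean = sum(values) / len(values)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return std / mean

# Illustrative placeholder data: (population in millions, income, A-level score).
cities = {"CityA": (21.7, 5.7, 30), "CityB": (15.6, 3.7, 18), "CityC": (4.7, 2.4, 12)}

# Shortest rail travel times in minutes for the two scenarios (placeholders).
# Only one direction per pair is shown; the paper's TSI matrix is directional.
times_2017 = {("CityA", "CityB"): 30, ("CityA", "CityC"): 270, ("CityB", "CityC"): 240}
times_2022 = {("CityA", "CityB"): 30, ("CityA", "CityC"): 60, ("CityB", "CityC"): 90}

for year, times in (("2017", times_2017), ("2022", times_2022)):
    flows = []
    for (j, k), t in times.items():
        pop_j, income_j, _ = cities[j]
        _, _, attract_k = cities[k]
        flows.append(tsi(pop_j, income_j, attract_k, t))
    print(year, "total TSI = %.2f" % sum(flows), "CV = %.2f" % cv(flows))
```

Because the constants K, P, C, and A drop out of the ratio in Equation (4), the scenario comparison for a single city pair reduces to `math.exp(-BETA * (t_2022 - t_2017))`.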
According to Table 4, H1 is partly supported, because we observed a large amount of TSI growth in Beijing but only a small amount of growth in Tianjin. H2 is not supported, because Baoding and Shijiazhuang will have weak TSI growth in 2022. H3 is partly supported, because we observed different effects in the two cities: Tangshan has a huge amount of growth, while Xingtai has little. H4 and H5 are supported by the small changes in the TSIs of Handan, Cangzhou, and Langfang. We find mixed evidence for H6. Hengshui shows a small change in its TSIs, which is positive evidence for H6. Zhangjiakou and Chengde show a huge amount of TSI growth, which is negative evidence for H6. Qinhuangdao shows some growth, which is positive evidence for H6.

Cities' Roles

We can define a city's role as an origin or a destination by comparing its TSI values. If the TSI value as an origin is larger than the TSI value as a destination, then the city acts mainly as an origin; otherwise, it acts mainly as a destination. In this context, the four cities of Beijing, Tianjin, Chengde, and Qinhuangdao received more interactions as destinations than as origins in 2017 (see Table 5), indicating that they played more of a destination role in the Capital Area. The other nine cities played more of an origin role in 2017. With the HSR network extension by 2022, Zhangjiakou's role as a destination becomes stronger than its role as an origin. At the same time, the TSI value generated by Qinhuangdao as an origin becomes larger than the TSI value generated by it as a destination. It has been shown that the HSR network changes the role of Zhangjiakou from an origin into a destination and the role of Qinhuangdao from a destination into an origin, strengthens the roles of Beijing and Chengde as destinations, and highlights the role of Tangshan as an origin city, since the corresponding difference in 2022 will increase significantly. The data in Table 5 support H2, H4, and H5, and partly support H1 and H3. Table 5 provides negative evidence for H6. Of the four small and end-line cities, Hengshui is the only one that experiences a weak effect from the HSR network. The other three cities (Qinhuangdao, Zhangjiakou, and Chengde) experience a huge effect from the expansion of the HSR network in this area.

A Downward Trend for the HSR Network

After the construction of the HSR network in the year 2022, the CV of the TSIs for this area shows a decreasing trend. Indeed, the CV values will decrease from 0.99 in 2017 to 0.90 in 2022 from the origin perspective, and from 1.76 in 2017 to 1.72 in 2022 from the destination perspective (see Table 4). This downward trend means that, although there are still spatial differences in the TSIs between the cities in this area, these differences are gradually decreasing, and the area is developing in a more balanced and coordinated direction.

We should pay more attention to Tangshan and Chengde, since their CV values in 2022 will increase from both the origin and destination perspectives, which means that the TSI values between these two cities and the other cities will have a more unbalanced distribution under the HSR network. At the same time, the HSR network will balance the spatial differences from Beijing and Zhangjiakou to other cities, since the CV values of those two cities will decrease significantly between 2017 and 2022.
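Because T_jk depends exponentially on travel time, Equation (4) implies that even moderate travel-time savings produce dramatic interaction growth, which is worth keeping in mind for the pairwise changes discussed in the next subsection. As a rough illustration with round, hypothetical numbers (ours, not the paper's timetable data): a travel-time reduction of 210 min for a city pair yields a growth factor of exp(0.0337 × 210) ≈ 1.2 × 10^3, i.e., roughly a thousand-fold increase, while a 137-min saving yields exp(0.0337 × 137) ≈ 10^2, a hundred-fold increase.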
Tourist Flow Change after the HSR Network Extension in Both Intensity and Direction

We can learn about the changes in tourist flow intensity between two cities by comparing the TSIs between them in 2017 and 2022 (see Figure 2 and Table 6). Among all of these pairs of cities, the TSI values in both directions between Beijing and Chengde in 2022 increase by 1000 times w.r.t. the year 2017. The TSIs in both directions between Tianjin and Zhangjiakou, in both directions between Tianjin and Chengde, in the single direction from Tangshan to Chengde, in both directions between Zhangjiakou and Tangshan, in both directions between Zhangjiakou and Baoding, and in the single direction from Langfang to Chengde increase by more than 100 times. This change is due to the extension of the HSR network.

We can learn which cities constitute another city's origins from the TSI matrix (Table 6). Consider Chengde as an example. In 2017, Qinhuangdao, Beijing, Tangshan, and Cangzhou were the four origins for Chengde. In 2022, all of the other 12 cities become origins for Chengde, with Beijing, Tianjin, Tangshan, Langfang, and Shijiazhuang being the top five origins. For Beijing in 2022, Tangshan becomes the main origin, followed by Tianjin. In Zhangjiakou's origin list, there are only four cities in 2017: Beijing, Langfang, Tianjin, and Baoding. In 2022, all of the other 12 cities are on the list, with the top five being Beijing, Tianjin, Tangshan, Langfang, and Baoding.

Discussion and Conclusions

The objective of this study was to examine how the HSR network generates significant impacts on the tourism spatial interaction between cities. Data were based on the economic and tourism development of the 13 cities in the Capital Area of China. The findings indicate that the impact of HSR can differ depending on the size of the destination city. This aspect has not been developed in the previous international literature. The article has implications for HSR and tourism development research, as well as for the tourism development strategies employed by multilevel city destinations connected to an HSR network.

With an HSR network, travel times in the area shrink significantly, and, consequently, the TSI values increase everywhere. We found that small cities, and even end-line cities, will benefit significantly from HSR. These results contrast in part with the results of Bazin et al. [31], who, after observing tourism development in France, pointed out that there was no tourism development growth in smaller cities. Another aspect that our results highlight is that small cities connected directly to core cities benefit more from the HSR network than those connected to non-core cities.

Chengde will be connected to the core city of Beijing by the Jing-Cheng HSR, and will have a more convenient connection with the other cities. The TSI values of Chengde in 2022 increase by 500 times w.r.t. 2017. Zhangjiakou will be connected to Beijing by the Jing-Zhang HSR, and its TSI value in 2022 increases 100 times w.r.t. 2017. Qinhuangdao is less influenced by the HSR network, since it is connected directly to Tangshan, which is a medium-sized city. Hengshui is a small and end-line city that does not benefit at all from the HSR network, mainly because of its link with Shijiazhuang, which is a non-core city.

The findings of this study partly support the work of Albalate et al.
[54], where the impacts of HSR were found to be limited for larger cities. Indeed, this study shows that larger cities, such as Beijing, Tianjin, Shijiazhuang, and Baoding, will not experience significant changes after the HSR network's extension. All of the CV values of the larger cities decrease. For the medium-sized cities, this paper extends the work of Bazin et al. [31] by examining the effect of HSR on different medium-sized cities, and partly supports the work of Verela et al. [37], in which HSR was shown to have a weak effect on medium-sized and intermediate cities.

We have also found that the HSR network will change the role or characteristics of the cities, which has not been examined in the international literature to date. The HSR network in the Capital Area of China will change the roles of both Zhangjiakou and Qinhuangdao. This result can be deduced from the difference between the TSI value as an origin and the TSI value as a destination. In the Zhangjiakou case, the city was an origin city in 2017, since its TSI value as an origin was larger than its TSI value as a destination. In 2022, when the HSR network will have been extended, its TSI value as a destination becomes larger than that as an origin. Therefore, Zhangjiakou will become a destination with the HSR network extension. The situation is the opposite in the case of Qinhuangdao. Specifically, in 2017, its TSI value as a destination is higher than that as an origin, while in 2022 the situation is reversed and Qinhuangdao will become an origin. This study's findings also suggest that an HSR network has a positive impact on tourist destination choice [6,8,55-57]. We found that an HSR network will increase the number of destination choices, because in our study we observed that most of the cities would have a greater number of destinations after the implementation of the HSR network.

Plassard observed that a time/space compression effect might generate a centralizing effect, as occurred in France, where Paris has become the center of the star-shaped TGV network [32]. Therefore, HSR may reinforce an agglomeration effect on the tourism industry [18]. In our findings, the CV values of the whole Capital Area become smaller from both the origin and destination perspectives in 2022 w.r.t. 2017, meaning that the differences between the cities in the Capital Area will decrease in 2022 after the HSR network's extension. The difference in the results between the existing literature and our paper lies in the different case study. The Plassard case study considered the HSR network around Paris. In our study, the Capital Area's HSR has a reticulated structure, and this HSR network helps the whole area to develop in a more coordinated manner.
The results of this study indicate that the HSR network will strengthen the tourism spatial interactions in the Capital Area of China, which means that there will be closer economic and tourism connections between the cities. From a demand perspective, destinations have been able to identify changes in tourist preferences after the introduction of the HSR network, such as staying for a shorter duration at the destination [17,58,59], more business tourism development [17,60], and a higher probability of returning to the destination [16]. Destinations should launch strategies to respond to such changes. Therefore, tourism products that focus more on special cuisine, shopping, and entertainment should be explored. In response to the business tourism opportunities, there might be more business hotels and conference centers at the destinations.

Based on this research, it is clear that the destination marketing and management organizations of the 13 cities in the Capital Area will need to encourage investments in order to promote overnight stays; this is especially true for the local authorities in the three cities of Zhangjiakou, Qinhuangdao, and Hengshui. Zhangjiakou will become the main destination in this area, and will receive a flow of tourists from Beijing, Tianjin, and Tangshan, which are the main origin cities in this area because of their high populations and strong economic development. Zhangjiakou will host the 2022 Winter Olympics, and the HSR line between Beijing and Zhangjiakou has been built for this event. Zhangjiakou has excellent ice and snow sporting attractions; however, the city still needs to think about a diversified tourism product plan to achieve sustainable competitiveness after the Olympics. Qinhuangdao will become an origin city in 2022 if no extra effort is made to market and promote it to other cities. Qinhuangdao should invest more in the development of attractions, since it is a city that is located by the sea and has a long history of seaside resorts in the North of China. A more urgent evolution is required for Hengshui, since its TSI values from both the origin and destination perspectives will be the smallest in the whole area, which means that it will have the lowest tourism connection with the other cities. In order to develop tourism in Hengshui, the tourism authority should encourage more investment in attractions; otherwise, the city will be excluded from the HSR era. The second implication from this paper is that the transportation systems in the Capital Area should be enhanced to take advantage of new information technology solutions and algorithms [61] to provide a more solid foundation for regional tourism development.

The model adopted in this paper reflects the impact of the increase in trips as a function of accessibility. The obtained results are based on an unconstrained model. The case study reported in this paper deals with six differently sized categories of cities, and more cities with other characteristics, such as their location in the HSR network, need to be taken into consideration in the context of large and end-line cities, medium-sized and core cities, and small and core cities in future work.

Figure 1. Existing and planned high-speed rail networks in the Capital Area of China.
Figure 2. The TSIs between the pairs of cities where changes were found. (a) TSIs in 2017; (b) TSIs in 2022.
Table 2. HSR in the Capital Area of China.
Table 3. The size and category of cities in the study area. Source: the official website of the National Bureau of Statistics of China (http://data.stats.gov.cn). * The unit of population is millions.
Table 4. Travel times between the cities. * The unit of travel time in this table is minutes.
Table 5. Each city's role based on TSIs.
Table 6. The TSI matrix in the Capital Area in 2017 and 2022. An entry such as "1107 5771" means that the TSI in 2017 is 1107 and the TSI in 2022 is 5771; a single entry such as "316" means that the TSIs in 2017 and 2022 are both 316.
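The conclusions note that the results come from an unconstrained model relating trips to accessibility. The excerpt does not reproduce the TSI formula itself; studies of this kind commonly use an unconstrained gravity specification, with city masses (e.g., population or tourism receipts as in Table 3) divided by a power of travel time (as in Table 4). The sketch below is therefore an assumption-labeled stand-in for the paper's model, not its verified form; the function name, `k`, and `beta` are all hypothetical.

```python
import numpy as np

def gravity_tsi(mass: np.ndarray, travel_time: np.ndarray,
                k: float = 1.0, beta: float = 2.0) -> np.ndarray:
    """Unconstrained gravity model: TSI_ij = k * M_i * M_j / t_ij**beta.
    `mass` is a length-n vector of city masses; `travel_time` an n x n
    matrix of pairwise times. The functional form and beta=2 are
    assumptions made for illustration."""
    t = np.where(travel_time > 0, travel_time, np.inf)  # no self-flows
    return k * np.outer(mass, mass) / t ** beta
```

Under this specification, shorter HSR travel times raise the TSI for every pair, and the gain is largest where times shrink the most, which is consistent with the large fold-changes reported for newly connected small cities.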
8,362.2
2019-01-17T00:00:00.000
[ "Geography", "Economics", "Business" ]
The rise of predation in Jurassic lampreys Lampreys, one of two living lineages of jawless vertebrates, are always intriguing for their feeding behavior via the toothed suctorial disc and life cycle comprising the ammocoete, metamorphic, and adult stages. However, they left a meager fossil record, and their evolutionary history remains elusive. Here we report two superbly preserved large lampreys from the Middle-Late Jurassic Yanliao Biota of North China and update the interpretations of the evolution of the feeding apparatus, the life cycle, and the historic biogeography of the group. These fossil lampreys' extensively toothed feeding apparatus differs radically from that of their Paleozoic kin but surprisingly resembles the Southern Hemisphere pouched lamprey, which foreshadows an ancestral flesh-eating habit for modern lampreys. Based on the revised petromyzontiform timetree, we argued that modern lampreys' three-staged life cycle might not be established until the Jurassic, when they evolved enhanced feeding structures, increased body size and encountered more penetrable host groups. Our study also places modern lampreys' origin in the Southern Hemisphere of the Late Cretaceous, followed by an early Cenozoic anti-tropical disjunction in distribution, hence challenging the conventional wisdom of their biogeographical pattern arising from a post-Cretaceous origin in the Northern Hemisphere or the Pangean fragmentation in the Early Mesozoic.

Reporting summary

Data collection: The data used in this study followed that in the previous relevant study, plus a dozen newly defined morphological characters. Statistical analyses were performed using the Bayesian phylogenetic inference software MrBayes (https://github.com/NBISweden/MrBayes). We describe the models, priors and Markov chain Monte Carlo (MCMC) settings for the analysis in the Methods section in the main text.

Data availability: The data underlying this article are available in its online Supplementary Information and Supplementary Data 1-8. The data matrix was organized by using Mesquite and exported as a .nex file for analyses.
Ecological, evolutionary & environmental sciences study design

All studies must disclose on these points even when the disclosure is negative.

Study description: This study described two new fossil lampreys from the Jurassic of North China. It presented new interpretations of the evolution of the feeding apparatus, body size, life cycle pattern, and the historic biogeography of lampreys.
Research sample: Four fossil lampreys (jawless vertebrates), two of Yanliaomyzon occisor and two of Yanliaomyzon ingensdentes. We chose them because they preserve structures of the feeding apparatus and the intestinal contents, which can uncover the feeding biology of these fossil lampreys and shed new light on the evolution of lampreys. Colleagues at the IVPP conducted fieldwork in the fossil localities, collected the fossil materials, and finally delivered them to the first author of this study.

Sampling strategy: The sample includes the details of the feeding apparatus and the postcranial part of the lampreys, and is therefore sufficient.

Data collection: The morphological data were collected via careful observation of the specimens under Zeiss Stemi 305 and Nikon 745T microscopes and codified as discrete characters in the matrix by Feixiang Wu, who then delivered the data to Chi Zhang for analyses.

Data exclusions: No data were excluded.

Replication: The observation of the fossils can be performed by other colleagues. The phylogenetic analyses can be replicated by using the data we provided.

Randomization: No randomization procedure was applicable because it was not necessary for the type of study we conducted; it is a study about fossils, which are always incomplete and irreplaceable.

Blinding: Blinding was not relevant for data collection, as samples were selected based on their location within the stratigraphy. Blinding was also not relevant for downstream analysis, as previously established analysis pipelines were used for the processing.

Field conditions: Temperate region with plains and small hills in North China. The annual temperature is 3 °C to 16 °C. Fossil localities are near towns and cities.

Location: Nanshimen (40°31'52"N, 119°29'11"E), Gangou Town, Qinglong County, Hebei Province; Toudaoyingzi (40°46'N, 119°26'E), Jianchang County, Liaoning Province; Wubaiding section (41°22'9"N, 119°23'38"E), Reshuitang Town, Lingyuan City, Liaoning Province, China.

Access & import/export: Prof. Xiaoling Wang kept some of the specimens and later delivered the holotype of Yanliaomyzon occisor to Professor Mee-mann Chang (IVPP). In 2019 and 2020 they handed over these lamprey materials to the first author of this work. In May 2020, the first author, together with Dr. Haibing Wang, Dr. Zhiqiang Yu, and Wei Zhou (IVPP), drove from Beijing to the fossil localities to check the lithological, stratigraphical and sedimentary setting of the relevant sections. The palaeontologists of the IVPP are free to go there for fieldwork and collect specimens; no special permission is necessary for scientific exploration in this region. No import or export is involved in this study.

Disturbance: No disturbances were caused during the study.
Specimen deposition: The specimens of Yanliaomyzon occisor are from Nanshimen Village, Gangou Town, Qinglong County, Hebei Province, and Toudaoyingzi Town, Jianchang County, Liaoning Province. Those of Yanliaomyzon ingensdentes are from the Wubaiding section, Reshuitang Town, Lingyuan City, Liaoning Province, China. In these localities, researchers are free to conduct field work. People who want to examine the specimens can get access by contacting the corresponding authors.

Field work, collection and transport: The study involved field work. In 2008 and 2009, Prof. Xiaolin Wang, and in 2014, Min Wang, of the Institute of Vertebrate Paleontology and Paleoanthropology (IVPP), CAS, conducted fieldwork in the fossil localities and collected the fossil materials reported herein. Data collection from the fossils started in May 2020. The fossils are from the fossil localities of Nanshimen Village, Gangou Town, Qinglong County, Hebei Province, and Toudaoyingzi, Jianchang County, Liaoning Province, China.
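The data statement above mentions a character matrix organized in Mesquite and exported as a .nex file for MrBayes. As a small illustration (not part of the authors' pipeline), the sketch below performs a basic sanity check of such a file, counting taxa and verifying that every row has the declared number of characters. It assumes a simple NEXUS layout with an `nchar` declaration and a MATRIX block ending in a semicolon, one taxon per line; real files can be more complex.

```python
import re

def check_nexus_matrix(path: str) -> None:
    """Rudimentary consistency check for a NEXUS morphological matrix.
    Assumes one taxon per line inside MATRIX ... ; (a common layout)."""
    text = open(path, encoding="utf-8").read()
    nchar = int(re.search(r"nchar\s*=\s*(\d+)", text, re.I).group(1))
    block = re.search(r"matrix(.*?);", text, re.I | re.S).group(1)
    rows = [ln.split() for ln in block.strip().splitlines() if ln.split()]
    for taxon, *chars in rows:
        coding = "".join(chars)
        # Polymorphic states such as {01} count as one character.
        n = len(re.findall(r"\{[^}]*\}|[^{}]", coding))
        assert n == nchar, f"{taxon}: {n} characters, expected {nchar}"
    print(f"{len(rows)} taxa, {nchar} characters each: OK")
```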
2,595.6
2023-10-31T00:00:00.000
[ "Biology", "Environmental Science", "Geology" ]
Status and perspectives of the Dubna superheavy element factory In the Flerov Laboratory of Nuclear Reactions, Joint Institute for Nuclear Research (FLNR JINR), construction of a new experimental complex is currently in progress (the Superheavy Element Factory), aimed at the synthesis of new superheavy nuclides and the detailed study of those already synthesized. The project includes the construction of a new accelerator of stable and long-lived isotopes in the mass range A = 10-100 with an intensity of up to 10 pμA and energy up to 8 MeV/nucleon; construction of a new experimental building and infrastructure for housing the accelerator, with five channels for the transportation of beams to a 1200-m2 experimental hall that is equipped with systems of shielding and control for operations with radioactive materials; development of new separators of reaction products; upgrade of the existing separators; and development of new detection modules for the study of nuclear, atomic, and chemical properties of new elements. The first experiments are planned for 2018.

Introduction

The six heaviest chemical elements, with atomic numbers 113 to 118, which fill the 7th row of Mendeleev's Periodic Table, were synthesized in reactions of 48Ca ions with actinide targets in the experimental studies carried out over recent years. Over 50 new isotopes of elements 104 to 118 with maximum neutron excess were produced for the first time and their decay properties were determined in these investigations [1,2]. The new isotopes considerably filled up the Chart of the Nuclides and expanded it up to the superheavy elements with Z = 118 and N = 177. Fundamental conclusions of the modern theory concerning the limits of existence of nuclear matter have for the first time received experimental confirmation.

The discovery of the new domain (island) of stability and the very fact of the existence of superheavy elements (SHE) have posed a number of new questions associated with fundamental properties of nuclear matter:
• Can even heavier nuclei exist? Is the "Island of Stability of SHE" the last one on the Chart of the Nuclides?
• Can superheavy nuclei be formed in the process of nucleosynthesis, like the stable and long-lived nuclei in the groups of Pt, Pb, and U-Th found in Nature?
• What is the electron structure of a superheavy atom in a strong Coulomb field?
• What is the limit of Mendeleev's Periodic Table?

In search of answers to these and many related questions we have to considerably widen our knowledge about the subject of investigation. Obviously, to reach this aim we need to greatly broaden the field of work and involve new facilities. However, judging by the first attempts, going beyond the area of already synthesized superheavy elements is quite a difficult task.
SHE factory

Direct synthesis of elements with Z > 118 in fusion reactions means using projectiles heavier than Ca, since the capability of high-flux reactors to produce target material is limited to Cf isotopes. A number of attempts undertaken in various laboratories and aimed at the synthesis of elements 119 and 120 did not yield any result. The limits on the production cross sections of nuclei with Z = 120 in, for example, the reaction 54Cr + 248Cm [3], and of nuclei with Z = 119 via 50Ti + 249Bk [4], were ten to twenty times lower than the cross sections for production of isotopes of elements 114 and 115 in experiments with 48Ca. Advance in the other direction - towards the desired area of more neutron-rich isotopes of the known SHE, closer to the predicted shell closure at N = 184 - is possible only with the use of radioactive projectiles that are more neutron-rich than 48Ca. Unfortunately, the intensities of beams of radioactive nuclei at the most advanced accelerators, and even those in the design stage, are extremely low for performing this kind of experiment.

Moving in the opposite direction - towards neutron deficit in superheavy nuclei - leads to a strong decrease of the production cross section when going off the shell N = 184.

Under these circumstances, it seems appropriate to investigate in detail the already synthesized nuclei: to determine their masses, forms, and fission barriers, find their excited states, etc. Yet here we need to considerably improve the statistics. Individual events of formation and decay of superheavy nuclides that are registered in present experiments at a rate of one per day to one per month (this is considered a great achievement of the last years) are far from sufficient for these purposes. Experimental results should be represented not by single events but by hundreds and thousands. Only then will one see rare decay channels of the superheavy nuclei or study in detail nuclear transitions to various states, that is, make complete measurements in spite of the extremely low cross sections for producing superheavy elements.

Note that the present situation in some aspects reproduces the beginning of the work on synthesis of superheavy elements twenty years ago, when a new approach to the synthesis of SHE (reactions of 48Ca with actinides) was found and a significant breakthrough in experimental technique was made.

We came to the conclusion that at present, based on accumulated knowledge in the study of
• reactions of synthesis of new elements (fusion reactions with evaporation of neutrons);
• cross sections of production of the nuclei of SHE and reaction channels at various beam energies (excitation functions);
• competing reaction channels (background conditions);
• decay characteristics of the isotopes of SHE (decay modes, energies, decay times, etc.),
as well as the achievements of the last twenty years in the fields of
• the technique of physical experiment;
• the physics and technology of charged particle accelerators and plasma physics;
• new highly sensitive precision detectors for nuclear radiation;
• production of new target materials in high-flux nuclear reactors and development of new target technologies, etc.,
it is possible to increase the production rate of the known isotopes of SHE by one to two orders of magnitude and to carry out experiments aimed at the synthesis of new elements with Z > 118 by employing a new experimental complex, the "SHE Factory", and thus make a second breakthrough in the world of the heaviest nuclei and elements. The ideas incorporated in the project of the SHE Factory are associated with developing the experimental basis in several directions. These directions are:
• construction of a new powerful accelerator of stable and long-lived isotopes in the mass range A = 10-100 with intensity up to 10 pμA and energy up to 8 MeV/nucleon (see Fig. 1 and Table 1 for more details);
• construction of a new experimental building and infrastructure for placing the accelerator, with five channels for transportation of beams to a 1200-m2 experimental hall.

All major parts of the new cyclotron DC-280 have been produced and delivered to Dubna (Fig. 2).

In general, studies of heavy and superheavy nuclei are related to atomic physics, nuclear physics, astrophysics, and radiochemistry. They cover a wide range of phenomena, the study of which goes far beyond the experiments that are to be carried out at the SHE Factory. However, when experiments are performed with extremely heavy nuclei at the limit of stability and thus require extremely high sensitivity, the advantages of the SHE Factory become highly important. This is the specificity of the new complex. Its program of experimental research will be directly connected with the achievements of world science and is designed for broad international cooperation.

A long-term program of research on SHE should be developed with regard to the development of the new complex, its phased commissioning, production of the primary beam, and the start of the first experiment in 2018. Experimental research at the existing complex ECR-4M + U-400 + DGFRS and at the functioning setups will continue at the same pace until the first full-scale experiment at the SHE Factory (2017-2018). However, priority is given to the studies that will be continued at the SHE Factory.

Table 1. Expected parameters of the DC-280 cyclotron.
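The claimed one-to-two order-of-magnitude gain in production rate comes down to a simple scaling: the event rate is the product of cross section, beam intensity, and target areal density, times detection efficiency. The sketch below illustrates that arithmetic; the 1 pb cross section, 0.3 mg/cm2 target thickness, and 50% efficiency are round placeholder numbers chosen for illustration, not quoted facility parameters.

```python
# Back-of-envelope SHE production rate: rate = sigma * I_beam * N_target.
N_A = 6.022e23            # Avogadro's number [1/mol]
PB = 1e-36                # 1 picobarn in cm^2
PUA = 6.241e12            # 1 particle-microampere in ions/s

def events_per_day(sigma_pb: float, beam_pua: float,
                   target_mg_cm2: float, mass_number: float,
                   efficiency: float = 0.5) -> float:
    """Expected detected events per day for a given cross section,
    beam intensity, target areal density, and overall efficiency."""
    n_target = target_mg_cm2 * 1e-3 / mass_number * N_A   # atoms/cm^2
    rate = sigma_pb * PB * beam_pua * PUA * n_target * efficiency
    return rate * 86400.0

# A 1 pb channel on a 0.3 mg/cm^2 target of mass number 248:
for i_beam in (1.0, 10.0):   # roughly pre-Factory vs Factory-scale beam
    print(f"{i_beam:>4} puA -> "
          f"{events_per_day(1.0, i_beam, 0.3, 248):.2f} events/day")
```

With these placeholder inputs, a tenfold beam-intensity increase moves a 1 pb channel from a fraction of an event per day to a few events per day, which is exactly the regime where "hundreds and thousands" of events per campaign become reachable.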
1,835.4
2016-12-12T00:00:00.000
[ "Physics" ]
The entrance law of the excursion measure of the reflected process for some classes of Lévy processes We provide integral formulae for the Laplace transform of the entrance law of the reflected excursions for symmetric Lévy processes in terms of their characteristic exponent. For subordinate Brownian motions and stable processes we express the density of the entrance law in terms of the generalized eigenfunctions for the semigroup of the process killed when exiting the positive half-line. We use the formulae to study in-depth properties of the density of the entrance law, such as the asymptotic behavior of its derivatives in the time variable.

Introduction

It follows from excursion theory that the trajectories of a Lévy process can be decomposed using the excursions of the process reflected in its past infimum. This result justifies the importance of knowing the excursion measure of the reflected process and, more particularly, the entrance law of this measure. There are also several interesting applications of this entrance law. First, it is directly related to the potential measure of the time-space ladder height process, see Lemma 1 in [3]. Moreover, it provides a useful expression for the distribution density of the supremum of the Lévy process before fixed times, [3], [4]. More recently it has been involved in the study of the probability of creeping through curves of Lévy processes, [6]. In this article we obtain integral representations of the densities and the Laplace transforms of the entrance laws of the reflected excursions for two classes of real-valued Lévy processes. The first class consists of symmetric Lévy processes, with a particular emphasis on subordinate Brownian motions, when the Lévy measure of the underlying subordinator has a completely monotone density. The other class is that of stable processes. The presented formulae for symmetric processes are given in terms of the corresponding Lévy-Khintchine exponent Ψ(ξ) and the related generalized eigenfunctions introduced by M. Kwaśnicki in [14]. In the stable case, we based the calculations on the generalized eigenfunctions studied recently by A. Kuznetsov and M. Kwaśnicki in [13]. Then we used the formulae obtained for the entrance law densities to derive corresponding integral representations for supremum densities. Although the theory of Lévy processes is very rich and abounds in numerous general relationships, such as those coming from the Wiener-Hopf factorizations, there are few examples where explicit representations of the related densities are available. Apart from Brownian motion and the Cauchy process, some series representations were recently found in [8], [12], [7] in the case of stable processes. A different approach was presented in [16], where the theory of Kwaśnicki's generalized eigenfunctions was used to describe the first passage time density through a barrier for subordinate Brownian motions with regular Lévy measures. This concept was generalized to non-symmetric stable processes in [13]. In the present paper we stay in this framework and show that a similar approach leads to integral representations of the entrance law density, the supremum density, and the density of the joint distribution of the process itself and its supremum. Then we apply the obtained formulae to study the asymptotic behavior of the derivatives in the time variable of the entrance law densities of the reflected excursions.
Let us finally mention that these formulae can be used to perform numerical simulations and to study in-depth properties of the process coupled with its past supremum.

Preliminaries

Let (X, P) be the real-valued Lévy process whose characteristic exponent Ψ(ξ) is characterized in terms of the Lévy triplet (a, σ², Π) by the Lévy-Khintchine formula

$$E\left[e^{i\xi X_t}\right] = e^{-t\Psi(\xi)}, \qquad \Psi(\xi) = ia\xi + \frac{\sigma^2\xi^2}{2} + \int_{\mathbb{R}} \left(1 - e^{i\xi x} + i\xi x\,\mathbf{1}_{\{|x|<1\}}\right)\Pi(dx).$$

We write P_x for the law of the process starting from x ∈ ℝ. We denote by X* = −X the dual process, and P*_x stands for its law with respect to P_x. The past supremum and past infimum of X before a deterministic time t ≥ 0 are

$$\overline{X}_t = \sup_{0\le s\le t} X_s, \qquad \underline{X}_t = \inf_{0\le s\le t} X_s.$$

For given t > 0 we write $f_t(dx) = P(\overline{X}_t \in dx)$ for the corresponding distribution, and f_t(x) stands for its density with respect to the Lebesgue measure on (0, ∞) whenever it exists. Recalling that the reflected processes $\overline{X} - X$ and $X - \underline{X}$ are Markovian, we write L_t and L*_t for their local times at 0, respectively, with a fixed normalization. We write n (and n*) for the Itô measure of the excursions away from 0 of the reflected process $\overline{X} - X$ (resp. $X - \underline{X}$). Our main objects of study are the corresponding entrance laws defined by

$$q_t(dx) = n(X_t \in dx,\ t < \zeta), \qquad q^*_t(dx) = n^*(X_t \in dx,\ t < \zeta), \qquad t > 0,$$

where ζ is the lifetime of the generic excursion, and q_t(x), q*_t(x) denote the densities on (0, ∞) of q_t(dx) and q*_t(dx), whenever they exist. In this paper, it will always be assumed that 0 is regular for both half-lines (−∞, 0) and (0, ∞). In this case, the double Laplace transform of q_t(dx) is given by

$$\int_0^\infty e^{-zt} \int_{[0,\infty)} e^{-\xi x}\, q_t(dx)\, dt = \frac{1}{\kappa(z,\xi)}, \qquad z, \xi > 0, \qquad (2.1)$$

where κ(z, ξ) is the Laplace exponent of the ladder process $(L^{-1}_t, H_t)$, t < L(∞). Here $L^{-1}_t$ denotes the right-continuous inverse of L_t (the ladder time process), and the ladder height process is defined by $H_t = X_{L^{-1}_t}$. Analogous relations hold for q*_t(dx) and the Laplace exponent κ*(z, ξ) of the ladder process $((L^*_t)^{-1}, H^*_t)$. Formula (2.1) actually shows that q_s(dx) ds is the potential measure of (L^{-1}, H). We denote by h the renewal function of the ladder height process H, that is,

$$h(x) = \int_0^\infty P(H_t \le x)\, dt, \qquad x \ge 0.$$

In the light of Theorem 6 in [3], the entrance laws q_t(dx) and q*_t(dx) seem to be basic objects in the study of the supremum distributions. More precisely, under our assumption that 0 is regular for both the negative and the positive half-line, the representation (4.4) from [3] reads as relation (2.2), which, in particular, implies relation (2.3); both are used repeatedly below. Finally, for x > 0, we denote by Q*_x the law of the process killed when exiting the positive half-line; the law Q_x is defined in the same way, but with respect to the dual process. The corresponding semigroups are defined as

$$Q^*_t f(x) = E^{Q^*_x}\left[f(X_t)\right], \qquad Q_t f(x) = E^{Q_x}\left[f(X_t)\right],$$

for non-negative Borel functions f. We also write q*_t(x, dy), q_t(x, dy) and q*_t(x, y), q_t(x, y) for the corresponding transition probability measures and their densities, whenever they exist. Recall that whenever q*_t(x, ·) and q_t(x, ·) are absolutely continuous, the duality relation q*_t(x, y) = q_t(y, x) holds.

Symmetric Lévy processes and subordinate Brownian motions

This section is devoted to symmetric Lévy processes with some additional regularity assumptions on the Lévy-Khintchine exponent Ψ(ξ), presented in detail below. We also exclude compound Poisson processes from our considerations. Note that the symmetry assumptions simplify the general exposition presented in the Preliminaries, where, roughly speaking, we can remove the notation with *.
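As a short worked step previewing identity (3.1) stated below (and assuming the local-time normalization under which (2.1) holds with constant one, and the Wiener-Hopf normalization κ(z, 0) κ*(z, 0) = z, both conventions rather than results of this excerpt):

```latex
% From (2.1) with \xi = 0:
\int_0^\infty e^{-zt}\, n(t < \zeta)\, dt \;=\; \frac{1}{\kappa(z,0)} .
% For symmetric processes the two Wiener--Hopf factors coincide, so the
% normalization \kappa(z,0)\,\kappa^*(z,0) = z gives \kappa(z,0) = \sqrt{z},
% i.e. the ladder time process is a 1/2-stable subordinator. Inverting the
% Laplace transform z^{-1/2} yields
n(\zeta > t) \;=\; \frac{1}{\sqrt{\pi t}}, \qquad t > 0 .
```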
Moreover, the ladder time process is the 1/2-stable subordinator for every symmetric Lévy process, which implies that

$$n(\zeta > t) = \frac{1}{\sqrt{\pi t}}, \qquad t > 0. \qquad (3.1)$$

Finally, we recall the integral representation of the Laplace exponent of the ladder process,

$$\kappa(z,\xi) = \exp\left(\frac{1}{\pi}\int_0^\infty \frac{\xi\,\log\big(z+\Psi(\lambda)\big)}{\xi^2+\lambda^2}\, d\lambda\right), \qquad (3.2)$$

where, in the symmetric case, Ψ(ξ) is a real-valued function.

Our first result gives the expression for the Laplace transform of q_t(dx) (for fixed t > 0) in the case of symmetric Lévy processes with increasing Lévy-Khintchine exponent. This is an analogue of Theorem 4.1 in [15], where the corresponding formula for $\overline{X}_t$ was derived. Note that even though the formulae for the Laplace transforms of q_t(dx) and $P(\overline{X}_t \in dx)$ seem to be similar, passing from one to the other by using (2.3) and (3.1) is not straightforward.

Theorem 1. Let (X, P) be a symmetric Lévy process that is not a compound Poisson process. Assume that the Lévy-Khintchine exponent Ψ(ξ) of (X, P) is increasing in ξ > 0. Then the Laplace transform $\int_{[0,\infty)} e^{-\xi x}\, q_t(dx)$ admits an explicit integral representation in terms of Ψ, stated as formula (3.3).

Proof. The proof is based on the same idea as the proof of Theorem 4.1 in [15], with a slight modification of the arguments. For the completeness of the exposition and the convenience of the reader we present it below. We put ψ(ξ) = Ψ(√ξ) for ξ > 0 and define the auxiliary function ϕ(ξ, z) as in the proof of Theorem 4.1 in [15]. Obviously, for fixed ξ > 0, the function ϕ(ξ, z) is a holomorphic function of z, which is positive for z > 0. Note also that lim_{z→0+} ϕ(ξ, z) = 0 (by monotone convergence) and lim_{z→∞} ϕ(ξ, z) = 1 (by dominated convergence). Moreover, as was shown in [15], for Im z > 0 the argument of √z ϕ(ξ, z) belongs to (0, π). This is equivalent to h_ξ(z) = ϕ(ξ, z)/√z being a Stieltjes function (for fixed ξ). In general, a function g(z) is said to be a Stieltjes function if

$$g(z) = \frac{c_1}{z} + c_2 + \int_{(0,\infty)} \frac{m(ds)}{z+s}, \qquad z > 0, \qquad (3.4)$$

where c_1, c_2 ≥ 0 and m(ds) is a measure on (0, ∞) such that $\int_{(0,\infty)} (1+s)^{-1}\, m(ds)$ is finite. The constants and the measure appearing in the definition of a Stieltjes function are given by

$$c_1 = \lim_{z\to 0^+} z\, g(z), \qquad c_2 = \lim_{z\to\infty} g(z), \qquad m(ds) = -\frac{1}{\pi}\lim_{\varepsilon\to 0^+} \operatorname{Im}\, g(-s+i\varepsilon)\, ds. \qquad (3.5)$$

Note that the last limit is understood in the sense of weak limits of measures. Since lim_{z→0+} z h_ξ(z) = lim_{z→0+} √z ϕ(ξ, z) = 0 and lim_{z→∞} h_ξ(z) = lim_{z→∞} ϕ(ξ, z)/√z = 0, the constants appearing in the representation (3.4) for the Stieltjes function h_ξ(z) are zero. Moreover, the boundary values of h_ξ along the negative half-line can be computed by substituting z = ψ(λ²). Therefore, by (3.5), the measure in the Stieltjes representation of h_ξ can be identified for every z > 0. Thus, the Laplace transform of the right-hand side of (3.3) is equal to 1/κ(z, ξ), and the theorem follows from the uniqueness of the Laplace transform.

From now on, for the rest of the section, we will follow the approach presented in [16] and restrict our considerations to the case where (X, P) is a subordinate Brownian motion whose underlying subordinator has a completely monotone density. The process (X, P) has the latter form if and only if its characteristic exponent Ψ(ξ) can be written as Ψ(ξ) = ψ(ξ²) for a complete Bernstein function (CBF) ψ (see Proposition 2.3 in [14]), that is,

$$\psi(\xi) = a_1 + a_2\,\xi + \int_{(0,\infty)} \frac{\xi}{\xi+\zeta}\,\frac{\mu(d\zeta)}{\zeta},$$

where a_1 ≥ 0, a_2 ≥ 0 and µ(dζ) is a Radon measure on the positive half-line such that $\int \min(\zeta^{-1}, \zeta^{-2})\,\mu(d\zeta)$ is finite. As in the Stieltjes function representation, the above constants and the measure µ are determined by suitable limits, as follows:

$$a_1 = \lim_{\xi\to 0^+}\psi(\xi), \qquad a_2 = \lim_{\xi\to\infty}\frac{\psi(\xi)}{\xi}, \qquad \mu(d\zeta) = \frac{1}{\pi}\lim_{\varepsilon\to 0^+}\operatorname{Im}\,\psi(-\zeta+i\varepsilon)\, d\zeta.$$

The spectral theory of subordinate Brownian motion on a half-line was developed by M. Kwaśnicki in [14], where the generalized eigenfunctions F_λ(x) of the transition semigroup Q_t of the process (X, P) killed upon leaving the half-line [0, ∞) were constructed. Some additional properties of F_λ(x) were also studied in [16].
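The reconstructed representation (3.2) can be sanity-checked numerically. For Brownian motion one has Ψ(λ) = λ², and the ladder exponent should reduce to the known closed form κ(z, ξ) = √z + ξ. The sketch below is an independent check written for this rewrite, not code from the paper.

```python
import numpy as np
from scipy.integrate import quad

def kappa(z: float, xi: float, Psi) -> float:
    """Ladder-process Laplace exponent of a symmetric Levy process,
    kappa(z, xi) = exp( (1/pi) * int_0^inf xi*log(z + Psi(lam)) /
    (xi^2 + lam^2) dlam ), cf. the representation (3.2)."""
    integrand = lambda lam: xi * np.log(z + Psi(lam)) / (xi**2 + lam**2)
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return float(np.exp(val / np.pi))

brownian = lambda lam: lam**2        # Psi(lambda) = lambda^2
for z, xi in [(1.0, 1.0), (4.0, 0.5), (0.25, 2.0)]:
    approx = kappa(z, xi, brownian)
    exact = np.sqrt(z) + xi          # known closed form for this Psi
    print(f"z={z}, xi={xi}: quad={approx:.6f}, exact={exact:.6f}")
```

Setting ξ → 0 in the same integral recovers κ(z, 0) = √z, consistent with (3.1).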
For a fixed CBF ψ and λ > 0 the generalized eigenfunctions of Q t with eigenvalue e −tψ(λ 2 ) are given by where the phase shift ϑ λ belongs to [0, π/2) and is given by Recall the following upper-bounds (Proposition 4.3 and Proposition 4.5 in [16] respectively) The function G λ is the Laplace transform of the finite measure Here ψ + denotes the holomorphic extension of ψ in the complex upper half-plane. The Laplace transform of F λ (x) is given by Recall also the following estimates Proposition 5.4 in [16] states that for unbounded ψ such that lim sup λ→0 + ϑ λ < π/2 we have the following limiting behavior and the convergence is locally uniform in x ≥ 0. The functions F λ (x) were used to find the integral representations for the density function of τ − 0 and its derivatives (see Theorem 1.5 in [16]). In the next Theorem we show that an analogous representation can be obtained for the density of the entrance law. Proof. The proofs of both formulae are direct consequences of the integral representation (3.15), the relations (2.2), (2.3) and (3.1) together with the Fubini's theorem, which can be applied due to the integral condition (3.14). The representation (3.15) enables to compute the derivatives of q t (x) and examine its behavior in two asymptotic regimes: as t goes to infinity and x goes to 0. It is described in the following theorem. 17) where ψ −1 denotes the inverse of ψ, and the convergence is locally uniform in x. This also holds for α 0 = 1 with the additional assumption 19) where p t denotes the density of the transition semigroup of (X, P). using (3.20) and dominated convergence we obtain Finally, the expression can by bounded for every t > t 1 by with some c 3 = c 3 (n, t 0 , t 1 ) > 0, which together with the regularity of ψ −1 at zero and estimates (3.20) implies that 1 vanishes uniformly in x, as t → ∞. Collecting all together we arrive at Because the justification of the fact that under assumption from point (b) we have follows in the same way as in the proof of Theorem 1.7 in [16], we omit the proof. Note that using (3.14) we can rewrite the last integral as where the last equality follows simply by integration by parts. Finally, the regular behavior of ψ at infinity implies that e −tψ(λ 2 ) is in L 1 (R, dλ), which in particular means that the transition probability density is given by the inverse Fourier transform Combining all together we get (3.19), which ends the proof. In addition to numerical applications of our results, they can be used to obtain more transparent representations as in the following example related to the Cauchy process. Proposition 1. For the symmetric Cauchy process, i.e. ψ(ξ) = √ ξ, we have Then the density f t (x) of the past supremum at time t of (X, P) can be derived from the above expression together with (2.3) and (3.1). Proof. Since ψ(ξ) = ξ 1/2 , ψ ′ (ξ) = 1/(2 √ ξ) the formula (3.15) reads as where we used the scaling property F λ (x) = F 1 (λx) = F x (λ). By the Plancherel's theorem we get, for fixed b ∈ (0, t), that where f (x) = √ 2x/π. The Laplace transform of f can easily be computed as follows Formula (3.11) gives Substituting u = z/s in the last integral we get Finally, the function studied in details in [11], is holomorphic in the region. We recall (see (3.13) in [11]) that and (see (4.1) in [11]) that where σ(z) = 1 for Im(z) > 0 and σ(z) = −1 for Im(z) < 0. 
Consequently, defining (for fixed x) the function of complex variable z it is easy to see that G x (z) is a meromorphic function on {z ∈ C : Re(z) < t} \ (−∞, 0] with single poles at ix and −ix. To evaluate the integral (3.21) we integrate G x over the (positively oriented) curve consisting of (see Figure 1. The contour of integration. First we compute the residua of G x at ix and −ix. By (3.22), we have Since (t ± ix) 3/2 = (t 2 + x 2 ) 3/4 e ±3i/2 arctan(x/t) , we arrive at Using the relation (3.23) we obtain where the last integral, after substituting y = −xu, is equal to Using the bounds (3.12) we can write It implies that the integrals of G x (z) over γ 1 , γ 2 , γ 5 and γ 6 vanish as n goes to infinity. Since G x (z) is bounded in the neighborhood of 0 (Re(z) > 0), the same holds for the integral over the semi-circle γ 7 . Now we can finish the computations by applying the residue theorem in order to get Taking into account (3.25) and (3.24) and dividing both sides by 2 √ π lead to the result. Remark 2. It is also possible to find similar formula for the entrance law density of the symmetric α-stable process with index α ∈ (0, 1). Using the scaling property F λ (x) = F 1 (λx) and writing t (u) is the density of the α-stable subordinator we obtain The inner integral can be evaluated similarly as in Proposition 1. Stable processes For the rest of the paper we focus on stable processes and use the theory of the corresponding generalized eigenfunctions developed in [13]. We assume that X is a stable process with characteristic exponent Ψ(ξ) = |ξ| α e πiα(1/2−ρ)sign(ξ) , ξ ∈ R. We exclude spectrally one-sided processes from our considerations, i.e. we assume that α ∈ (0, 1] and ρ ∈ (0, 1) or α ∈ (1, 2], but then we assume that ρ ∈ (1 − 1/α, 1/α). We write ρ * = 1 − ρ and define non-symmetric analogous of F 1 (x) defined in Section 3 for stable processes as follows The function S 2 (z) = S 2 (z; α) is the double sine function uniquely determined by the following functional equations together with the normalizing condition S 2 ((1 + α)/2) = 1 (see [9], [10] and Appendix A in [13] for equivalent definitions and further properties). We define F * (x) and G * (x) by the same formulae as in (4.1) and (4.2) but with ρ replaced by ρ * (and consequently ρ * replaced by (ρ * ) * = ρ). Note that whenever ρ > 1/2 the oscillations of F coming from the sine function are multiplied by the exponentially decreasing factor, but then F * oscillates exponentially, when x → ∞ and the situation is reversed for ρ < 1/2. The behaviour of F at zero is described by (see the proof of Lemma 2.8 in [13]) Although the constant √ α 2 S 2 (αρ) Γ(1+αρ * ) was not specified in [13], using (1.10) and (1.19) from [13], we obtain that Consequently, using the Karamata's Tauberian theorem and the Monotone Density Theorem we obtain (4.3). Moreover, if ρ > 1/2 then Even though the functions F and F * do not simultaneously belong to L 2 (0, ∞) (for ρ = 1/2), they can be understood as the generalized eigenfunctions of the semigroups Q * t and Q t , respectively (see Theorem 1.3 in [13]). Moreover, using Theorem 1.1 in [13] the transition probability density of the process X killed when exiting the positive half-line is given by whenever α > 1. Note that the restriction on α ensures that the exponential oscillations of F (or F * ) are suppressed by the factor e −tλ α , which makes the integral convergent. The formula (4.5) is the analogue of the integral representation for subordinate Brownian motions presented in [14]. 
Note also that, assuming ρ = 1/2, we have F(x) = F*(x) = F_1(x), where F_λ(x) is the generalized eigenfunction defined in Section 3 for the symmetric α-stable process. By duality, we have the corresponding integral representation for q_t(x), with F*(x) and ρ* replaced by F(x) and ρ. The analogue of Theorem 3 can now be proved.
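As remarked in the introduction, these formulae lend themselves to numerical work on the process coupled with its past supremum. For an independent cross-check of any density computed from the integral representations, one can Monte-Carlo the supremum directly. The sketch below does this for the symmetric Cauchy process (α = 1, ρ = 1/2) via a random-walk approximation; the discretization is an assumption-level device for illustration, not the paper's method.

```python
import numpy as np
from scipy.stats import levy_stable

def supremum_samples(alpha: float, t: float, n_steps: int,
                     n_paths: int, seed: int = 0) -> np.ndarray:
    """Approximate sup_{s<=t} X_s for a symmetric alpha-stable process
    (Psi(xi) = |xi|^alpha) by the running maximum of a random walk with
    stable increments; the increment scale over dt follows from
    self-similarity: dt**(1/alpha)."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    inc = levy_stable.rvs(alpha, 0.0, scale=dt**(1.0 / alpha),
                          size=(n_paths, n_steps), random_state=rng)
    paths = np.cumsum(inc, axis=1)
    return np.maximum(paths.max(axis=1), 0.0)   # sup includes X_0 = 0

sup = supremum_samples(alpha=1.0, t=1.0, n_steps=200, n_paths=5000)
hist, edges = np.histogram(sup, bins=50, range=(0.0, 5.0), density=True)
print("estimated supremum density near x = 1:",
      hist[np.searchsorted(edges, 1.0) - 1])
```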
4,467.4
2019-01-25T00:00:00.000
[ "Mathematics" ]
Antinociceptive Activity of Zanthoxylum piperitum DC. Essential Oil Zanthoxylum piperitum DC. (ZP) is a traditional medicinal plant used mainly in Asian countries such as Japan. This study aimed to investigate the antinociceptive effect of ZP essential oil (ZPEO). The major component present in the essential oil was beta-phellandrene (29.39%). Its antinociceptive activity was tested in animal models (formalin-, capsaicin-, and glutamate-induced paw licking and hot plate). The anti-inflammatory effect was evaluated through carrageenan-induced leukocyte migration into the subcutaneous air pouch (SAP), with measurement of cytokines. The results showed an antinociceptive effect of ZPEO in the first phase of the formalin-induced licking test and in the glutamate and hot plate tests. However, ZPEO had no effect on reducing paw licking induced by capsaicin. Finally, ZPEO had no effect against inflammation induced by carrageenan.

Introduction

Essential oils are naturally occurring complex mixtures composed mainly of monoterpenes. They have been used in several industries around the world, especially for cosmetics, including beauty creams and perfumes, due to their pleasant scents. Essential oils have also been used to treat several diseases, and some of them have been tested for medicinal purposes, such as treating pain and inflammation [1]. Known in Japan as Asakura sansho, Zanthoxylum piperitum (ZP) first attracted the attention of researchers due to its aroma [2,3]. Later, the research became more focused on its antioxidative effect, yielding very positive outcomes for the cosmetic industry. In 2001, Hashimoto et al. [4] reported the ability of an aliphatic acid from ZP to induce relaxation in the circular muscle of the gastric body. Perhaps due to its promising results as an antioxidant, ZP was also tested for anti-inflammatory activities targeting nitric oxide and cytokine production. This treatment was made from fresh and dried fruits of ZP and showed an inhibitory effect on cytokine (TNF-α and IL-1β) production by mouse macrophage cells [5]. The anti-inflammatory effect correlated with the production of nitric oxide was also described after testing ZPEO. The essential oil also had an effect on reduction of cyclooxygenase-2 expression and activity. Later, a glycoprotein (24 kDa) was isolated from ZP fruits to investigate its anti-inflammatory potential. It was shown to suppress production of the cytokines IL-1β, IL-6, and TNF-α and expression of inducible nitric oxide synthase (iNOS), COX-2, and matrix metalloproteinase-9 (MMP-9) [6]. This glycoprotein was reported to prevent inflammatory gastrointestinal diseases [7], and a larger glycoprotein (115 kDa) was also effective in blocking proinflammatory signals [8]. The strong correlation between antioxidant compounds and anti-inflammatory activities was later reported by Diaz et al. [9]. Furthermore, substances that are antioxidant and anti-inflammatory have been reported to be likely to have an anticancer effect [10]. This anticancer effect was first described in relation to ZP by the Japanese group of Hirokawa et al. [11], suggesting that ZP extract could potentially be useful against breast cancer. ZP was also tested as part of a herbal formulation for periodontitis, showing substantial improvement, especially in the recovery of gingival collagen tissue [12]. In Korea ZP is used in traditional medicine as a diuretic and to treat digestive disorders.
It is also used to help the cardiovascular system [12]. Some ZP compounds have been described to inhibit cholesterol acyltransferase, thus contributing to helping the cardiovascular system, which validates the traditional use, since cholesterol ester plays an important role in cardiovascular diseases [13]. The fact that ZP is used for digestive disorders could indicate an effect on stomach pain, which might suggest pain relief properties for this species. As part of our continuous interest in the search for pharmacological effects of natural products, and because ZP is widely used to treat several disorders, in this work we focused our efforts on the evaluation of the possible antinociceptive effect of the essential oil obtained from Zanthoxylum piperitum.

Plant Material. Plant material (aerial parts) of Z. piperitum was collected from the Glasnevin Botanic Gardens, Dublin, and dried at room temperature for two weeks. Dr. Colin Kelleher from the Glasnevin Botanic Gardens identified the species, and a herbarium sample is kept in the Botanic Garden under the collection number 1984.1920.

Isolation of the Essential Oil. Air-dried, to constant weight, plant material (3 batches of 250 g of aerial parts) was subjected to hydrodistillation with circa one liter of distilled H2O for 2.5 h using the original Clevenger-type apparatus. The yield was 0.015% (w/w, dried weight basis) for the oil isolated from Z. piperitum (ZPEO). The obtained oil was separated by extraction with Et2O (Merck, Germany), dried over anhydrous Na2SO4 (Aldrich, USA), and immediately analysed.

Chemical Analysis of ZPEO. Qualitative analyses were carried out on a GC-QP2010 PLUS Shimadzu with a ZB-5MS fused silica capillary column (30 m × 0.25 mm × 0.25 μm film thickness) under the experimental conditions reported for the GC-FID analysis. The essential oil components were identified by comparing their retention indices and mass spectra to published data and by computer matching with the WILEY 275 and National Institute of Standards and Technology (NIST 3.0) libraries provided by a computer-controlled GC-MS system. The results were also confirmed by comparing the compounds' elution order with their relative retention indices reported in the literature [14]. The retention indices were calculated for all the volatile constituents using the retention data of linear n-alkanes C8-C24.

Animals. Swiss Webster mice (20-25 g, two months old), donated by Instituto Vital Brazil (Niterói, Rio de Janeiro, Brazil), were used in this study. The animals were maintained under standard conditions (room with a light-dark cycle of 12 h, 22 ± 2 °C, 70% to 80% humidity, and food and water ad libitum). Twelve hours before the assays the animals were fasted in order to avoid food interference with the absorption of the tested substances. Animals were acclimatized to the laboratory conditions for at least 1 h before each test onset and were used only once throughout the experiments. All protocols were conducted in accordance with the Guidelines on Ethical Standards for Investigation of Experimental Pain in Animals, followed the principles and guidelines adopted by the National Council for the Control of Animal Experimentation (CONCEA), were approved by the Biomedical Science Institute/UFRJ Ethical Committee for Animal Research, and received the number DFBCICB015-04/16. All experimental protocols were performed during the light phase. Animal numbers per group were kept at a minimum and according to rules from CONCEA.
At the end of each experiment mice were killed by a ketamine/xylazine overdose.

Formalin-Induced Acute Pain. Twenty microliters of 2.5% formalin (37% formaldehyde) was injected in the plantar region of the right hind paw of mice 30 min after oral treatment with ZPEO (10, 30, or 100 μL/kg) or vehicle (oil), or 1 hour after oral treatment with morphine (2.5 mg/kg) or acetylsalicylic acid (200 mg/kg). The animals were placed individually in a transparent glass chamber, and the duration of time (in seconds) that they spent licking their paw after injection of formalin was recorded and analysed over two separate periods: 0-5 minutes after injection (named the early phase, or neurogenic pain) and 15-30 minutes after injection (named the late phase, or inflammatory pain).

Capsaicin-Induced Nociception. This test was based on the method described by Sakurada et al. [15] with some modifications. Capsaicin (1.6 μg/paw) was injected into the plantar region of the right hind paw of the mice one hour after treatment. The animals were placed individually in a transparent glass chamber, and paw licking duration (seconds) was recorded (0-5 minutes after capsaicin injection) and analysed.

Glutamate-Induced Nociception. This method was first described by Beirith et al. [16]. One hour after oral treatment with ZPEO, the plantar region of the right hind paw of the mice was injected with 20 μL of glutamate solution in PBS (3.7 ng/paw). The animals were placed individually in a transparent glass chamber, and paw licking duration (seconds) was recorded (0-15 minutes after glutamate injection) and analysed.

Central Nociception: Hot Plate Test. This test was based on the method described by Ohlsson [17]. Mice were treated with ZPEO (10, 30, or 100 μL/kg, p.o.), vehicle (oil), or morphine (2.5 mg/kg, p.o.). They were placed on the hot plate apparatus (Insight, Brazil), kept at a constant temperature of 55 ± 0.5 °C. The latency time until the animal began jumping, licking, or shaking the paw was recorded. The measurements occurred before treatment (baseline, the mean of the readings 60 and 30 minutes before treatment) and 30, 60, 90, 120, and 180 minutes after treatment. In order to prevent tissue damage to the paw, a maximum exposure time (cut-off) of the animal's paws to the heated plate was established. Aiming to investigate the antinociceptive mechanism involved, the animals were treated intraperitoneally, 15 minutes before the oral treatment with ZPEO, with naloxone (a nonselective antagonist of the opioid receptor, 1 mg/kg) or atropine (a nonselective antagonist of the muscarinic receptor, 1 mg/kg). The results for the hot plate test were expressed as a percentage increase compared to baseline (%ICB), calculated by the formula %ICB = (latency × 100/baseline) − 100, and as the area under the curve.

2.9. Acute Toxicity. To exclude possible toxicity in bone marrow and circulating leukocytes, 24 hours after treatment with 100 μL/kg of ZPEO mice were anesthetized with ketamine/xylazine, blood was collected from the orbital plexus into a heparinized tube, and after that the mice were euthanized. The bone marrow was collected from the animal's femur by washing the cavity with 1 mL of PBS. Haemogram analysis was performed in a CellPocH-100iV Diff (Sysmex) hematology analyser.

Inflammation Model: Subcutaneous Air Pouch (SAP) Model. This model was described by Sedgwick et al. [18] with modifications made by Raymundo and colleagues [19]. The animals received a dorsal subcutaneous injection of sterile air (10 mL) and an addition of 7 mL of air on the third day.
On the sixth day, animals received a subcutaneous injection of sterile carrageenan solution (1%; 1 mL). Mice were pretreated with vehicle or ZPEO (10, 30, or 100 μL/kg) 1 h before the carrageenan injection into the SAP. The control group received an injection of sterile PBS (1 mL) into the SAP. Animals were sacrificed 24 h after the carrageenan injection. The cavity was washed with 1 mL of PBS and the exudates were collected. Total cell counts were carried out on the exudates using a CellPocH-100iV Diff (Sysmex) hematology analyser. The exudates were centrifuged at 12,000 rpm for 8 min at 4 °C, and aliquots of the supernatants were stored at −20 °C for measurement of tumour necrosis factor-α (TNF-α) and extravasated protein. The TNF-α measurement was carried out by enzyme-linked immunosorbent assay (ELISA) according to the manufacturer's instructions (B&D, USA). Extravasated protein was determined using the BCA method (BCA6 Protein Assay Kit, Pierce). The results are expressed as pg/mL of TNF-α or mg/mL of protein.

2.11. Statistical Analysis. All experimental groups consisted of 6-10 mice. The results are presented as the mean ± SD. Statistical significance between groups was assessed by applying one-way analysis of variance (ANOVA) followed by Dunnett's and Bonferroni's tests using GraphPad Prism 5.0 software. p values less than 0.05 (p < 0.05) were considered significant.

GC-FID and GC/MS Analyses. Analysis of the essential oil was carried out by GC and GC/MS. The GC/MS analyses (three repetitions) were performed on a GC-QP2010 PLUS Shimadzu with a ZB-5MS fused silica capillary column (30 m × 0.25 mm × 0.25 μm film thickness) coupled with a 5975B mass-selective detector from the same company. The injector and interface were operated at 260 °C and 200 °C, respectively. The oven temperature was raised from 60 to 240 °C at a heating rate of 3 °C min−1 and then isothermally held for 10 min. He at 1.0 mL min−1 was used as the carrier gas. The sample, 1 μL of the solution in Et2O (10 mg in 1 mL of Et2O), was injected in a pulsed split mode (the flow was 1.5 mL/min for the first 0.5 min and was then set to 1.0 mL/min throughout the remainder of the analysis; split ratio 40:1). The mass-selective detector was operated at an ionization energy of 70 eV, in the 35-650 amu range, with a scanning speed of 3 scans/sec. The GC (FID) analysis was carried out under the same experimental conditions, using the same column and the same type of gas chromatograph as described for the GC/MS. The percentage composition was computed from the total ion chromatogram peak areas without the use of correction factors. Qualitative analysis was based on the comparison of the compounds' linear retention indices, relative to the retention times of C8-C24 n-alkanes on the DB-5MS column, with those reported in the literature [14], and by comparison of their mass spectra with those from the Wiley 6, NIST07, and MassFinder 2.3 libraries (Figure 1 and Table 1).

Effect of Z. piperitum on Formalin-Induced Acute Pain. In the formalin-induced acute pain test, all doses (10, 30, and 100 μL/kg) were able to decrease paw licking in the first phase of the test. The doses reduced the licking time by 28%, 34%, and 43.3%, respectively. However, ZPEO did not decrease licking in the second phase of the response to formalin injection (Figure 2).

Effect of Z. piperitum on Capsaicin-Induced Nociception. In order to verify whether ZPEO would interfere with TRPV1 receptors, it was tested in a model of pain induced by capsaicin.
The result of this evaluation showed that oral administration of ZPEO (100 μL/kg) was unable to decrease licking induced by capsaicin (Figure 4).

Effect of Z. piperitum on Thermal Nociception. In the hot plate test, which measures central antinociception, ZPEO was only able to increase the animals' paw withdrawal threshold at the higher dose of 100 μL/kg. The lower dose of 30 μL/kg did not show activity. Treatment of the animals with either naloxone, a nonselective antagonist of the opioid receptor, or atropine, a nonselective antagonist of the muscarinic receptor, did not have any effect on that antinociception (Figure 5).

Effect of Z. piperitum on Leukocyte Migration into the Subcutaneous Air Pouch (SAP). ZPEO did not reduce the number of leukocytes that migrated to the SAP after carrageenan injection (Figure 6), nor did it have an effect on TNF-α production by leukocytes or on the extravasation of protein (Figures 7 and 8).

3.6. Acute Toxicity. Twenty-four hours after oral administration of ZPEO (100 μL/kg), aliquots of blood and femoral flushes were obtained and leukocyte counts were performed. The results indicated that the essential oil did not affect leukocyte numbers either in bone marrow or in blood (data not shown).

Discussion

The formalin test is believed to be a test that closely simulates clinical pain because it generates tissue injury. The test is known to induce two phases and predominantly involves C fibres. The early phase of this test involves the participation of substance P and bradykinin, while the later phase, also known as the inflammatory pain phase, involves histamine, serotonin, prostaglandin, and again bradykinin [20]. In this study, ZPEO was able to decrease paw licking in the first phase but not in the second phase. The fact that bradykinin appears in both phases could indicate that the effect observed is not through bradykinin interference. More recent publications indicate the participation of glutamate in both phases of the formalin test, and a special participation of adenosine A2 and NMDA glutamate receptor binding in the first phase of the formalin test [21]. Interestingly, in the present research, ZPEO was effective in reducing nociception in the glutamate-induced licking test even at the lower dose of 10 μL/kg. This result suggests that ZPEO could act on glutamatergic pathways, possibly involving the same NMDA receptor. An antagonist of this receptor could be used in future studies to check this. On the other hand, only the higher dose of 100 μL/kg had an effect in the hot plate test, which measures central nociceptive activity and whose mechanism involves mainly Aδ fibres but also has involvement of glutamate receptors. Drugs such as morphine and oxotremorine are, respectively, opioid and muscarinic receptor agonists, and they also have an antinociceptive effect in the hot plate test [22]. The use of the opioid and cholinergic antagonists did not decrease the antinociception observed for ZPEO. Therefore, this suggests that the effect of ZPEO observed in the hot plate test is mediated through neither the opioid nor the muscarinic receptors. For this reason, it is very likely that the effect of ZPEO observed in both models is due to the ability of ZPEO compound(s) to inhibit the excitatory transmission induced by glutamate.
This effect is also observed using morphine, and as shown in the results this drug has an effect in the hot plate test and a potent effect in the first phase of the formalin- and glutamate-induced licking tests. Nevertheless, the fact that ZPEO antinociception is not mediated through the opioid receptor makes it an ideal candidate as a potential analgesic, since medicines that act through this mechanism have been shown to be more likely to cause side effects such as tolerance, hyperalgesia, and drug dependence [23]. ZPEO did not reduce capsaicin-induced licking. This result indicates that the compounds in ZPEO do not act on TRPV1 receptors, although this receptor also plays an important role in the hot plate model and in its connection with the glutamatergic pathway [24]. In the present study ZPEO had no effect on inflammation-inducing mediators such as TNF-α. The anti-inflammatory effect of ZPEO has been reported by several research groups. Reference [25] showed an effect of ZPEO on cyclooxygenase, an enzyme involved in the production of prostaglandin, an inflammatory mediator. Regarding mediators, Yang et al. [26] reported an effect of ZP on TNF-α using ZP compounds isolated from pericarps; these compounds were ZP amides (A, B, D, and F), bungeanumamide A, tumuramide C, hypericin, sesamin, and quercitrin. Also, the ability to suppress TNF-α, interleukin- (IL-) 1β, and interleukin-6 was reported earlier by Kim et al. [27]. The major compounds described in that research were octanoic acid (13.4%), n-heptanol (9.8%), and 1-octanol (8.1%). None of these compounds were present in the ZPEO; perhaps this could explain the lack of anti-inflammatory activity of ZPEO. The toxicity of ZP has been examined by other researchers, who found it to be phytotoxic owing to the presence of the compound eucarvone [28]; the cell death occurred due to overproduction of reactive oxygen species (ROS). Eucarvone was not present in the ZPEO analysed in the present study. On the other hand, Lee et al. [6] reported antioxidant and protective effects against hepatotoxicity for glycoproteins from ZP. This protective effect is obtained by inducing apoptosis and enhancing the activity of natural killer cells [29]. This emphasises the importance of establishing the chemical identity of the ZP essential oil under test.

Conclusions
The essential oil of Z. piperitum has a significant antinociceptive activity. This effect is not mediated through opioid or muscarinic receptors. However, it appears to interfere with the glutamatergic pathway. These findings suggest that Z. piperitum essential oil with the chemical profile presented in this study has potential as an analgesic medicine, and the fact that its effect is not mediated through the opioid mechanism suggests that it is less likely to cause serious side effects.
Quantum Extremal Islands Made Easy, Part II: Black Holes on the Brane

Abstract: We discuss holographic models of extremal and non-extremal black holes in contact with a bath in d dimensions, based on a brane world model introduced in [1]. The main benefit of our setup is that it allows for a high degree of analytic control as compared to previous work in higher dimensions. We show that the appearance of quantum extremal islands in these models is a consequence of the well-understood phase transition of RT surfaces, and does not make any direct reference to ensemble averaging. For non-extremal black holes the appearance of quantum extremal islands has the right behaviour to avoid the information paradox in any dimension. We further show that for these models the calculation of the full Page curve is possible in any dimension; the calculation reduces to numerically solving two ODEs. In the case of extremal black holes in higher dimensions, we find no quantum extremal islands for a wide range of parameters. In two dimensions, our results agree with [2] at leading order; however, a finite UV cutoff introduced by the brane results in subleading corrections. For example, these corrections result in the quantum extremal surfaces moving further outward from the horizon, and shift the Page transition to a slightly earlier time.

Understanding the quantum description of black holes remains a central question in theoretical physics. One unresolved question is the fate of information during black hole evaporation. In his seminal work, Hawking argued that in a quantum theory black holes evaporate into a mixed state of radiation, independently of how the black hole was formed [3][4][5]. Of course, this is in tension with the assumption that to an outside observer, the black hole looks like an ordinary, unitary quantum mechanical system, e.g., as suggested by the AdS/CFT correspondence [6,7]. This tension is colloquially known as the black hole information paradox [8]. One way of sharpening the paradox is to consider the von Neumann entropy of the Hawking radiation produced during black hole evaporation.
Assuming the gravitational system begins in a pure state, this entropy gives a measure of the amount of entanglement between the radiation and the black hole. According to Hawking's original calculation, the entanglement increases monotonically throughout the evaporation process since the radiation is thermal. On the other hand, unitary evolution would require that the thermodynamic entropy of the black hole, which is proportional to its horizon area [9][10][11], set an upper bound on the entanglement entropy of the radiation. Since the former decreases as the black hole radiates, at some time - known as the Page time - the thermodynamic entropy of the black hole will equal the entropy of the radiation, and the latter entropy must then decrease in the subsequent evolution, reaching zero when the black hole has disappeared. That is, subtle correlations between the quanta emitted at early and late times must produce a purification of the final state, in a unitary evolution of the full system. This qualitative behaviour of the radiation's entropy as a function of time is known as the Page curve [12] - see also [7]. While reconciling Hawking's calculation with the idea that quantum gravity is unitary was a longstanding puzzle, recent progress has made it possible to compute the Page curve in a controlled manner [13][14][15]. The new approach builds on insights coming from holographic entanglement entropy [16][17][18][19] and its extension to include quantum contributions [20,21]. It is best understood in a setting where a black hole is coupled to an auxiliary, non-gravitational reservoir - referred to as the bath - which captures the Hawking radiation. This setup can be interpreted as an idealized picture, where we split the spacetime into two regions: the first, in which gravity is important, is close to the black hole, while the second region is far away, where gravitational effects are negligible, at least semi-classically. In this situation, it was argued that instead of using Hawking's calculation, the true entropy of the Hawking radiation captured in a region R of the bath should be calculated using the so-called island rule [15]

$$S_{\rm EE}(R) = \min\Big\{ \underset{\rm islands}{\rm ext} \Big[\, S_{\rm QFT}(R \cup {\rm islands}) + \frac{A(\partial({\rm islands}))}{4G_{\rm N}} \,\Big] \Big\}\,. \qquad (1.1)$$

This formula instructs us to evaluate the (semiclassical) entanglement entropy of the quantum fields in the bath region R combined with any codimension-two - and possibly disconnected - subregions in the gravitating region. The boundary of the candidate islands also contributes a gravitational term in the form of the usual Bekenstein-Hawking entropy. One extremizes the right-hand side of eq. (1.1) over all such choices, and if the latter yields multiple extrema, the correct choice is the one that yields the smallest entropy for R. If this procedure yields a solution with a nontrivial region 'islands', the latter is called a quantum extremal island - see [37] for a recent review. For an evaporating black hole, an obvious choice for the island region which extremizes the entropy functional is the empty set, in which case the result of eq. (1.1) agrees with Hawking's calculation. However, if radiation in the region R shares a large amount of entanglement with the quantum fields behind the horizon, new quantum extremal islands can appear. In particular, this occurs for an old evaporating black hole, in which case a quantum extremal island appears just behind the horizon [14]. It turns out that after the Page time, this configuration yields the minimal entropy in eq. (1.1).
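Schematically, and purely as a heuristic illustration rather than a formula taken from the works cited above, the competition encoded in eq. (1.1) can be summarized as

$$S_{\rm EE}(R)(t) \;\approx\; \min\Big( \underbrace{S_{\rm no\text{-}island}(t)}_{\text{grows with } t}\,,\;\; \underbrace{\frac{A(\partial\,{\rm islands})}{4G_{\rm N}} + S_{\rm QFT}^{\rm island}}_{\text{approximately constant}} \Big)\,,$$

where the first branch reproduces Hawking's growing answer, the second branch is capped near the black hole's coarse-grained entropy, and the crossover between the two defines the Page time.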
As time evolves further, the entropy of R is controlled by the horizon area of the black hole, which enters through the second term in eq. (1.1). Hence as the black hole evaporates, the latter shrinks to zero size and the island rule (1.1) gives a unitary Page curve. Eq. (1.1) was motivated in part by analyzing a "doubly-holographic" model in [15]. This model provides three different descriptions of the physical phenomena: first, from the boundary perspective, the system consists of two (one-dimensional) quantum mechanical systems, which are entangled in a thermofield double state. Further, one of the quantum mechanical systems is coupled to a two-dimensional holographic CFT, which plays the role of the bath - see figure 1a. With the brane perspective, the quantum mechanical systems are replaced by their holographic dual, a two-dimensional black hole in JT gravity. The latter has an AdS_2 geometry, which also supports another copy of the two-dimensional holographic CFT - see figure 1b. Finally, with the bulk perspective, the holographic CFT is replaced everywhere with three-dimensional Einstein gravity in an asymptotically AdS_3 geometry. The latter effectively has two boundaries: the standard asymptotically AdS boundary and the region where JT gravity is supported, which is referred to as the Planck brane - see figure 1c. An advantage of working in the bulk perspective is that entanglement entropies of subregions in the bath can be computed geometrically using the usual rules of holographic entanglement entropy [16,17,59], taking into account that RT surfaces can also end on the Planck brane [60,61].

Figure 1: Illustration of doubly-holographic models: The top row illustrates (a time slice of) the three perspectives of the model in [15], while the bottom row displays the analogous descriptions of our construction in higher dimensions [1]. In the latter, we are using the global conformal frame where the boundary CFT lives on R × S^{d−1} and the conformal defect appears on the equator of the (d−1)-sphere - see discussion in section 2 and [1]. The bottom row reduces to the top upon setting d = 2 and taking a Z_2 quotient across the defect in the boundary or the brane in the bulk. The boundary, brane and bulk gravity perspectives correspond to panels a & d, b & e, and c & f, respectively.

One direction for progress is to understand the Page curve and quantum extremal islands in higher dimensions. While limited results have been obtained on this front [14,[30][31][32][33][34]], we focus here on the holographic model which we introduced in [1]. Our model allows us to obtain analytic results, while being powerful enough to do calculations in the regime where the gravitational theory on the brane is well-approximated by Einstein gravity. In our previous paper, we showed that quantum extremal islands can appear in any spacetime dimension, and clarified several of the properties of the doubly-holographic model in [15]. Here, we will extend our earlier work and discuss the presence of quantum extremal islands for black holes coupled to a bath at finite temperature. That is, our analysis provides a higher dimensional extension of the two-dimensional scenario considered in [2]. The key feature of our holographic model [1] is that it reproduces the three descriptions of the underlying physics discussed above for the doubly-holographic model of [15].
From the boundary perspective, our system consists of a d-dimensional holographic CFT coupled to a codimension-one conformal defect, as shown in figure 1d. Using the standard AdS/CFT dictionary, this description is translated to the bulk gravity perspective. The latter describes the system in terms of (d+1)-dimensional Einstein gravity in an asymptotically AdS_{d+1} geometry coupled to a d-dimensional brane, which intersects the boundary at the location of the conformal defect - see figure 1f. According to the Randall-Sundrum (RS) scenario [62][63][64], the gravitational backreaction of the brane warps the bulk geometry, creating new localized graviton modes in its vicinity. This mechanism allows for the brane perspective, shown in figure 1e, where the system is described by an effective theory of Einstein gravity coupled to (two copies of) the holographic CFT on the brane, all coupled to the boundary CFT. In [1], we also considered introducing an intrinsic Einstein term to the brane action, analogous to the construction of Dvali, Gabadadze and Porrati (DGP) [65]. Hence our construction [1] provides a natural generalization to higher dimensions of the two-dimensional doubly-holographic setup considered in [15]. Let us also note that our model resembles the setup in [15] even more closely upon taking a Z_2 orbifold quotient across the brane. Further, we emphasize that while the three different perspectives were presented on a more or less equal footing, the fact that the RS gravity on the brane has a finite UV cutoff [62,63] singles out the brane perspective as an effective low-energy description, in contrast to the boundary and bulk descriptions. Again, the bulk gravity perspective allows us to calculate entanglement entropies of boundary regions geometrically with the usual rules of holographic entanglement entropy [16,17,59]. From the brane perspective then, quantum extremal islands simply arise when the minimal RT surfaces in the bulk extend across the brane for certain configurations. In this case, the entanglement entropy of the corresponding boundary region R is given by

$$S_{\rm EE}(R) = \underset{\Sigma_R}{\min\,{\rm ext}} \left[\, \frac{A(\Sigma_R)}{4G_{\rm bulk}} + \frac{A(\sigma_R)}{4G_{\rm brane}} \,\right], \qquad (1.2)$$

where Σ_R is the usual bulk RT surface, i.e., an extremal codimension-two surface in the bulk homologous to R. As argued in [1], when the brane supports an intrinsic gravitational action, we must also include a Bekenstein-Hawking area contribution for the brane region σ_R = Σ_R ∩ brane. This intersection of the RT surface with the brane becomes the boundary of the islands seen in the brane perspective. The equivalence between eqs. (1.1) and (1.2) can be easily understood as follows: the bulk term in eq. (1.2) describes the leading planar contributions to the entanglement entropy of the boundary CFT, and so matches the first term in eq. (1.1). However, expanding this geometric contribution near the brane also reveals a Bekenstein-Hawking term that matches the induced Einstein term in the effective gravitational action on the brane [1]. This contribution combines with the brane term in eq. (1.2) to produce the expected gravitational contribution appearing in eq. (1.1). In fact, the RT contribution also captures higher derivative contributions matching the Wald-Dong entropy [69][70][71][72] of the higher curvature terms appearing in the effective gravitational action [1]. Further, as discussed in [1], the competition between candidate quantum extremal islands, denoted by the 'min' in eq. (1.1), simply becomes the usual competition between different possible RT surfaces in the holographic formula (1.2), e.g., see figure 2.
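To get some intuition for this near-brane expansion, the following schematic is a heuristic sketch with O(1) coefficients suppressed (the precise expansion is carried out in [1]):

$$\frac{A(\Sigma_R)}{4G_{\rm bulk}} \;\supset\; \frac{A(\sigma_R)}{4G_{\rm RS}} + \big({\rm higher\ curvature\ Wald\text{-}Dong\ terms}\big)\,, \qquad \frac{1}{G_{\rm RS}} \sim \frac{L}{G_{\rm bulk}}\,,$$

so that the induced term combines with the explicit DGP contribution in eq. (1.2) into a single effective coupling, 1/G_eff ∼ 1/G_RS + 1/G_brane, multiplying the area of σ_R - precisely the gravitational term of eq. (1.1).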
In the following, we will study the question of quantum extremal islands for black holes in arbitrary dimensions using the purely geometric description (1.2) of the bulk gravity perspective. As emphasized in [1], the transition between the phase without an island and that with the island is nothing more than the usual transition between different classes of RT surfaces [73][74][75] - see figure 2. In particular, in the island phase, the RT surface crosses the brane so that a portion of the latter, i.e., the island, is included in the corresponding entanglement wedge. Thus the appearance of quantum extremal islands is simply described by a well-understood feature of holographic entanglement entropy in a new setting. The main advantage of our construction here and in [1] lies in its simplicity. As we will show, our framework allows us to carry the calculations remarkably far analytically, complementing previous approaches which heavily relied on numerics [30]. In our case, the numerics required to extract quantitative results are limited to solving a few ODEs. The remainder of this paper is organised as follows: In section 2, we review the bulk geometry and effective action of our model presented in [1], which is based on the Karch-Randall setup [64,[76][77][78][79][80][81]] for branes embedded in AdS. We also discuss the addition of a DGP term [65] to the brane action. For the two-dimensional bulk gravity case, we summarize the setup of [2], describe the connection to our model and introduce eternal black holes. In section 3, we construct eternal black holes on the brane in higher dimensions. As in the d = 2 case, these black holes are in equilibrium with the bath at finite temperature and so they do not evaporate. Nonetheless, there is a continuous exchange of radiation between the black hole and the bath, which has the potential to create an information paradox [2]. Hence, we use eq. (1.2) to investigate under which conditions islands appear. We present the general analysis for the time dependence of the entropy, exploring the island and no-island phases. In section 4, we develop the numerics associated to some integral equations found in the previous section and explicitly evaluate the Page curve for d = 3, 4 and 5. Section 5 examines an extremal horizon with a vanishing temperature, and finds that, in contrast to two dimensions [2], islands generally do not form in higher dimensions. However, this is not problematic, since at zero temperature the black hole and bath are not actually exchanging radiation and thus no information paradox arises. Details for the special case d = 2 appear in section 6. We review the induced action on the two-dimensional brane, including the introduction of JT gravity terms, given in [1]. We also evaluate the corresponding quantum extremal surfaces and the Page curve, and show that the brane cutoff produces subleading corrections compared to the results in [2]. Finally, in section 7, we discuss our results and point towards some future directions.

2 Braneworlds in higher dimensions

Let us review the holographic model discussed in [1]. Beginning with the bulk gravity perspective, our setup is described by (d+1)-dimensional Einstein gravity with a negative cosmological constant,

$$I_{\rm bulk} = \frac{1}{16\pi G_{\rm bulk}} \int d^{d+1}x \sqrt{-g} \left[ R(g) + \frac{d(d-1)}{L^2} \right], \qquad (2.1)$$

where g_ab denotes the bulk metric. We also introduce a codimension-one brane in the bulk with action

$$I_{\rm brane} = -(T_o + \Delta T) \int d^dx \sqrt{-\tilde g} \;+\; \frac{1}{16\pi G_{\rm brane}} \int d^dx \sqrt{-\tilde g}\; \tilde R(\tilde g)\,, \qquad (2.2)$$

where g̃_ij is the induced metric on the brane.
As well as the usual tension term, we have also introduced an intrinsic Einstein-Hilbert term in the brane action, in a manner analogous to Dvali-Gabadadze-Porrati (DGP) braneworld gravity [65]. We have separated the brane tension into T_o and ∆T, and will tune ∆T ∝ 1/G_brane so that the brane position is determined entirely by T_o. Adding the DGP term is a natural generalisation to higher dimensions of having JT gravity on a two-dimensional brane [15] - see section 6. Since the brane is codimension-one, the bulk geometry away from the brane locally takes the form of AdS_{d+1} with the curvature scale set by L. We will work in a regime where the induced geometry on the brane will be that of AdS_d space - see [1] for details - and so it is useful to consider the following foliation of the AdS_{d+1} geometry by AdS_d slices:

$$ds^2 = \frac{L^2}{\sin^2\!\theta} \left( d\theta^2 + ds^2_{{\rm AdS}_d} \right). \qquad (2.3)$$

The AdS_d metric is dimensionless with unit curvature. This metric would cover the entire AdS_{d+1} vacuum spacetime if we take 0 ≤ θ ≤ π. The solution for the backreacting brane is constructed by first cutting off the spacetime along an AdS_d slice near the asymptotic boundary θ = 0, i.e., at θ = θ_B ≪ 1, where θ_B is determined by the brane tension T_o - see below. Then, two such spaces are joined together along this surface, and the brane is realized as the interface between the two geometries. With this construction, the brane divides the bulk spacetime in half, but the backreaction of the brane has enlarged the geometry - see figure 3. In this case, the metric (2.3) can be used to cover a coordinate patch with θ_B ≤ θ ≤ π on either side of the brane. With the above construction, the induced geometry on the brane is simply AdS_d and, using the Israel junction conditions [1,85], one finds the curvature scale ℓ_B to be as given in eq. (2.4). For the most part, we will be interested in the regime where L²/ℓ_B² ≪ 1, or ε ≪ 1. As we will explain below, this ensures that the gravitational theory on the brane is essentially Einstein gravity. Implicitly in eq. (2.4), we have tuned the "shift" ∆T to produce an embedding of the brane that is independent of the DGP coupling G_brane, i.e., the brane location remains unchanged when we vary G_brane. This is achieved by the choice of ∆T given in eq. (2.5).

Figure 3: A timeslice of our Randall-Sundrum setup. In panel (a), we cut off the AdS_{d+1} spacetime along an AdS_d slice near the asymptotic boundary θ = 0, in the metric (2.3). Two of these spaces are glued together in panel (b) and the brane is realized as the interface between the two geometries.

The boundary perspective simply considers the dual description of the above gravitational system using the standard rules of the AdS/CFT correspondence. As described in [1], when considered in "global" coordinates, the dual solution is naturally the boundary CFT on a spherical cylinder R × S^{d−1} (where the R is the time direction). Further, there is also a codimension-one conformal defect positioned on the equator of the sphere, where the brane reaches the asymptotic boundary. The central charge of the boundary CFT is given by the standard expression c_T ∼ L^{d−1}/G_bulk, e.g., see [86], whereas the (d−1)-dimensional CFT of the conformal defect has c̃_T ∼ ℓ_eff^{d−2}/G_eff ≫ c_T. Similarly, one can consider the ratio of the couplings in the defect and bath CFTs. We arrive at the brane perspective by replacing the conformal defect in the boundary perspective by its gravitational dual.
Hence this description includes the boundary CFT on the asymptotic AdS_{d+1} boundary, but also two copies of the boundary CFT on the brane, as dictated by the usual Randall-Sundrum (RS) scenario. Of course, the latter is an effective theory with a finite UV cutoff set by the position of the brane, e.g., see [87] and references therein. Further, new (nearly) massless graviton modes localized in the vicinity of the brane appear, and so the brane also supports a gravitational theory. We can think that integrating out the brane CFT (or the bulk gravity) induces an effective gravitational action on the brane of the form given in eq. (2.6) [1], where ε is given in eq. (2.4). Note that in the regime of interest (i.e., ε ≪ 1), we have ℓ_eff ≈ ℓ_B. Hence to leading order, the above gravitational theory (2.6) corresponds to Einstein gravity coupled to a negative cosmological constant. In the second line of eq. (2.6), we show the first of a(n infinite) sequence of higher curvature corrections, involving powers of L² × curvature. Since the gravitational equations of motion set the curvatures to be roughly 1/ℓ_eff² (at least for the background of interest), the contribution of these terms is highly suppressed since we work in the regime where L²/ℓ_eff² ≪ 1. (A more careful examination in [1] showed that the gravitational theory on the brane is well approximated as semiclassical Einstein gravity when L²/ℓ_eff² ≪ 1 for λ_b > 0, but requires L²/ℓ_eff² ≪ 1 + λ_b for λ_b < 0. The latter constraint is replaced by L²/ℓ_eff² ≪ (1 + λ_b)² for the special case of λ_b < 0 and d = 3.) Lastly, let us note that 1/G_RS is the standard RS gravitational coupling induced in the absence of a DGP term, i.e., λ_b = 0. It turns out that in the case of a brane theory with negative cosmological constant, like the one we are considering here, the graviton acquires a mass [64,[76][77][78][79][80][81]]. For small brane angles, the graviton mass is proportional to some power of the brane angle [80,81] and thus vanishes as we take the zero-angle limit. It was suggested in [33] that this mass is a crucial ingredient for islands to exist, since the limit of vanishing graviton mass coincides with a limit in which islands cannot be created since their area becomes infinite. Alternatively, it is possible that in the Karch-Randall model, the graviton mass simply depends on the effective gravitational coupling on the brane, and is thus correlated with the island size, but not responsible for the island.

2.2 Two dimensions and black holes

In two dimensions, we need to revisit our setup to obtain an accurate effective brane action and to make the connection to [2]. First, there are factors of 1/(d − 2) appearing in eq. (2.6), which indicate that the bulk integration analysis leading to this result must be reconsidered for d = 2. As reviewed in section 6, we find that the induced brane action is non-local, a signature of the trace anomaly. In addition, the two-dimensional analogue of the DGP brane action is a JT gravitational action localized on the brane. Having accounted for these changes, we may relate our setup directly to that of [2], which we now briefly review. Ref. [2] interprets the two Rindler patches of AdS_2 as the exteriors of an eternal nonzero-temperature black hole and subsequently considers coupling each exterior to a flat half-space, constituting a bath region. A matter CFT_2 spans both the bath and AdS_2 regions, and JT gravity is placed on the AdS_2 region.
Invoking AdS_2/CFT_1, this setup is alternatively described by the thermofield double (TFD) state of a BCFT living on two half-lines (the bath regions) coupled to quantum mechanics (dual to the AdS_2 spacetime) on the boundaries of the half-lines. The authors then compute the entanglement entropy of a region consisting of intervals on both sides of the TFD, including the defect and with endpoints in the bath regions. From the AdS_2 perspective, this entropy is obtained using eq. (1.1), allowing for the possibility of islands in the AdS_2 spacetime. In particular, this gives rise to a competition between a no-island phase and an island phase, with the former dominating at early times and the latter at late times. In the island phase, quantum extremal surfaces (QESs) appear in the AdS_2 spacetime just outside the horizon, marking the boundaries of an island stretching through the AdS_2 wormhole, which now belongs to the entanglement wedge of the bath complement of the intervals. Let us return to our braneworld to see how our setup mimics that of [2] described above. From the bulk perspective, we have an AdS_3 spacetime with a brane lying along an AdS_2 slice (fig. 4). We may reproduce the AdS_2 black hole on the brane by taking Rindler-AdS coordinates in the AdS_3 bulk - this equips the AdS_3 bulk with a horizon and 'left' and 'right' exterior regions. The resulting picture is that of a Hartle-Hawking state prepared by the Euclidean path integral drawn in fig. 5. The Rindler-AdS_3 coordinates also induce a horizon on the brane. In fact, the geometry of the brane is itself Rindler-AdS, supporting a dilaton profile Φ ∝ ρ. In the brane perspective, we then have a CFT spanning the left and right asymptotic boundary regions - the baths - and the Rindler-AdS_2 brane, which also supports a theory of JT gravity. Illustrated in figure 6, this is essentially the same setup as in [2], up to a Z_2 quotient across the brane. We may alternatively take the boundary perspective, wherein the bulk AdS_3 plus brane theory is dual to a CFT_2 plus defect theory. More precisely, the Euclidean path integral preparing the Hartle-Hawking bulk is equated to a thermal path integral preparing a TFD state of two copies of a CFT_2 with a defect running through its middle. We are thus led to the boundary picture drawn in figure 7. Taking a Z_2 quotient across the defect, this, of course, is the alternative description of the setup in [2] as a thermal BCFT coupled to quantum mechanics. With our setup in place, we can then consider subregions of the boundary CFT and use the RT formula (1.2) to compute the corresponding entanglement entropies. Analogous to [2], we choose 'belt' subregions consisting of intervals symmetric about the defect. The details of the resulting entropy calculation in two dimensions are provided in section 6. The upshot is that we find a competition between a no-island phase and an island phase, as sketched in fig. 2, with the former dominating at early times and the latter past a Page time. Notice that these phases are analogous to the no-island and island phases of [2], with the QESs now demarcated by the intersection between our bulk RT surface and the brane. Namely, it is clear from the bulk picture shown in the right panel of fig. 2 that the island region between these intersection points belongs to the entanglement wedge of the bath region complementary to the belt. In section 6, we also explicitly demonstrate that our bulk RT calculation using eq.
(1.2) precisely reproduces the results of [2], in the limit where the brane approaches the would-be AdS_3 boundary by slicing through the bulk at a small brane angle θ_B (that is, the high-tension limit of higher dimensions). For early times, we find that the entanglement entropy grows linearly in the no-island phase as 4πct/(3β) (see eq. (6.38)), whereas for late times it is dominated by the island and given by a constant, (Φ_0 + Φ_r)/(2G_brane) (see eq. (6.51)). Thus, as in [2], the appearance of an island caps off the entropy growth at the expected coarse-grained entropy of two copies of the black hole on the brane, rescuing the system from a potential information paradox (the resulting Page curve is shown in fig. 24). While we find perfect agreement with [2] at leading order in θ_B, we also find corrections to these results due to the brane imposing a UV cutoff at finite θ_B. The result is O(θ_B²) corrections which, for instance, push the QES further from the horizon, lower the entropy of the island phase, and lead to a hastened Page transition. (Note that, in the no-island phase, no such corrections appear, as the bulk RT surface does not intersect the brane.) It would be straightforward to use our setup to perform the zero-temperature analysis also covered in [2] for d = 2. Here one would instead take Poincaré coordinates, which would equip the AdS_3 bulk and AdS_2 brane with an extremal horizon. We then expect the entanglement entropy of large regions in the bath to require the inclusion of islands on the gravitating brane. In particular, intervals stretching from some location in the bath out to infinity require the inclusion of an island localized around the horizon. (This is to be contrasted with our findings in d ≥ 3, where islands are lacking in the extremal case at small brane angle θ_B.) The benefit of our Randall-Sundrum setup is that it allows great flexibility in generalizing the construction of [2] to higher dimensions. Indeed, it is straightforward to re-interpret figures 4, 6, and 7 with a suppressed hyperbolic H_{d−2} direction. In the following sections, we shall apply our setup to extend the results mentioned here to higher dimensions.

3 Black hole in equilibrium with an external bath

In this section, we discuss how islands arise in the presence of certain topological, non-extremal black holes in higher-dimensional brane-world models. Topological black holes are characterized as having nontrivial horizon topology, and we will be interested in the case of neutral black holes with a hyperbolic horizon [88,89]. The general metric is given by

$$ds^2 = -\frac{L^2}{R^2}\, f(r)\, dt^2 + \frac{dr^2}{f(r)} + r^2\, dH^2_{d-1}\,, \qquad (3.1)$$

with the blackening factor

$$f(r) = \frac{r^2}{L^2} - 1 - \frac{\omega^{d-2}}{r^{d-2}}\,. \qquad (3.2)$$

Here, L denotes the AdS curvature scale and dH²_{d−1} denotes the line element on a (d−1)-dimensional hyperbolic plane with unit curvature. This bulk geometry (3.1) is dual to a thermofield double (TFD) state for two copies of the boundary CFT [90], where each resides on a spatial geometry H^{d−1}. After an appropriate Weyl rescaling, the boundary metric for each CFT reads

$$ds^2_{\rm bdy} = -dt^2 + R^2\, dH^2_{d-1}\,, \qquad (3.3)$$

and hence the scale R (introduced in eq. (3.1)) corresponds to the curvature scale of the spatial geometry. The full boundary geometry is then two copies of R × H^{d−1}, where the R factor corresponds to the time direction in each of the CFTs. Turning back to eq. (3.1), the relation between the position of the horizon r_h, the black hole mass M, and the 'mass' parameter ω is given in eq. (3.4) [84,91,92]; in particular, setting f(r_h) = 0 yields ω^{d−2} = r_h^{d−2} (r_h²/L² − 1). Here and in the following, we use vol(H_{d−1}) to denote the dimensionless volume of the spatial boundary geometry, i.e., the volume measured by the metric dH²_{d−1}.
Of course, this volume is infinite and we must introduce an infrared regulator - see below. In the following, we will consider the special case of a topological black hole with vanishing mass, M = ω = 0. Note that despite the fact that ω = 0, we still find a horizon, at r_h = L, from eq. (3.2). In fact, the bulk geometry corresponds to the AdS vacuum (as expected for M = 0), but we are describing this geometry with AdS-Rindler coordinates, where the metric resembles that of a black hole [93]. In this case, it is straightforward to evaluate the entropy and the temperature of the black hole,

$$S = \frac{{\rm vol}(H_{d-1})\, L^{d-1}}{4G_{\rm bulk}}\,, \qquad T = \frac{1}{2\pi R}\,. \qquad (3.5)$$

In terms of the dual CFT, we are considering a pure state (i.e., the vacuum) in the conformal frame where the boundary geometry corresponds to R × S^{d−1}. However, with an appropriate conformal transformation, we produce the TFD state on two copies of R × H^{d−1} with temperature T = 1/(2πR) [93]. The entropy in eq. (3.5) corresponds to the entanglement entropy between the two copies of the CFT - and alternatively, can be interpreted as the entanglement entropy between two halves of the sphere in the original conformal frame. From the point of view of the CFT, masslessness of the black hole corresponds to a fine-tuning of the temperature to T = 1/(2πR). Following the brane-world construction outlined in the previous section, we locate a codimension-one defect at the center of each CFT. By the holographic dictionary, this corresponds to a brane which cuts through the bulk and orthogonally intersects the horizon - see figure 4. Since, with ω = 0, the bulk geometry is just the AdS vacuum, our previous discussion of the brane geometry (above and in [1]) is still applicable. Hence, the brane position in the bulk is determined precisely as described above in terms of the brane tension T_o. In fact, this bulk geometry provides a higher dimensional generalization of the construction discussed in section 2.2, and we will see that the brane inherits a black hole metric with temperature T = 1/(2πR) from the AdS-Rindler coordinates in the bulk. Our aim will be to use eq. (1.2) to investigate the appearance of quantum extremal islands from the brane perspective, where (two copies of) the boundary CFT are supported in this black hole geometry on the brane. Further, we will compute the entanglement entropy associated to symmetric regions R on each side of the defect as a function of time - see figure 2. The regions R of interest consist of those points on a CFT timeslice which are further than a distance χ = χ_Σ away from the defect. The entanglement entropy is evaluated using the holographic prescription of the bulk perspective and, as described in the introduction, the corresponding RT surfaces can be in one of two phases: either they connect through the horizon, which we will call the no-island phase, or they connect through the brane, which we will call the island phase. The reason for those names is apparent from the d-dimensional effective gravity on the brane, i.e., the region bounded by the intersection of the RT surface and the brane is a quantum extremal island, which now contributes to the entropy of R. This also implies that from the (d+1)-dimensional bulk perspective, the appearance of islands is simply explained as a standard phase transition of an RT surface. We will see in the remainder of this section that at early times, the RT surface starts out in the no-island phase, i.e., connects through the horizon. As is well known [94], the volume of the corresponding surfaces grows linearly with time.
At some point its volume will have grown so large that the RT surface in the island phase has smaller area and gives the correct entanglement entropy. The calculation of the time-dependence of the area of RT surfaces will proceed in two steps. In sections 3.2 to 3.4, we will derive expressions for the area of three special cases of extremal surfaces. The first will be RT surfaces in the island phase anchored at Rindler time τ_Σ = 0. The second and third special cases will be RT surfaces Σ_R in the no-island phase which either end on entangling surfaces ∂R at χ = ±χ_Σ and τ_Σ = 0, or end on entangling surfaces located at the defect (χ_Σ = 0) at arbitrary τ_Σ. While these special cases might naively seem not to contain enough information to completely reconstruct the time evolution of the entanglement entropy, we will argue in section 3.5 that the time evolution of any symmetric RT surface in the no-island phase can always be reduced to one of those three cases. We remind the reader that, as described in section 2.2, we are considering eternal black holes which do not evaporate. Nonetheless, from the effective brane point of view, the black hole on the brane and the fields on the asymptotic boundary are in contact, and can therefore continuously exchange radiation. If islands are not accounted for appropriately, this leads to information loss [15]. In section 3.6 we will argue, using results obtained below, that also in higher dimensions the presence of islands makes the entanglement dynamics of the joint system of black hole and radiation compatible with unitarity.

3.1 Geometry on the brane

To set the stage for the following calculations, we will start by discussing the bulk and brane geometry. As noted above, the bulk metric is described by AdS-Rindler coordinates, obtained from eq. (3.1) by taking the massless limit ω, M → 0 and rescaling the coordinates t → Rτ and r → Lr, such that the coordinates in

$$ds^2 = L^2 \left[ -(r^2 - 1)\, d\tau^2 + \frac{dr^2}{r^2 - 1} + r^2\, dH^2_{d-1} \right] \qquad (3.6)$$

are dimensionless. Although the underlying geometry is simply the AdS vacuum, the metric (3.6) resembles a black hole metric with horizons at r = ±1 and an apparent singularity at r = 0. We can also extend the spacetime at a fixed time-slice through the bifurcation surface and arrive at a second Rindler wedge. The bulk spacetime thus has two asymptotic regions, located at r → ∞, each of which hosts one copy of the boundary CFT on the R × H^{d−1} geometry. As noted above (in terms of the dimensionful coordinates), the corresponding TFD state has a (dimensionful) temperature T = 1/(2πR), which is tuned in relation to the curvature scale R of the hyperbolic geometries (3.3). Lastly, note that since the Rindler wedges are simply a reparametrization of pure AdS, it is clear that the singularity at r = 0 is only a coordinate singularity. In fact, we can extend the coordinates smoothly through the interior to negative r, where we can exit the region behind the (inner) horizon at r = −1 and enter a new set of Rindler wedges. For each CFT, we introduce a codimension-one conformal defect (with zero extrinsic curvature) at the center of the hyperbolic spatial geometry. It is convenient to choose slicing coordinates for the hyperbolic boundaries, such that

$$dH^2_{d-1} = d\chi^2 + \cosh^2\!\chi\; dH^2_{d-2}\,. \qquad (3.7)$$

In these coordinates, the location of the conformal defect is χ = 0. From the bulk perspective, the CFT defects are dual to a codimension-one brane, which spans a slice of constant extrinsic curvature in the bulk spacetime and intersects the asymptotic boundary at the location of the CFT defect.
In order to describe its trajectory, it is convenient to write the bulk metric in terms of the slicing coordinates in eq. (2.3). The brane is located at constant θ = θ_B, which is determined by the tension through eq. (3.8). The trajectory of a hypersurface of constant θ_B in the bulk spacetime is then given by

$$r \sinh\chi = \cot\theta_B\,. \qquad (3.9)$$

As noted in [1], this means that a brane with positive tension (i.e., T_o ≥ 0) creates additional geometry through its backreaction. Of course, the backreaction of a negative-tension brane would remove geometry. However, let us add that there is no (nearly) massless graviton induced on a negative-tension brane (we thank Raman Sundrum for explaining this point to us), and therefore we will only consider positive tensions in the following, i.e., 0 ≤ θ_B ≤ π/2. For such a (positive-tension) brane, the bulk geometry to one side of the brane can be described by eq. (3.6), with r sinh χ ≤ cot θ_B, while the geometry to the other side of the brane is given by the same metric with r sinh χ ≥ − cot θ_B. We can therefore treat either side of the brane as an AdS-Rindler geometry which is cut off by the brane. Using eq. (3.9), we can determine the induced metric on the brane. After a short calculation, one finds

$$ds^2_{\rm brane} = \ell_B^2 \left[ -(\rho^2 - 1)\, d\tau^2 + \frac{d\rho^2}{\rho^2 - 1} + \rho^2\, dH^2_{d-2} \right], \qquad (3.10)$$

where we have changed the radial coordinate with

$$\ell_B^2\, (\rho^2 - 1) = L^2\, (r^2 - 1)\,. \qquad (3.11)$$

This brane metric again takes the form of an AdS-Rindler metric, c.f. eq. (3.6). Further, this demonstrates that the Rindler horizon in the bulk (at r = 1) induces a Rindler horizon on the brane (at ρ = 1), as one would expect from the bulk perspective. From the boundary perspective, this behavior is readily explained by the fact that the conformal defect is in thermal equilibrium with the surrounding CFT. In the effective Randall-Sundrum description of the brane perspective, this behaviour arises because the region of dynamical gravity is coupled to the bath CFT along an accelerated trajectory, so that the temperature felt by the accelerated boundary agrees with the temperature of the CFT, e.g., see [95][96][97]. As already mentioned, this setup generalizes the two-dimensional framework presented in [2] to higher dimensions. All calculations below will be done for the case of positive-tension branes. However, when it comes to interpretation, we will be particularly interested in the case where θ_B ≪ 1, i.e., L ≪ ℓ_B, for which the brane theory is well described as Einstein gravity coupled to two copies of the boundary CFT (with a high cutoff). The reason is that in this limit, we can interpret the intersection of the brane and the RT surface as bounding an island in this effective gravitational theory.

3.2 Island phase at τ_Σ = 0

We will start our analysis by calculating the area of the RT surface for an entangling surface lying in the τ = τ_Σ = 0 plane, with the RT surface crossing the Planck brane. In other words, the RT surface is in the connected phase - see figure 8. We are interested in the entanglement entropy of R, comprising the combined regions χ > χ_Σ and χ < −χ_Σ in both the left and right CFTs. Hence the entangling surfaces of interest have two components (in each CFT), sitting a constant distance away from the defect at χ = ±χ_Σ. We note that the induced volume element on these surfaces is proportional to cosh^{d−2}χ_Σ. In two dimensions, the analysis of the RT surfaces is simplified because the metric (3.6) has a shift symmetry χ → χ + const, but the latter is absent in higher dimensions.
However, we can find a similar simplification by going to a different coordinate system (ζ, ξ), defined via [1,98]

$$1 + \zeta^2 = r^2 \cosh^2\!\chi\,, \qquad (3.12)$$

such that the horizon is located at ξ = ±π/2. By time-translation invariance, we know that the RT surface lies on a constant Rindler time slice, and hence we consider the metric on the τ = 0 slice in the new coordinates, which reads as in eq. (3.13). (Note that the full metric takes the form ds² = −L² ζ² cos²ξ dτ² + ds²_E, and hence the shift symmetry does not extend to the full spacetime metric.) The geometry of this spatial slice (or of any constant-τ slice) is invariant under ξ → ξ + const, which will simplify the following. Making the ansatz ζ = ζ(ξ) for the profile of the RT surface, the induced metric on these surfaces takes the form of eq. (3.14), with the metric determinant of eq. (3.15). To obtain the correct RT surface, we now need to extremize the area functional (3.16) subject to the correct boundary conditions. Here, a few observations are in order. The boundary condition is determined by the RT surface ending at the entangling surface on both sides of the defect. Alternatively, since our setup is reflection-symmetric across the brane, we can also consider a family of bulk extremal surfaces which end on the brane, and vary with respect to the point of intersection of the brane and the RT surface [1]. Even in higher dimensions, this variation takes a fairly simple form (see eq. (3.22) below), since extremizing the RT surface can be cast as an effectively two-dimensional problem with the metric of eq. (3.17). Note that the area functional does not explicitly depend on ξ. Rather, ξ plays the role of an angular coordinate and its associated Hamiltonian is conserved. This allows us to turn the second-order equation which determines extremal surfaces into a first-order expression, eq. (3.18), where we have introduced ζ_*, the turn-around point for ζ as a function of ξ - see figure 8. The sign depends on whether ζ is going towards (+) or away from (−) the boundary as ξ increases. In the latter case, where the RT surface does not turn around before it intersects the brane, we have to think of ζ_* as a coordinate of vacuum AdS extended past the brane, as shown in figure 9. More generally, the sign starts out negative and generically flips after ζ = ζ_* has been reached. The area functional for the RT surfaces satisfying eq. (3.18) then becomes that of eq. (3.19), where here and below we use the subscript QES to mark the coordinates of the intersection between the RT surface and the brane, which corresponds to a quantum extremal surface in the brane theory. The upper limit of integration, indicated as ∞, must be regulated, since the area of the RT surface is infinite. The sign here is the same sign as in eq. (3.18). We have also included a factor of four, since there is one RT surface to each side of the defect and, considering both CFTs, we need to multiply the result by another factor of two. Eq. (3.18) yields a family of RT surfaces (parameterized by ζ_*) which are locally extremal in the bulk away from the brane. However, fully extremizing the area functional (3.16) requires that we also extremize over the possible locations where these candidate surfaces intersect the brane. That is, we consider the extremization condition, eq. (3.20), of the RT surface's area (plus possibly the area of the QES, should there be extra DGP gravity) with respect to the position of the intersection σ_R, where the two contributions reflect the two contributions in eq. (1.2). Here, ρ_QES denotes the location of σ_R in the coordinates along the brane in eq. (3.10).
As described in [1], this extremization leads to a boundary condition restricting the angle at which the RT surface meets the brane. Normally, this would be a difficult problem in higher dimensions. However, here we are leveraging the hyperbolic symmetry along the transverse directions, which reduces the present case to a two-dimensional problem. That is, we need only extremize a one-dimensional profile ζ(ξ) of the RT surface in the effective two-dimensional geometry given by eq. (3.17). Assuming that we consider an extremal bulk surface which is anchored at the asymptotic boundary, the variation of the surface's area with respect to its intersection point with the brane is given by eq. (3.21), where h_ij is the two-dimensional metric (3.17) and T^i is a normalized (w.r.t. h_ij) tangent vector to the RT surface, which can be obtained from eq. (3.9). The vector X^i determines the variation along the brane. In the absence of a DGP gravity term in the action, this variation must vanish for X^j along the brane; hence we have a boundary condition which sets the RT surface perpendicular to the brane. More generally, we must balance the above variation against the variation of the entropy contribution intrinsic to the brane, as can be seen from eq. (3.20). The first contribution to eq. (3.20) is then calculated using eq. (3.21) and yields eq. (3.22), which is evaluated at ζ = ζ_QES. Here we have used the brane angle θ_B defined in eq. (3.8). If the brane DGP coupling is turned on, the variation of the area also obtains a contribution from the second term in eq. (3.20), given in eq. (3.23). Substituting eqs. (3.22) and (3.23) into eq. (3.20), we obtain a relation, eq. (3.24), between the QES position ζ_QES and the deepest point ζ_* reached by the RT surface, where λ_b was defined in eq. (2.7). A final relation, eq. (3.25), associating ζ_QES with the belt width ξ_Σ comes from integrating eq. (3.18) from the boundary to the brane. After using eq. (3.24), this can then be rewritten as a relation between the location of the entangling surface ξ_Σ and the QES position ζ_QES only, if we further use eq. (3.9) together with eq. (3.12) to find the brane trajectory in the (ζ, ξ) coordinates and determine the relationship, eq. (3.26), between ξ and ζ on the brane. In section 4, we will use eqs. (3.19), (3.24) and (3.25) to produce the late-time part of the Page curve for a topological black hole coupled to a bath in higher dimensions.

3.3 No-island phase for τ_Σ = 0

We can use the result of the previous subsection to obtain a solution for the no-island phase. The first-order equation (3.18) (where we choose the minus sign) again determines the shape of the extremal surface. By symmetry, we know that ζ_* must lie on the bifurcate horizon and is thus determined by solving eq. (3.27). Here we have implicitly chosen to perform the calculation in the asymptotic CFT which sits at negative ξ, i.e., on a particular side of the brane. By symmetry, the calculation on the other side of the brane yields the same result. The total area of the two RT surfaces which connect both CFTs through the horizon is then given by eq. (3.28), with ζ_* given by eq. (3.27). In the case of a small brane angle θ_B, this phase always dominates at early times. The reason is that the RT surface in the competing phase, i.e., the phase where the RT surface crosses the brane, has to travel a large distance to the brane before it can return to the asymptotic boundary across the brane. This additional distance can be made arbitrarily small by choosing a small enough brane angle.
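As a qualitative illustration of this competition, the following toy Python sketch compares a linearly growing no-island area against a constant island-phase area and takes the minimum; the slope and constant are arbitrary placeholders standing in for the areas of eqs. (3.28) and (3.19), so only the shape of the resulting Page curve is meaningful.

# Toy illustration of the competition between the two RT phases.
# SLOPE and S_ISLAND are arbitrary placeholders, not the actual areas
# of eqs. (3.28) and (3.19).
import numpy as np

SLOPE = 2.0      # stand-in for the linear growth rate of the no-island phase
S_ISLAND = 10.0  # stand-in for the constant island-phase entropy

def no_island(tau):
    """No-island phase: the RT surface threads the horizon and grows linearly."""
    return SLOPE * tau

def island():
    """Island phase: the RT surface crosses the brane; time independent."""
    return S_ISLAND

taus = np.linspace(0.0, 10.0, 21)
page_curve = np.minimum(no_island(taus), island())
page_time = S_ISLAND / SLOPE  # crossing point of the two branches
print(f"toy Page time: tau = {page_time}")
for tau, s in zip(taus, page_curve):
    phase = "no-island" if no_island(tau) < island() else "island"
    print(f"tau = {tau:4.1f}   S = {s:5.1f}   ({phase})")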
We will furthermore see in section 3.5 how the time evolution of an RT surface at early times can be mapped to this case.

3.4 No-island phase for χ_Σ = 0

Lastly, we will consider the case of a zero-width belt, i.e., the case where the location of the entangling surface is taken towards the defect, so that the RT surface falls straight through the bulk along the constant boundary slicing coordinate χ = χ_Σ, c.f. figure 10. Note that this setup is essentially the same as that considered in [94], which studied the entanglement entropy of identical half-spaces on the two sides of a time-evolved TFD. Due to the symmetry, the trajectory of the RT surface is determined by its radial coordinate r as a function of time τ. However, it is convenient to introduce Eddington-Finkelstein coordinates to avoid the coordinate singularity at r = 1. Hence, describing ingoing null rays, we have

$$v = \tau + r_{\rm tor}(r)\,, \qquad (3.29)$$

where r_tor(r) denotes the usual tortoise coordinate. Note that with the above definitions, r_tor(r → ∞) → 0 and hence v = τ at the asymptotic AdS boundary. Then the metric becomes

$$ds^2 = L^2 \left[ -(r^2 - 1)\, dv^2 + 2\, dv\, dr + r^2\, dH^2_{d-1} \right]. \qquad (3.30)$$

Figure 10: The RT surface of an entangling surface located at the defect in the no-island phase.

Now the extremal surface will fall from the asymptotic boundary, through the exterior, across the Rindler horizon, reaching a minimal radius at r_*, within the interior. Then the surface will continue, emerging into the second exterior region. Due to the reflection symmetry, we need only track the trajectory of the RT surface until it reaches r_*. Using eq. (3.30), the area functional can be written as

$$A = 4 L^{d-1}\, {\rm vol}(H_{d-2}) \int_{\lambda_*}^{\lambda_{\rm UV}} d\lambda\; r^{d-2} \sqrt{-(r^2-1)\, \dot v^2 + 2\, \dot v\, \dot r}\,, \qquad (3.31)$$

where λ is a radial coordinate intrinsic to the surface, which increases along the surface moving from the left asymptotic AdS boundary to the right boundary. The limits of integration here correspond to λ_*, the value at the minimal radius r_*, and λ_UV, the value at the UV cutoff near the right boundary - see figure 10. We have also included a factor of 4 to account for the fact that we only integrate from the Z_2-symmetric point λ_* out to the right boundary, and for the fact that there are two such RT surfaces, one on either side of the brane. Of course, we have also integrated out the directions along the belt, i.e., along the H_{d−2}. Now, we fix the reparametrization symmetry of the area functional with the following convenient gauge choice:

$$\sqrt{-(r^2-1)\, \dot v^2 + 2\, \dot v\, \dot r} = r^{d-2}\,. \qquad (3.32)$$

The integrand in eq. (3.31) is independent of v and so we have a conserved 'v-momentum',

$$P_v = \frac{r^{d-2}\, \big( \dot r - (r^2-1)\, \dot v \big)}{\sqrt{-(r^2-1)\, \dot v^2 + 2\, \dot v\, \dot r}} = \dot r - (r^2-1)\, \dot v\,, \qquad (3.33)$$

where the second expression results from substituting in the gauge choice (3.32). Using eqs. (3.32) and (3.33) to solve for ṙ and v̇, we find

$$\dot r = \sqrt{P_v^2 + (r^2-1)\, r^{2(d-2)}}\,, \qquad \dot v = \frac{\dot r - P_v}{r^2 - 1}\,. \qquad (3.34)$$

Note that we have implicitly chosen a positive sign for ṙ, indicating that r is increasing as we move along the surface out towards the asymptotic boundary. An intuitive picture of the dynamics of the extremal surfaces is given by recasting the ṙ equation above as a Hamiltonian constraint,

$$\dot r^2 + U(r) = P_v^2\,, \qquad (3.35)$$

where the effective potential is given by

$$U(r) = -(r^2 - 1)\, r^{2(d-2)}\,. \qquad (3.36)$$

In this framework, P_v² plays the role of the conserved energy and the minimum radius r_* corresponds to the turning point where ṙ = 0, i.e.,

$$P_v^2 = U(r_*)\,. \qquad (3.37)$$

The area (3.31) of the extremal surface becomes

$$A = 4 L^{d-1}\, {\rm vol}(H_{d-2}) \int_{r_*}^{r_{\rm UV}} dr\; \frac{r^{2(d-2)}}{\sqrt{P_v^2 + (r^2-1)\, r^{2(d-2)}}}\,, \qquad (3.38)$$

using eqs. (3.32) and (3.34). Note that r_UV denotes the position of the UV cutoff surface near the asymptotic AdS boundary. With eq. (3.37), the extremal surface can be specified by the integration constant P_v or by the boundary condition r_*. However, we want to examine the time evolution of the entanglement entropy and so we must determine a relation between these constants and the boundary time. In particular, using eq.
(3.34), we can integrate out to the right boundary to determine v_bound, as in eq. (3.39), where v_* denotes the value of the Eddington-Finkelstein time at the turning point. However, because of the Z_2 symmetry of the extremal surface, we know that the turning point lies on the surface t = 0, and so we may use eq. (3.29) to write v_* in terms of r_*, as in eq. (3.40). Further, we know that v_bound = τ[P_v], and hence we find the relation in eq. (3.41). Note that the integrand is nonsingular in the vicinity of the horizon, i.e., near r = 1. The time derivative of the area (3.31) admits a very simple form,

$$\frac{dA}{d\tau} = 4 L^{d-1}\, {\rm vol}(H_{d-2})\, P_v\,, \qquad (3.42)$$

where τ is the boundary time parameter. Further, we also observe that the critical radius where ∂_r U = 0 is given by

$$r_c^2 = \frac{d-2}{d-1}\,. \qquad (3.43)$$

At late times, the turning point is very close to this critical radius, i.e., r_* ≈ r_c (eq. (3.44)); the extremal surface lies near the surface r = r_c for a long time, and so we can replace r_* → r_c in eq. (3.42). Hence we expect that the growth of the area is fixed at late times, approaching the constant rate set by P_v² = U(r_c). As we will see momentarily, the late-time behavior of the entropy of any subregion bounded by constant χ in the no-island phase is determined by a zero-belt-width calculation. Thus, as in the two-dimensional case studied in [34] (as well as the higher-dimensional case [30]), the entropy corresponding to the no-island phase grows without bound.

3.5 Time evolution for general (χ_Σ, τ_Σ)

Given the region R of interest (recall that R consists of all points more than a distance χ_Σ away from the defect in both CFTs), we can ask how the RT surface changes under time evolution. If we are in the island phase, the RT surface is completely contained inside the Rindler patch, so that time translations are a symmetry and the entropy is a constant. On the other hand, in the no-island phase, the RT surface connects to both bath CFTs. Forward time evolution of both sides is not a symmetry, and the area of the RT surface changes. Obtaining RT surfaces in the no-island phase which are anchored on symmetric entangling surfaces of arbitrary width and at arbitrary times is generally difficult in higher dimensions. However, as we will now show, our choice of entangling surfaces with the hyperbolic symmetry of H_{d−2} allows us to map the RT surface at any (χ_Σ, τ_Σ) either to some RT surface in the τ = 0 slice, i.e., with (χ_Σ, τ_Σ = 0), or to the case where the entangling surface is at χ = 0, i.e., with (χ_Σ = 0, τ_Σ). In particular, this means that the solutions obtained in the last two subsections are sufficient to discuss the full time evolution of the symmetric entangling surfaces of interest. The strategy we will employ in this section is the following. We will perform a coordinate change from Rindler space to a particular Poincaré coordinate system defined below. In the new coordinates, the entangling surfaces are straight lines. By exploiting the boost symmetry of the Poincaré patch and mapping back to Rindler space, the task of calculating the entanglement entropy of a subregion with belt width χ_Σ at time τ_Σ can be reduced to one of the cases discussed in sections 3.3 and 3.4. To understand the required coordinate changes, it is convenient to embed AdS_{d+1} into R^{d,2}, i.e., we are looking for a parametrization of (parts of) the hyperboloid defined via

$$-X_{-1}^2 - X_0^2 + X_1^2 + \ldots + X_d^2 = -L^2\,. \qquad (3.45)$$

Our original two Rindler patches correspond to the parametrization of eq. (3.46), where the µ_i denote further angular coordinates, e.g., µ_3 = cos φ_1, µ_4 = sin φ_1 cos φ_2, . . ., which, together with η, parametrize the H_{d−2} slice of the metric (3.7). The AdS boundary is located at r → ∞, and each sign corresponds to one of the two Rindler wedges. On a fixed r slice, we can reach the boundary by taking χ → ±∞ or η → ±∞.
For 15 Recall that R consists of all points more than a distance χ Σ away from the defect in both CFTs. any constant Rindler time (i.e., fixed τ ), the bifurcation surface reached with r → 1. The defect in the CFT is located at χ = 0 = X 2 . The entangling surfaces are defined to be at χ = ±χ Σ in both CFTs. We will now consider a particular Poincaré coordinate system, which covers both Rindler wedges and is defined in terms of embedding coordinates as In these coordinates, the bifurcation surface intersects the boundary (z → 0) atx 1 =t = 0, 16 while the defects are located atx 2 = 0. The two CFTs are mapped to the regionsx 1 > 0 andx 1 < 0, respectively. We will denote the CFT atx 1 > 0 as the right CFT, and the one atx 1 < 0 as the left CFT. Comparing eqs. (3.46) and (3.47) in the boundary limit, it is easy to see that the entangling surfaces in the right CFT get mapped tõ This shows the convenient property of the new Poincaré coordinates: entangling surfaces lie along rays (i.e., straight lines) in the positive half-space withx 1 > 0, whose slope depends on the spatial location χ Σ and the Rindler time τ Σ at which the entangling surfaces are defined. Further, flipping the sign ofx 1 to −x 1 in the above expressions yields the entangling surfaces in the left CFT. The relation between the Rindler coordinate given in eq. (3.46) and the new Poincaré coordinates of eq. (3.48) is illustrated in figure 11. We now need to choose cutoffs in order to regulate the area integrals of the RT surfaces. First, we need to regulate the UV divergence in the entanglement entropy by introducing maximum radius in both AdS-Rindler patches r UV 1. This translates to az-dependent cutoff in the new coordinates, where in the last step, we used that r UV 1. Second, we need an IR cutoff which we impose in the transverse directions along the entangling surface. Since the solution is independent of shifts in all directions along boundary of H d−1 defect origin in Poincaré coordinares point at infinity in Poincaré coordinates χ Σ Figure 11: A time-slice of our setup. The spatial boundary S d−1 (in global coordinates) is split into two hyperbolic discs H d−1 , shown in pink and aqua, which are glued together at infinity. At the same location, the bifurcate horizon intersects the boundary. The CFT on either disc is dual to a Rindler wedge in the bulk. The defect (green) is a great circle on the global boundary. As indicated in the figure, the Poincaré coordinates introduced in this section cover the full sphere, with the point at infinity appearing on the south pole of the sphere. Entangling surfaces are the semi-circles shown in red. the brane, the transverse directions should just contribute an overall volume factor. We choose η max = IR R 1, which translates tõ (3.50) Island phase As a warm-up exercise, we will show that the entropy on the island phase is in fact invariant under time evolution. This is obviously true, since the RT surface is completely contained within one Rindler wedge and τ is a Killing coordinate for the corresponding metric (3.10). Hence the corresponding time evolution of a single Rindler wedge is an isometry of that wedge. In this case, we are looking for an extremal surface which ends on the boundary at the location defined by eq. (3.48) for eitherx 1 > 0 orx 1 < 0, depending on which Rindler wedge we are interested in. Here, we choosex 1 > 0. We can express the problem in a boosted coordinate systems with boost parameter β = tanh τ Σ . This is depicted in figure 12. 
This boost leaves the cutoffs given in eqs. (3.49) and (3.50) invariant, and changes the equation for the tx 2 Figure 12: The left panel shows two components of the entangling surface (red) at non-zero Rindler time τ Σ in the right CFT in the Poincaré coordinates (3.47). In the island phase, these two rays in the boundary geometry are connected by an RT surface in the bulk. We can perform a boost inx 1 direction to map this set of entangling surfaces to the t = 0 slice, which also corresponds to τ = 0 slice of the hyperbolic boundary geometry. The boost is a symmetry of the defect (green). This is precisely the entangling surface of the same region at τ = τ Σ = 0 with the appropriate cutoffs. We may thus conclude that entropy of the region R remains constant in the island or connected phase, as anticipated. Again because we have a pure state globally, we can see that the entropy of the complementary region, i.e., the two belts centered on the conformal defects in each of the two CFTs, is independent of τ Σ in this connected phase. No-island phase For the no-island phase, we focus on the case in which the RT surface connects entangling surfaces in the CFTs dual to different Rindler patches. The entangling surfaces are located att where we have chosen to focus onx 2 < 0, i.e., to the region on one side of the defect. Similarly to the island phase, we want to go to a new coordinate system in which the calculation becomes simpler. Now, however, we have to distinguish two cases. 17 we can boost this problem inx 2 direction with boost parameter β = − sinh τ Σ sinh χ Σ . This is depicted in the upper panel of figure 13. The new entangling Figure 13: The left panel shows two components of the entangling surface (red) at non-zero Rindler time τ Σ in the right CFT in the Poincaré coordinates (3.47). These two rays are located in different CFTs so that in the no-island phase, they are joing by an RT surface in the bulk which passes through the Rindler horizon. In this case, we can now boost inx 2 direction to map these two rays to τ Σ = 0 when τ Σ < χ Σ or to surfaces are then located at wherex 2 < 0. Expressing the result in Rindler coordinates, we are dealing with the case of an entangling surface in the τ = τ Σ = 0 plane. The new location of the entangling surface χ Σ is given by Note that as cosh τ Σ → cosh χ Σ (and so as | sinh τ Σ sinh χ Σ | → 1), the new entangling surface gets closer and closer to the defect, i.e., χ Σ → 0. Importantly, the cutoffs are not boost invariant in this case. The IR cutoff given in eq. (3.50) remains unchanged, but the UV cutoff in eq. (3.49) changes along the trajectory of the entangling surface to We should caution the reader that we arrived at eq. (3.56) by substituting the trajectory of the entangling surface into the boosted cutoff. This means that eq. (3.56) is only correct for a small cutoff. Luckily, the corrections to the new cutoff only change the entanglement entropy at order O(1/r UV ). Case 2: The other case, τ Σ > χ Σ , is shown in the lower panel of figure 13. Now we can boost in thex 2 direction again, but usingβ = − sinh χ Σ sinh τ Σ . The new entangling surfaces are located at While this does not reduce to a surface lying in the τ = 0 plane, in Rindler coordinates it reduces to an entangling surface for a belt width χ Σ = 0 and Again, the IR cutoff in eq. (3.50) is unchanged, however, the UV cutoff changes to 18 Let us note that the cutoff location still is continuous. 
In the previous case, the new cutoff was the old cutoff multiplied by cosh 2 τ Σ . The latter was reliable as long as τ Σ < χ Σ . However, we see here that once τ Σ > χ Σ , the cutoff is no longer timedependent. The information paradox Now the preceding results can be combined to give a qualitative description of the time evolution of the entanglement entropy. Following the discussion in section 2.2 for two dimensions, at time τ = 0, we have a standard thermofield double state of the two CFTs on hyperbolic spatial geometries, including the conformal defects at χ = 0. If we restrict the observations to either the left or right side, the reduced state is a thermal one and in particular, the bath CFT is in thermal equilibrium with the corresponding conformal defect, with temperature T = 1/(2πR). Using the brane perspective and an appropriate choice of parameters, 19 we can describe the conformal defects are replaced by (two copies of) the boundary CFT coupled to Einstein gravity on an AdS d region. For the configuration described above, this yields a topological black hole solution shown in eq. (3.10). We emphasize that the latter really describes an AdS d geometry in AdS-Rindler coordinates, and hence the thermal equilibrium between this 'black hole' and the finite temperature CFT on the asymptotic boundary can be understood as arising because the two systems are coupled along an accelerated trajectory in the region of dynamical gravity. While the black hole is in equilibrium with the bath CFT, under time evolution, the two systems are constantly exchanging thermal quanta. The immediate effect of this process after τ = 0 is to increase the entanglement between one side of the black hole, i.e., one of the AdS-Rindler wedges on the brane, and its respective bath CFT. A standard measure for the entanglement between both AdS-Rindler wedges and their respective baths is given by the entanglement entropy of the complement of two belt subregions centered around the conformal defects in the boundary as discussed above. In section 3.5, we saw that by a judicious change of coordinates (and cutoff), the calculation of the entanglement entropy of these regions can be mapped at late times (i.e., τ Σ ≥ χ Σ ) to the case of a zero-width belt. Further, in section 3.4, we found that the entanglement entropy grows linearly in time, as shown in eq. (3.44). 20 As in the two-dimensional case [2,99], this linear growth of entropy would lead to an information paradox for our eternal black holes, if it was valid for all times. The reason is that the entanglement entropy must be bound from above by the defect entropy, since the defects need to purify the bath system. In the case of interest, the theory is well approximated by weakly coupled Einstein gravity. This allows us to view the quantum fields on the gravitational background as giving a small correction to the entropy and thus, the defect entropy is well-approximated by two times the black hole entropy. 21 The appearance of an island in the effective gravity theory from the brane perspectice is simply related to a phase transition of the RT surfaces in the bulk description of our system. The RT surface changes from the no-island phase, in which it connects both CFTs through the horizon, to the island phase, in which it connects both sides 20 Implicitly, to apply eq. (3.44), we must also show ∂ τ Σ ∂ τΣ . The latter follows at late times from eq. (3.58), which yields Alternatively, the same result also follows by simply observing that eq. 
(3.58) implies that at late times: . Let us add that this linear growth is analogous to that found for planar black holes in [94]. 21 The black hole entropy is proportional to the horizon area of the black hole, which in our case is infinite. Hence to be precise, we must consider an IR regulated entropy, as discussed with eq. (3.50). of the defect in a single Rindler wedge. The fact that there will always be an extremal surface crossing the brane is easy to see: Before we invoke the extremization condition at the brane, there is an infinite family of candidate RT surfaces, which start in the bath and meet at the brane. To get the correct RT surface, we need only extremize the area by varying the position of the surface where they meet the brane. Subregion duality and the homology constraint guarantee that there will be one extremal surface for every belt configuration (although the boundary of the island might sit at the horizon or at the CFT defect). In order to establish unitarity of the Page curve, we still need to argue that the island appears before the black hole fails to purify the bath region R under consideration. In the case of interest here, we have that B L 1. In this approximation, it follows from eq. (3.24) that In deriving this equation, we have used that the location of the horizon on the brane is at ζ hor ∼ B L 1 and that ζ * cannot scale with B L at leading order. The reason is that ζ * is bounded from above by a function of the belt width. We can see that the location of the new quantum extremal surface will always be close to the horizon -see also the next section for numerical plots. The leading order contribution to the generalized entropy is given by the area of the horizon which gives the black hole entropy. While a more involved analysis is needed to demonstrate that the appearance of the island saves unitarity, this shows that the island mechanism has the right qualitative behaviour to unitarize the Page curve. Numerical results In the previous section, we found a phase transition between the no-island and island phases that has the right qualitative properties to yield a Page curve consistent with unitarity. The calculations involved differential equations which have no known closed form solution. However, the reader might have realized that all of these equations were ordinary differential equations and are thus easily solved numerically. In this section, we will first present numerical solutions to the equations for the RT surface in the island phase, and then use the arguments of the previous section to obtain the Page curve for massless, topological black holes in equilibrium with a bath. General behavior of the islands As discussed previously, by choosing entangling surfaces with the hyperbolic symmetry of H d−2 , the problem of finding the corresponding RT surfaces reduces to a twodimensional problem. Choosing the convenient coordinates in eq. (3.12), we can express the profile of the RT surface as ζ(ξ). We start here by discussing examples of extremal surfaces in the island phase for different choices of parameters. Instead of working with ζ as a radial coordinate, we conformally compactify the geometry and use the coordinate which maps timeslices of AdS to a finite region. In order to calculate the profile of the RT surface, we fix the location of the entangling surface χ Σ at the boundary. Applying the large r limit of eq. (3.12), we relate this to ξ Σ , the location of the entangling surface in ζ, ξ coordinates. We can then use eqs. 
(3.24) and (3.25) to determine ζ * and ζ QES numerically as a function of ξ Σ . The shape of the RT surface is obtained by integrating eq. (3.18) from the boundary. Figure 14 shows a few examples of RT surfaces in the connected phase for d = 3, 4 and 5, i.e., in four, five and six bulk dimensions, respectively. Here, we only show the geometry on one side of the brane. The other side is determined by a reflection across the brane. Since the RT surfaces do not cross the horizon, the configuration is independent of the choice of Rindler time τ . Figure 14a shows RT surfaces with fixed χ Σ for different values of the dimension and selected values of the DGP coupling λ b . We can see that positive DGP coupling pushes the point of intersection between brane and RT surface towards the horizon, i.e., it reduces the area of the island's boundary. Similarly, negative DGP coupling causes the island to become bigger. This behaviour is readily explained through eq. (2.7) which shows that by increasing (decreasing) the value of λ b , the gravitational coupling in the brane theory, i.e., the effective Newton's constant, becomes smaller (bigger). In turn, the coefficient of the Bekenstein-Hawking contribution is bigger (smaller) in the island rule (1.1) and therefore creating an island of fixed size becomes harder (easier). Figure 14b shows how the RT surface in the island phase behaves as we vary the brane angle given by sin θ B = L/ B (or equivalently the brane tension -see eq. (2.4)). Recall that Einstein gravity is a good approximation when θ B is small. As we depart from the limit of small brane angle, the island grows. Finally, figure 14c shows that the size of the island varies with χ Σ , the location of the entangling surface in the bath. Moreover, as we will discuss momentarily, we see that an island phase for the RT surface seems to exist for all values of the belt width, although of course it will generally not dominate at early times. We can get an even better idea of the qualitative features of the islands in higher dimensions by plotting the turning point ζ * and the QES position (ζ QES , ξ QES ) as a function of the brane angle θ B for different dimensions -see figure 15. A general feature is that in the θ B → 0 limit, the QES always approaches the horizon on the brane at ξ = π/2, as discussed around eq. (3.61). In terms of ξ QES and the distance from the horizon on the brane, ρ QES , we have where the first terms on the RHSs give the location of the horizon. Granted ζ * tends towards a finite value as θ B → 0, the above formulas tell us that the QES tends towards the horizon on the brane. Applying eq. (4.2) to eq. (3.25) and noting from eq. we find that ζ * at small θ B is determined by the equation with dζ/dξ given by eq. (3.18). At leading order in θ B , the second term on the LHS can be ignored and the above equation is just the statement that the RT surface should stretch from the belt boundary to approximately the bifurcation surface on the brane. The Page curve in d > 2 As discussed in section 3, the benefit of our model is that calculating the entropy of (the complement of) the belt-shaped subregions centered on the conformal defects reduces to calculating areas in an effectively two-dimensional geometry. Further we produced explicit formulas for the areas of a number of special RT surfaces, which -as shown in section 3.5 -are sufficient to calculate the full time evolution of the RT surfaces and thus of the entanglement entropy. 
Figure 16a shows the Page curves for d-dimensional topological black holes, coupled to a bath on a hyperbolic background, for the cases d = 3, 4, 5. More precisely, we consider the entropy of the region defined by χ Σ = 1, which is given by Here A(Σ) are the regulated areas of the RT surfaces, and the subscript indicates whether we consider the extremal surface in the island or no-island phase. Since eq. (4.6) is a cutoff dependent quantity, it is convenient to subtract off [A(Σ R )]ï sl. ,τ =0 . That is, we subtract off the value of the entropy at τ = 0, at which point the minimal RT surfaces in the no-island phase, to define Even though the UV divergences have been removed, eq. (4.6) would still be infinite, as a result of the infinite extend of the entangling surface. Hence the plots in figure 16 show the change in the entropy density, with respect to the entropy at τ = 0. 22 The kinks in the plots of figure 16 indicate the time at which the island phase of the RT surface begins to dominate. The corresponding time is, of course, the natural analog of the Page time for eternal black holes coupled to a bath at finite temperature. The slope of the (linearly) rising portion of the Page curve has been determined in section 3.4 and is given by Moreover, recall that τ is a dimensionless time such that the temperature of the hyperbolic black hole is 1 2π (cf. the discussion in section 3.1). The dimensionful time t is related to τ by where R is the curvature scale for the spatial sections in the bath CFT, as defined in eq. (3.3), and the bath CFT is taken at temperature T = 1 2πR . The calculation of the RT surfaces is performed as follows: the area in the island phase is computed by substituting eqs. (3.24) and (3.26) into eq. (3.25) and numerically solving for ζ QES . The result is then used together with eq. (3.24) to numerically integrate the area in eq. (3.19). There are three different regimes for the calculation of the are in the no-island phase. At early times, τ Σ ≤ χ Σ , the calculation of the entropy of the subregion with boundaries at ±χ Σ can be translated to the calculation of the entropy of a belt with boundary χ Σ = ±arccosh cosh χ Σ cosh τ Σ in the τ = 0 time-slice, as explained in section 3.5. As also explained in the same section, we need choose a different cutoff on r in this case. However, working in ζ, ξ coordinates, it turns out that the cutoff on ζ does not change. At intermediate times, τ Σ χ Σ , the entropy can be computed by calculating the area of an RT surface for a zero-belt-width entangling surface at a time given in eq. (3.58). Accidentally, the relation between r and ζ works out in such a way that the cutoff of r agrees with the cutoff on ζ in the previous calculation. As τ Σ becomes larger, the numerics become less reliable. However, for moderately sized belt widths we are already well into the regime in which the area of the RT surface grows linearly in time. Therefore, we use a linear fit to extrapolate the last few numeric data points to late times, τ Σ χ Σ . We verified that the resulting slope agrees with the analytic result given in eq. (4.9). In figure 16b, we show how the Page curve and Page time change as we vary the brane angle. As we see, increasing θ B decreases the Page time, or in other words decreases the number of microstates available to the black hole on the brane. This can also be understood from the CFT point of view where the defect entropy is given in terms of an RT surface in the island phase [60,100]. 
As the brane angle approaches zero, the Page time diverges. The reason is that in this limit the area of the island diverges. The absence of islands in this limit was already noted in [33]. The divergence as θ B → 0 goes like θ 2−d B , and in the small-angle approximation we find that (4.11) For example, the numerical coefficient which multiplies θ 2−d B can be estimated from the above formula to be 1.30 for d = 4. A fit to the numerical data plotted in figure 16b agrees with this value. Figure 16c shows the dependence of the Page curve and Page time on the DGP coupling. As we decrease the DGP coupling (i.e., increase G eff ) the Page time goes to zero. The linearity can be easily explained be recalling that in the small θ B regime we are interested in the island sits close to the horizon and thus has a fixed location for varying values of λ b . The Page transition occurs whenever the area of the RT surface in the no-island phase exceeds the area of the RT surface in the island phase. Since the area in the no-island phase approximately grows linearly with time and the area in the island phase depends approximately linearly on λ b , c.f. eq. (4.6), we obtain a linear relationship between the Page time τ P and λ b . Based on this argument, we can estimate the slope of the graph to be which for the parameters in 16c (i.e., θ B = 0.1 and d = 4) evaluates to τ P ∼ 130 λ b and agrees with the fitted value of the slope. The Page curve and Page time only depends very weakly on the belt size. In fact, the only significant effect can be seen at very early times of the evaporation. Figure 17a shows that for wide belts, the entanglement between the belts and baths starts growing convexly (i.e., ∂ 2 ∆s/∂τ 2 Σ > 0), then enters a period of concave growth (i.e., ∂ 2 ∆s/∂τ 2 Σ < 0) before entering the linear regime. Generally, we can separate the time-dependence of the Page curve into four different regimes. At times of the order of the thermal scale β (∼ 0.16 in figure 17a) the entanglement growth increases until it enters a phase of fast growth between τ Σ ∼ O(β) and τ Σ ∼ O(χ Σ ). This fast growth depends on the belt size. At time τ Σ ∼ O(χ Σ ) a universal, linear behavior takes over, which is independent of the belt width. The entanglement keeps growing until at the Page time τ P it saturates and stays constant. In the following we will explain the region of fast growth and its transition into the region of universal linear growth. To understand the behaviour of the Page curve, first consider a few characteristics of our belt geometries. As can be seen from the metric in eq. (3.7), points on any of our entangling surfaces are a fixed distance χ Σ from the surface at χ = 0, where the defect is located, i.e., where the bath is coupled to the black hole. 23 However, the extrinsic curvature of the entangling surfaces which we consider depends on this distance. Similarly, the entangling surfaces with larger χ Σ have a larger regulated volume. In [101], it was proposed that the growth of entropy S[Σ] for an arbitrary entangling surface Σ is bound by where A(Σ) is the area of the entangling surface Σ, as measured by the boundary metric in eq. (3.7). The thermal entropy density s th and the entanglement velocity v ent are region independent constants. 
The entropy density is given by the black hole entropy (i.e., 1 4G bulk times horizon area) divided by the CFT volume of the spatial slices (again, measured by the metric (3.7)): (4.14) In [101] which primarily considers flat space, v ent is defined such that eq. (4.13) is saturated at times just above the thermal scale for sufficiently straight entangling surfaces -this definition is well-defined in the sense that v ent turns out to be independent of the shape of the entangling surface, provided it is sufficiently straight [102,103]. In hyperbolic space, v ent can be similarly defined by demanding that the straight surface χ = 0 saturates eq. (4.13) -we shall justify this choice further below -specifically, obtained by comparison of eq. (4.13) with the zero-width belt result in eq. (3.44). It is clear that (4.13) cannot be tight at late times for belts of finite width. The reason is that the area factor on the right hand side A[χ > 0] is exponentially large compared to A[χ = 0], while, as can be seen from figure 17a, all belts share the same rate of entanglement growth at late times. To more tightly bound the late time behavior of finite width belts, we will therefore need to combine eq. (4.13) with the monotonicity of mutual information. It will turn out that the optimal bound obtained in this way for finite-width belts uses eq. (4.13), but always evaluated for the χ = 0 surface Σ at late times; thus we will find that the χ = 0 surface acts as a bottleneck for entanglement growth even for finite width belts. To see why the surface at χ Σ = 0 acts as a bottleneck, let us formulate the more refined combined bound now, following closely [101]. To this end, it will be less helpful to consider the entanglement entropy of the bath intervals R; instead we will consider their complementR, i.e., belts surrounding the defects, whose entropy is the same as R since the state of both Rindler patches is pure. ConsideringR instead of R is equivalent to looking at the Page curve of the black hole instead of that of the radiation. It is useful to rewrite the entropy displayed in the Page curve as where is the mutual information between the regionsR in the left (L) and right (R) CFT at time τ . Similar to [101], we now assume that information is only transported with the butterfly velocity v but or less. 24 For the hyperbolic geometries considered here and the temperature T = 1 2πR , this velocity is given by [105,106] v This implies that a belt regionR at time τ can be considered a subsystem of the original beltR at τ if Collecting everything, we find a bound on the Page curve of the black hole, To find a tightest bound this has to be minimized over all choices of χ Σ , see below. For any fixed χ Σ it is sufficient to focus on the case where τ < τ , which will always give the smaller bound. The mutual information appearing on the right hand side are evaluated on the initial time slice and can be obtained numerically by using the results of section 3.3. 24 The butterfly velocity is defined as the spread of the region in which the commutator of an operator O 1 (t) with O 2 (t) is bigger than 1 [104]. From eq. (4.20), it is now easy to see why the entanglement growth becomes universal at late times. Note that eq. (4.20) is in fact a family of inequalities, parametrized by a choice of regionsR . The time τ is chosen such thatR at τ is just barely a subsystem ofR at τ , in the sense described below eq. (4.17). 
For times before τ we assume that the mutual information of subregionsR is allowed to decrease as fast as possible, while still compatible with eq. (4.13). Since the regionsR at time τ are subregions ofR at time τ , their mutual information bounds the mutual information of regionsR. We can find a tight bound on the Page curve by minimizing over all choices ofR , or in other words, by minimizing over all χ Σ with τ = τ − χ Σ −χ Σ v but . It turns out that, for sufficiently large τ , the tightest bound is obtained for χ Σ = 0, yielding the prescription stated below eq. (4.15). We thus see from the first term on the right hand side of eq. (4.20) that this surface acts as a bottle neck for information transfer and thus controls the late time growth of entropy. Matching this behaviour to the late time rate of growth of the exact Page curve provides further justification, a posteriori, for the choice of the entanglement velocity stated in eq. (4.15). The bounds found in this way are presented in figure 17b. We see that a fast growth at early times is allowed by the bounds, before the linear growth phase is entered. Further, as can be seen from the figure, these bounds are fairly loose. It would be interesting to understand how to make them tighter. Note that the blue curve in figure 17b behaves qualitatively different than the other curves. The reason is that the early convex onset of the curve is controlled by the thermal scale and thus lasts for roughly ∆τ ∼ O(β), independent of the belt width. The rescaling in figure 17b magnifies the early time behavior of belts with χ Σ < 1 while it reduces the early time behavior of belts of width χ Σ < 1. Thus, while all other curves show the linear entanglement spreading for time scales τ ∼ O(χ Σ ) > O(β), the behavior of the blue curve is dominated by entanglement spreading through thermalization, since the belt width is of order of the thermalization scale. The quadratic 25 growth at times below the thermal scale is reminiscent of the 'pre-local-equilibration growth' described in [102,103]. Let us end with a few observations regarding the structure of entanglement spreading in our system. First, we note that the entanglement velocity (4.15) for Rindler spacetime with hyperbolic spatial slices differs from the analogous velocity √ d(d − 25 Note that time-reflection symmetry demands that the Page curve have an early time expansion containing only even powers of τ . For the zero-width, it is easily verified, at least numerically, from eq. (3.41) that √ 1 − r * ∼ τ so that the growth is indeed quadratic by eq. (3.42). For finite-width belts, plugging eqs. (3.55) and (3.56) into eq. (3.12) shows that early time evolution is equivalent to holding the cutoff at fixed ζ and shifting the ξ of the entangling surface by ∼ τ 2 , again leading to quadratic entanglement growth. [94] dual to AdS planar black holes. Furthermore, for d > 3, the entangling velocity for a CFT on hyperbolic space exceeds the butterfly velocity, eq. (4.17). Typically, whenever v ent > v but , one might worry about contradictions to entanglement monotonicity laws [101,107] which apply above the thermal scale. However, no immediate contradictions appear in the present case, as we now explain. 2) For concreteness, let us interpret eq. (3.44) as describing the entanglement growth in hyperbolic space without defects, specifically, computing the entropy for a region consisting of half-spaces χ > 0 on either side of the TFD. 26 This growth saturates eq. 
(4.13) with v ent > v but in d > 3 so one might worry that entanglement is spreading faster than the speed v but permitted by operator commutator growth [104]. Specifically, by applying an analysis similar to the one reviewed around eq. (4.17) to thermal relative entropies, [101,107] argue that, for regions and times above the thermal scale, entanglement growth must be bounded by the thermal entropy density s th times the volume between the entangling surface and a tsunami wavefront propagating with speed v but away from the entangling surface (in either direction). Said differently, the rate dS/dt of entanglement growth is bounded by s th v but times the area of the tsunami wavefrontthis is essentially eq. (4.13) with v but replacing v ent and the tsunami wavefront replacing the entangling surface. In flat space, the tsunami wavefront can be typically chosen to propagate in a direction away from the entangling surface such that it shrinks or does not grow in time (e.g., propagating inward from a spherical entangling surface). Thus, for the flat space equivalent of eq. (4.13) to be saturated, one must require v ent < v but . In hyperbolic space however, it is possible for the tsunami wavefront to grow in both directions away from the entangling surface. Indeed, this is precisely what happens for the hyperbolic half-space which has an entangling surface χ = 0 of minimal area; within a few thermal times, the tsunami wavefront propagating in either direction grows to an area exponentially large compared to the entangling surface. We thus see that, though the hyperbolic half-space saturates eq. (4.13) with v ent > v but , this does not contradict the bound on entanglement spreading due to the butterfly velocity. Extremal horizon in equilibrium with T = 0 bath Here we turn our attention to extremal black holes. In particular, we consider the same bulk geometry described in section 2, i.e., a backreacting codimension-one brane extending across the spacetime which locally has the geometry of AdS d+1 . However, we 26 To be precise, we should multiply eq. (3.44) by 1 2 · 1 4G bulk with the factor of 1/2 due to our focus on just two copies of the entangling surface χ = 0 -one on either side of the TFD. replace the AdS-Rindler coordinates introduced in eq. (3.6) with Poincaré coordinates, Of course, the coordinate singularity at z → ∞ corresponds to an extremal T = 0 horizon. Figure 18 illustrates the Poincaré patch in our bulk geometry. For the most part, we will be interested in limit of large tension (i.e., B L), for which the brane theory can be described as Einstein gravity coupled to two copies of the boundary CFT. As we describe in a moment, the brane geometry naturally inherits a Poincaré metric from the bulk geometry. Hence the brane supports an extremal black hole which is equilibrium with the T = 0 bath CFT on the asymptotic AdS boundary. We note that with Poincaré coordinates, we are examining the system in a new conformal frame where the bath CFT is living on flat d-dimensional Minkowski space, This brane perspective is illustrated in figure 19a. Of course, we may also have the boundary perspective where the d-dimensional CFT in Minkowski space is coupled to a codimension-one conformal defect. For simplicity, we insert the latter at x 1 = 0 for the metric in eq. (5.2) and so the induced geometry on the defect is also flat, i.e., (d-1)-dimensional Minkowski space. The Penrose diagram for this perspective is shown in figure 19b. 
Note that in contrast to the finite temperature TFD state (entangling two copies of the bath CFT) in section 3, here for the T = 0 scenario, we only have a single copy of the bath CFT, e.g., compare the above to figures 6 and 7. Of course, at T = 0, we are simply studying the vacuum state of the defect CFT in flat space (analogous to what was done in [1] but in a different conformal frame). 27 We may recall from [2] that for the extremal case in d = 2, one always finds islands for the analogous belt regions. This result is a consequence of two features which hold for d = 2: firstly, there always exists a bulk RT surface intersecting the brane to produce an island; secondly, the alternative no-island RT candidate surface has an additional IR divergence 28 and this surface is therefore subdominant. However, neither of these statements hold in d ≥ 3. Indeed, we will find in higher dimensions that quantum extremal islands do not appear in the large tension limit. Nonetheless, no information paradox arises since extremal black holes do not radiate, i.e., the black hole and the bath are not exchanging radiation. This contrasts with the non-extremal case in section 3, where the information paradox for the eternal black hole in the effective d-dimensional gravity theory arises because of the continuous exchange of quanta between the black hole and the bath. Of course, the paradox is avoided by the appearance of quantum extremal islands. The remainder of this section is organized as follows. We shall begin by first explicitly constructing the bulk and brane metrics to be used in the extremal case and by introducing the entanglement entropy calculation which we wish to consider. Then, in 27 Of course, this is a pure state, as is manifest in bulk since the Poincaré time slices constitute complete Cauchy slices. 28 Coming from integrating the length of the surface down to the extremal horizon. subsections 5.1 and 5.2, we carry out this calculation using RT surfaces corresponding to island and no-island phases, respectively. Finally, we collect these results in subsection 5.3 to determine when each phase dominates. The Poincaré coordinates (5.1) cover a wedge of the AdS d+1 vacuum geometry. However, in the present geometry with a backreacting brane, a portion of two such wedges would appear on either side of the brane -see figure 18. If we consider the coordinate transformation z = y sin θ, The induced metric on the brane then becomes and we may then read off the curvature scale of the brane as B = L/ sin θ B , as expected from eq. (2.4). Here, y is interpreted as the radial Poincaré coordinate running along the brane, and the Poincaré horizon on the brane, located at y → ∞, is inherited from the bulk. As usual, we wish to work in the regime L 2 / 2 B 1, or alternatively θ B 1. Following the brane perspective described above (and in section 2), eq. (5.5) is interpreted as an extremal black hole solution of the gravity theory induced brane at θ = θ B and the CFT of the flat asymptotic boundary at z = 0 becomes the zero temperature bath. This then provides a direct extension of the extremal scenario in [2] to d dimensions. The question which interests us here is then whether the entanglement wedge of certain subregions in the bath includes islands residing on the brane. Specifically, we consider the entanglement entropy calculation for a boundary region R that is the complement of a "belt" geometry centered on the defect at x 1 = 0, i.e., the boundary subregion R = (−∞, −b] ∪ [b, ∞). 
According to the RT formula we should consider codimension-two surfaces V sharing the same boundary ∂V = ∂R ≡ Σ CFT . To determine RT surface candidates among these surfaces, we must search for surfaces which extremize their area. As we discussed in the introduction, there are generally two sets of surfaces which achieves this extremization; the RT prescription then instructs us to choose the one with the smallest area. The first class of surfaces are those which intersect the brane, forming a quantum extremal island on the brane which belongs to the entanglement wedge of R -see figure 20. We will say that this RT surface is in the island phase. The second set of surfaces fall trivially into the bulk and do not produce islands on the brane, i.e., these surfaces are in the no-island phase. Island phase As a starting point, let us review the calculation for RT surfaces of belt geometries in pure AdS [16]. That is, we are considering the complement of R, but the RT calculations for this region and for its complement, R = [−b, b], are equivalent. Integrating out the x 2 , . . . , x d−1 directions in which the brane is constant, the area functional of a codimension-2 surface V becomes where vol ⊥ d−2 is the volume of transverse directions {x 2 , . . . , x d−1 }. 29 The RT surface Σ R is obtained by extremizing the area functional (5.6) with respect to the profile z(x 1 ). This functional, viewed as a Lagrangian, contains no explicit dependence on x 1 and hence the corresponding Hamiltonian is a constant along Σ R , allowing us to deduce for some constant z * . Further, the sign ± above is determined by whether we are on the portion of the RT surface heading into the bulk (+) or heading out of the bulk (−) with increasing x 1 -see figures 20 and 21. 30 From eq. (5.7), we see that dz/dx 1 = 0 at z = z * and therefore z * is the maximal z-value attained by Σ R . We can integrate eq. (5.7) to obtain the trajectory of the RT surface: Here ∆x 1 > 0 is the absolute x 1 -separation between a point on the RT trajectory and the initial (final) endpoint on the asymptotic boundary, on the portion of the RT surface heading into (out of) the bulk -see figure 21. If we evaluate this expression at z = z * , we obtain half of the width of the boundary strip (in the x 1 direction) defined by the RT surface. Denoting this width as D, which we emphasize is in the empty AdS vacuum (see figure 20), we have (5.9) 30 As noted previously, if we restrict our attention to positive tension T o , we will have 0 < θ B < π/2. In this case, the RT surface must be increasing in x 1 as one heads away from the boundary (z, x 1 ) = (0, −b), in order for the RT surface to meet the brane. Now returning to the geometry with the backreacting brane, each half of the RT surface Σ R on either side of the brane will follow the trajectory given in eq. (5.8) for pure AdS prior to meeting the brane. We have placed the defect at x 1 = 0 and the RT surface begins on the asymptotic boundary at x 1 = −b. Further, if we were to extend the RT surface past the brane, it would hit the asymptotic boundary again at In terms of eq. (5.8), x 1 along the trajectory is then given by when heading into bulk (towards z = z * ) D − ∆x 1 when heading out of bulk (away from z = z * ) . 
(5.10) In general, as illustrated in figure 20, D 2 = b, rather, the relation between D (or z * ) and b must be determined by demanding that the choice of the intersection σ R of the RT surface with the brane should extremize the RT surface's area (plus the area of the QES, when brane action includes an extra DGP term). As described in [1] and reviewed around eq. (3.21), this extremization leads to a boundary condition restricting the angle at which the RT surface meets the brane. Again, we may reduce this to a two-dimensional problem where we view the RT surface as a geodesic in an effective two-dimensional geometry 11) and the area becomes the length of the geodesic in this geometry. As before, we may use eq. (3.21) to determine the variation of the RT surface area under perturbations of σ R , the QES on the brane. Here, h ij is given by eq. (5.11), the deviation vector X j is chosen to be ∂ y , and the tangent T i determined from eq. (5.7), with both X j and T i normalized with respect to h ij . Hence, upon perturbing the intersection of the RT surface with the brane, the RT area varies as where θ QES is the angle between the RT surface and the brane, y QES is the y coordinate of σ R -see figure 20 -and the ± sign is the same one as introduced in eq. (5.7) and illustrated in figure 21. An extra factor of 2 is included to account for the two components of the RT surface on either side of the brane. From eq. (5.5), we read off the area of σ R : The extremality condition is satisfied if where λ b is defined in eq. (2.7). The relationship between z * and b may then be determined by substituting (x 1 , z) = (z QES cot θ B , z QES ) into eq. (5.10), and using eqs. (5.9), (5.8) and (5.15) where the top (bottom) signs chosen above if the RT surface intersects the brane to the left (right) of the extremal point z = z * . We have noted in the second equality of eq. (5.16) that all terms of the previous expression are linear in z * ; in particular, note in eq. (5.17) that the ratio z QES /z * is determined by eq. (5.15). In figure 22a, we have plotted the position of the intersection σ R between the RT surface and the brane as a function of the brane angle θ B for various λ b and d = 3. In section 5.3, we shall discuss the fact that, for θ B below some critical angle θ c , the extremal surfaces discussed here fail to exist. That is, y QES , the position of the QES on the brane, runs off to infinity as θ B → θ c from above. Having determined the profile of the RT surfaces, we may proceed to evaluate their corresponding entropies using the RT formula (1.2) -keeping in mind that we have not shown that these surfaces minimize the entropy functional yet. Inserting eqs. (5.7) and (5.13) into the generalized entropy functional, we find that the entropy of the belt (b) Critical brane angle as a function of the ratio of G bulk to G brane . Figure 22: Plots of the position of σ R , the intersection of the RT surface with the brane, and the critical brane angle at which this surface runs off to y QES → +∞. geometry R and hence of the complementary bath region R is given by where z = δ defines the UV cutoff surface near the asymptotic AdS boundary, and z QES and z * are linearly related to b by eqs. (5.15) and (5.17). For z QES z * , the hypergeometric function becomes 1 + O[(z QES /z * ) 2(d−1) ], giving where we have used eq. (5.9) to replace z * with D in the first line. Note from eq. 
(5.15) that z QES /z * ∼ [(λ b + 1)θ B ] 1/(d−1) so the correction is indeed smaller than the other terms shown here in high tension limit. Using the brane perspective, let us examine the various contribution to the generalized entropy on the right-hand side of eq. (5.19). Beginning with the leading term of the second line in eq. (5.19), we find that it corresponds to Bekenstein-Hawking of the QES, i.e., 1 4G eff times the area of σ R . It is interesting to note that that there are no higher curvature corrections to the generalized entropy of the QES as might have been expected from the Wald-Dong entropy formula. 31 Turning to the first term in the first line of eq. (5.19), we have the area law divergence associated with the two components of the entangling surface Σ CFT at x 1 = ±b. This leaves us with the second term in the first line. Upon closer examination can be recognized as the finite contribution to the entanglement entropy for a belt of width D, up to an additional factor of 2, e.g., see [16,17]. Further, we note that both contributions on the first line of eq. (5.18) contain a prefactor proportional to L d−1 /G bulk ∼ c T , which measures the number of degrees of freedom in the boundary CFT, e.g., [86]. We can see that these results correspond approximately to the expected entropy from the brane perspective as follows: We begin by considering the contribution from the CFT to one side of the conformal defect, say x 1 < 0. Imagine we begin with a single copy of the CFT in flat space (5.2), and evaluate the entropy of a belt of width D with entangling surfaces at x 1 = −b and x 1 = D − b. For this geometry, the holographic entanglement entropy becomes [16,17] (5.20) where we have separated the area law contributions of the two components of the entangling surface. Now from the brane perspective in our system, the bath CFT reside in flat space for x 1 < 0 but the corresponding copy of the CFT resides onto the AdS d geometry of the brane for x 1 > 0. However the latter can be produced by making a local Weyl transformation in the positive x 1 domain: Note that this is geometry is not the induced metric (5.5) but rather we are considering the standard conformal frame where one strips off the factor of (L/δ) 2 from the bulk 31 One can argue that all of the higher curvature corrections to the Wald-Dong entropy must cancel against one another as follows: In the present case, these terms would arise from integrating out the boundary CFT on the gravitating brane and so should be conformally invariant, e.g., see [108]. However, by a simply Weyl transformation, the brane metric becomes flat and further both the intrinsic and the extrinsic curvatures of σ R vanish. Hence in this flat conformal frame, the higher curvature corrections to the Wald-Dong entropy individually vanish. Hence while these curvatures do not vanish in the original conformal frame, the higher curvature entropy corrections must all cancel against one another. metric. 32 Now the net effect of this Weyl transformation on the entanglement entropy (5.20) is to modify the cutoff appearing in the area law contribution for the surface at z QES , where the latter assumes that θ B 1. Hence the entropy (5.20) becomes Now using eq. (2.7), the term on the second line can be recognized as the contribution of one of the boundary CFTs to the Bekenstein-Hawking entropy of the quantum extremal surface on the brane. 
Hence combining the above contribution (5.22) with that from the other copy of the boundary CFT (which extends to the bath for x 1 > 0) and the DGP contribution to the Bekenstein-Hawking entropy, we precisely recover the leading contributions in eq. (5.19). Hence this simple CFT argument allows us to match the leading contributions in the holographic result with the expected entanglement entropy. No-island phase Above, we studied the set of candidate RT surfaces which intersect the brane. In fact (for θ B < π/2), there exists another set of simple extremal surfaces which must also be considered under the RT prescription (1.2). These surfaces are constant x 1 planes anchored on the entangling surface Σ CFT on the asymptotic boundary and fall straight into the bulk. By reflection symmetry about x 1 = ±b, these planes trivially extremize the area functional, which becomes A factor of 2 has been included above to account for the two planes at x 1 = ±b. 33 Unlike the surfaces considered in section 5.1, these planes do not intersect the brane and thus no islands are formed on the brane. The entropy in this no-island phase is easily obtained from evaluating the area functional (5.23), which then yields where δ is again the UV cutoff in the boundary CFT. 32 Further, we are only performing the Weyl transformation (5.21) for x 1 > δ/ tan θ B , which corresponds to the intersection of the brane with the UV cutoff surface z = δ. 33 Further, let us note that for the special case d = 2, the integral produces an IR divergence at z → ∞. However, there is no such IR divergence for d ≥ 3. Islands at T = 0 for d > 2 Altogether, we have two candidate RT surfaces: the extremal surfaces described in section 5.1 which intersect the brane to form a quantum extremal island, and the extremal planes described in section 5.2 corresponding to the no-island phase. To determine which is the correct RT surface, we must first study the parameter space for which each surface exists. When both types surfaces exist simultaneously, the correct RT surface is given by the one which has the smallest generalized entropy, as in eq. (1.2). Below, we first observe that on a brane at angle θ B < π/2, an island can only possibly exist when −1 < λ b < 1; more specifically, for this range of the DGP parameter λ b , there is a critical angle θ c < π/2 which gives the minimum θ B that supports the island phase -recall that this critical angle was plotted in figure 22b. For θ B > θ c , the island phase exists and is dominant. At θ B = θ c the entropies computed by the island and noisland RT surfaces equalize, leading to a transition to the no-island phase below θ c . As we shall find that θ c scales as (1 + λ b ) 1 d−2 at its smallest, this precludes the possibility of islands in the regime where the brane is well-described by QFT on semiclassical gravity -see footnote 5. This differs from the d = 2 case, where the island phase always exists; furthermore, while the no-island RT surface in d > 3 has an IR-finite area, the analogous surfaces in d = 2 produce an IR divergence and thus are never dominant. Let us begin our analysis by constraining the parameter space in which each type of RT candidate surface exists. It is easy to see that the extremal planes of the no-island phase exist if and only if θ B ≤ π/2. 34 It is slightly more involved to determine when the extremal surfaces in the island phase exists. For a start, the first equality of eq. 
(5.15) indicates that for θ B < π/2, sensible extremal surfaces intersecting the brane can only possibly exist when −1 < λ b < 1. 35 From figure 22a, we see that this is the range of λ b for which there exists some θ B < π/2 such that the DGP gradient has not overpowered the bulk term of eq. (5.14) to push the QES to the asymptotic boundary y = 0 or to the horizon y = ∞. 34 Of course, this was our regime of interest, as this was the regime where a (nearly) massless graviton is induced on the brane. 35 Specifically, this can be seen as follows: Let us take the extreme case of λ b = 1 (λ b = −1). Then eq. (5.15) indicates that θ QES = θ B − π/2 (θ QES = θ B + π/2). For λ b = 1, this implies that when θ B > π/2, the RT surface falls straight into the bulk until it hits the brane, i.e., z * = ∞ -see figure 20. Now as θ B → π/2 from above, the QES runs off towards the horizon and consequently no QES exists for θ B < π/2. For λ b = −1, one can argue that for θ B < π/2, the QES is stuck to the defect, i.e., z QES = ∞. As increasing (decreasing) λ b beyond 1 (−1) means the DGP entropy contribution exerts a greater force pushing the QES towards the horizon (the defect), it follows that no QES exists for θ B < π/2 when λ b > 1 (λ b < −1). In these parameter ranges, the naive 'solutions' obtained from eq. (5.15) are unphysical, i.e., have the RT surface anchored in the unphysical region behind the brane. To be more precise, we must consider properties of the F function introduced in eq. (5.17). For −1 < λ b < 1, some (numerically deduced) facts about F(d, λ b , θ B ) are that it is decreasing in λ b and increasing in θ B . Moreover, Since the former diverges negatively while the latter diverges positively, it follows that there exists a critical angle θ c for which F(d, λ b , θ c ) = 0. For −1 < λ b < 1, we have 0 < θ c < π/2 with θ c → 0, π/2 as λ b → −1, 1, respectively. 36 The physical significance of θ c can be seen from the second equality of eq. (5.16): for θ B above θ c , there exist extremal surfaces which intersect the brane; as θ B → θ c from above, z * , z QES , y QES run off to +∞ as ∼ (θ B − θ c ) −1 ; finally, for θ B < θ c , no extremal surfaces exist which intersect the brane. In Figure 22b, we plot the critical angle θ c as a function of λ b for various d. Before continuing, let us briefly note a number of peculiarities which arise when |λ b | > 1. First, for λ b > 1, there exists a range of θ B π/2 for which no extremal surfaces of any kind exists, i.e., the RT prescription fails completely. This may indicate that there is no CFT plus defect theory which can be dual to a bulk with this range of parameters -of course, the brane has a negative tension in this regime and so there is no effective gravitational theory on the brane. Second, recall that as λ b → −1 from above, the coefficient of the Einstein-Hilbert action vanishes, leading to a breakdown of semiclassical Einstein gravity, as mentioned in footnote 5. Further taking λ b < −1 then corresponds leads to an unphysical ghost-like gravity action in the brane theory. At any rate, from here on, we shall restrict our focus to −1 < λ b < 1. Now we have two competing possible RT surfaces: for θ B ≤ π/2, extremal planes anchored on the entangling surfaces to either side of the brane, which correspond to a no-island phase; and, for θ B > θ c , extremal surfaces which intersect the brane, corresponding to an island phase. 
As both types of surfaces exist for θ c < θ B < π/2, the RT formula instructs us to choose the surface with the smallest area in this parameter space. Thus, we consider the area difference: where we have used eqs. (5.17), (5.18), (5.24), and the hypergeometric function iden- 36 In particular then, no islands form with λ b > 1 in the regime of interest with θ B ≤ π/2. tity 37 (5.28) From eq. (5.27), we see that whenever the island-and no-island-type surfaces coexist, the island-type surface always gives a lower area and is thus the surface picked out by the RT formula. Moreover, we see that entropy transitions continuously between the island and no-island phases at the critical angle θ c where F(d, λ b , θ c ) = 0. Altogether, we find that, for θ B < θ c , we are in the no-island phase where the RT surface is given by planes falling straight into the bulk, and, for θ B > θ c , we transition to an island phase where the RT surface is given by extremal surfaces which intersect the brane and form an island. To gain intuition for the critical angle θ c from the brane perspective, we note from eq. (5.19) that eq. (5.27) can may be approximated as in the small θ B limit. Building upon the discussion given below eq. (5.19), we interpret the RHS as giving a change in generalized entropy due to the introduction of the island in the effective theory of the asymptotic boundary and brane. Namely, comparing with the island rule (1.1), the first term on the RHS of eq. (5.29) gives the change in S QFT due to the introduction of the island, and the second term gives Bekenstein-Hawking entropy of the QES. Hence, for θ B > θ c , the island phase is favoured as the introduction of the island reduces generalized entropy. For θ B < θ c , the QES ceases to exist and only the no-island phase is possible. We briefly comment that, unlike for the CFT region considered in [1], the addition of topological terms to the bulk gravity theory does not change the favourability between the island and no-island phases of the belt geometry. This is because such a modification can only effect a topological contribution to the Wald-Dong entropy formula and, for the belt geometry, the RT surfaces in both phases have vanishing Euler characteristic. Namely, the RT surface of the island phase has the topology of an infinite strip while the RT surface of the no-island phase consists of two half-planes. Thus, the topological contribution would not favour one phase over the other. In closing, we note that, unlike the d = 2 case [2], we have found that in the small θ B limit, where an effective theory of gravity plus quantum matter emerges on the brane, islands typically do not exist for extremal black holes in d ≥ 3. To be more precise, eq. (5.25) and figure 22b suggest that θ d−2 c ∼ 1 + λ b . It is still possible to stay in the island phase by tuning 1 + λ b to scale as ∼ θ d−2 B . However, from eq. (2.7), we see that this limit λ b → −1 + corresponds to G eff → +∞, leading to a breakdown of the semiclassical description of the effective brane theory [1] (as mentioned in footnote 5.). We remark that, unlike for non-extremal black holes to be discussed in section 3, there is no immediate information paradox that arises as a result of the lack of islands in the extremal case here. Two dimensions revisited In this section, we specialize to the case of d = 2 which, as mentioned in the main text, requires a slightly different treatment. We begin with a discussion of the induced action on the brane, supplemented with JT gravity. 
Next, we review the bulk AdS_3 and brane AdS_2 geometries. Finally, we study extremal surfaces serving as candidate RT surfaces to determine the entropy in the two phases, with and without an island, leading to the Page curve. At leading order in an expansion in terms of small brane angles, i.e., θ_B → 0, our results precisely agree with those of [2]. However, we can also retain the subleading terms, which produce corrections due to the finite UV cutoff on the brane. Brane action We begin by briefly reviewing the modifications for the induced brane action in two dimensions - a more complete discussion can be found in [1]. Let us start in the absence of JT gravity, considering only the brane action I_induced induced by the bulk Einstein-Hilbert action (with cosmological constant) given in eq. (2.1), its corresponding Gibbons-Hawking action on the brane, and the brane tension term. As we saw in section 2, the induced action evaluated for higher dimensions contains coefficients with factors of (d − 2) (see eq. (2.6)), which prevent a naive substitution d → 2. Instead, redoing the calculation specifically in two dimensions, the induced brane action is found to be eq. (6.2), where the two effective scales are defined such that, while the first equality follows the same definition as in higher dimensions, the second one must be redefined for d = 2 (cf. eq. (2.7)). The unusual logarithmic term above arises from the nonlocal Polyakov action [110], which appears from integrating out the two-dimensional CFT on the brane - see the discussion in [1]. In the absence of any DGP terms on the brane, extremization of I_induced leads to an AdS_2 brane with radius of curvature ℓ_B related to ℓ_eff in the same way as in higher dimensions (i.e., through eqs. (2.4) and (2.7)). Thus, as in the higher dimensional case, the large tension limit leads to ℓ_B ≫ L and a small brane angle θ_B in eq. (2.4). In this limit, the brane moves towards the would-be AdS_3 boundary at θ = 0, giving rise to a logarithmic UV divergence in eq. (6.2) as L/ℓ_B → 0. Throughout the main text, we considered supplementing the brane action with a DGP term - compare eqs. (2.2) and (6.1). In two dimensions, an Einstein-Hilbert action is topological, and so it is common to instead consider Jackiw-Teitelboim (JT) gravity [111,112] in the brane theory (see recent discussions of quantum extremal islands in d = 2, e.g., [2,13,15,113]). Following [1], we then choose the brane action as in eq. (6.5), with the JT action taking the usual form (again, as in section 2, we are omitting boundary terms). The Einstein-Hilbert term, though topological, still contributes to the generalized entropy with weight Φ_0. With the addition of JT gravity on the brane in eq. (6.5), we arrive at the induced action on the brane given in eq. (6.7), where we have redefined the topological part of the dilaton upon collecting the coefficients multiplying the Einstein-Hilbert terms. Note that we have discarded the usual tension coefficient T_o in eq. (6.5) and instead chosen the tension such that no cosmological constant appears in the first line of eq. (6.7), for simplicity. In eq. (6.7), it is clear that varying Φ yields an equation of motion simply setting the radius of curvature on the brane to ℓ_B = ℓ_JT. The limit of small brane angle θ_B, related to ℓ_B still through the first equality of eq. (2.4), is therefore obtained by taking ℓ_JT ≫ L. Note that this leads to a logarithmic UV divergence in eq. (6.7) similar to the non-JT case, as mentioned below eq. (6.4).
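The text refers to 'the usual form' of the JT action without reproducing it here; for reference, a standard form is the following sketch (conventions assumed, and boundary terms omitted as stated above):

```latex
% Standard JT gravity action on the AdS_2 brane (conventions assumed; boundary terms omitted)
I_{\rm JT} = \frac{1}{16\pi G_{\rm brane}}\int d^2x\,\sqrt{-\tilde g}
\left[\,\Phi_0\,\tilde R \;+\; \Phi\left(\tilde R + \frac{2}{\ell_{\rm JT}^2}\right)\right]
```

In this form, varying Φ enforces R̃ = −2/ℓ²_JT, i.e., a locally AdS_2 brane with radius of curvature ℓ_B = ℓ_JT, consistent with the statement above, while the topological Φ_0 term contributes a constant weight to the generalized entropy.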
Similarly, the source-free equations of motion for the dilaton can then be obtained by varying the metric and further shifting the dilaton, as discussed in [1]. The above reviews our discussion of the induced action in [1]. However, we would like to compare our results for the quantum extremal surfaces and the Page curve to those derived in [2]. To facilitate this comparison, we make the field redefinitions in eq. (6.10), giving the bare and renormalized values of the dilaton - we shall clarify the meaning of this renormalization shortly. In terms of the latter, the induced action (6.7) now reads as in eq. (6.11). Here, the first line of eq. (6.11) may be interpreted as the renormalized effective action produced by integrating out the brane CFT, and the second line contains the renormalized JT action, which can be compared to eq. (2) in [2]. Here, 'renormalized' means that we have absorbed the logarithmic UV divergence that would otherwise appear in the induced action 38 as L/ℓ_B → 0 into the JT action, which was achieved by the renormalization of φ_0 → φ̄_0 in eq. (6.10). As before, the dilaton φ̄ acts as a Lagrange multiplier which fixes the brane geometry to be locally AdS_2 with radius of curvature ℓ_B = ℓ_JT. The equation of motion for the induced metric g̃_ij, on the other hand, yields the dilaton equation of motion (6.12). In the final expression, we evaluated the renormalized CFT stress tensor ⟨T^CFT_ij⟩ using the function f defined in eq. (6.4). 39 The standard discussions of JT gravity (e.g., [13,114]) refer to the source-free dilaton equation, i.e., the RHS vanishes, but this is easily accommodated by a further shift, eq. (6.13). 40 Bulk and brane geometries Let us now review the geometry for our current setup. Due to the simplicity of AdS_3, we will find it convenient to describe RT surfaces using global coordinates, even though we will be considering Rindler time evolution, as in the main text. In global coordinates, we may write the bulk AdS_3 metric as ds² = (L²/cos²r̃)(−dτ̃² + dr̃² + sin²r̃ dϕ²) (6.14), where τ̃ ∈ R, r̃ ∈ [0, π/2] and ϕ ∈ [−π, π]. In the AdS-Rindler coordinates, the AdS_3 geometry becomes eq. (6.15). 38 Recall that we also removed the power law divergence corresponding to the induced cosmological constant term by introducing a counterterm in eq. (6.5). 39 As noted in [1], f(L²/ℓ²_JT) = L²/ℓ²_JT + O(L⁴/ℓ⁴_JT) and hence this expression yields the expected trace anomaly ⟨T^CFT⟩^i_i = 2 × (c/24π) R to leading order in L/ℓ_JT. But the latter also receives additional corrections due to the finite UV cutoff on the brane - see eq. (2.45) in [1]. Recall that the central charge of the boundary CFT is given by c = 3L/(2G_bulk) and the extra factor of two in the trace anomaly arises because the brane supports two copies of this CFT. 40 Note that implementing this shift in the action (6.11) introduces a new cosmological constant term. Hence an alternative approach would be to introduce a general brane tension T_o in eq. (6.5) and then tune the latter to absorb both the corresponding (power law) UV divergence in the induced action and the RHS of the dilaton equation (6.12). Eq. (6.15) is just the special case of eq. (3.6) for d = 2. Here, τ, χ ∈ (−∞, ∞) and one exterior region is given by r > 1. As described in section 2, the AdS-Rindler coordinates are useful for the description of vacuum AdS as a topological black hole, such that the boundary CFT is in a thermofield double state.
The inverse temperature with respect to the time τ is 2π, giving the periodicity of iτ necessary for a smooth Euclidean continuation - we shall also define a dimensionful time and temperature shortly. Indeed, these coordinates describe a horizon at r = 1. Note that in d = 2, the boundary geometry is flat, i.e., it is simply two copies of R². The AdS-Rindler coordinates (τ, r, χ) are related to the global coordinates (τ̃, r̃, ϕ) in eq. (6.14) by tanh τ = sin τ̃ / (cos ϕ sin r̃), tanh χ = (sin ϕ sin r̃) / cos τ̃, r² = (cos²τ̃ − sin²ϕ sin²r̃) / cos²r̃. (6.16) As described above in section 6.1, extremizing the brane action in eq. (6.7) with respect to Φ (or eq. (6.11) with respect to φ̄) fixes the intrinsic brane geometry to be AdS_2 with radius of curvature ℓ_B = ℓ_JT. This becomes the θ = θ_B slice of the AdS_3 metric written as in eq. (2.3), where θ_B is determined as in eq. (3.8). We write the induced metric on the brane as in eq. (6.18). The first line element, with coordinates (τ, ρ), is simply the special case of the AdS-Rindler coordinates given in eq. (3.10) with d = 2. The light-cone coordinates (y⁺, y⁻) in the second line element are those used by [2], whose results we wish to compare against. The relationship between (τ, ρ) and (y⁺, y⁻) is given in eq. (6.19). Given that the TFD has temperature 1/(2π) with respect to the dimensionless time τ, we have introduced the dimensionful time t = (y⁺ + y⁻)/2, where the temperature becomes T = 1/β. 41 On the brane, eq. (6.12) is easily solved for the dilaton profile, eq. (6.20), in terms of ρ or y±, where φ_r is a constant introduced in [2] (see eq. (18) and the discussion below (2) there). In the AdS-Rindler metric given in eq. (6.15), we introduce a surface of large constant r = r_UV, which will serve as the UV cutoff surface. Then, following [2], we take the induced metric on this surface, eq. (6.21), as the background metric for the bath CFT, with the conformal defect at χ = 0. Now the light-cone coordinates y± can be extended to describe the geometry of the AdS_3 bulk, and in particular the bath region on the asymptotic boundary near θ = π as well as the brane geometry given in eq. (6.18) at θ = θ_B, by taking an AdS_3 metric in the form of eq. (2.3). Indeed, on the asymptotic boundary, with metric given in eq. (6.21), y± are related to (τ, χ) as in eq. (6.22). 42 As in higher dimensions, we are interested in computing the entanglement entropy of a boundary region R comprised of all of the points with |χ| ≥ χ_Σ in the two baths (associated with the two copies of the CFT entangled in the TFD state). That is, this region is the complement of two intervals ('belts') centered on the conformal defects in the two boundaries (which correspond to the intersection of the brane with the asymptotic boundary - see figure 23). Focusing on a single Rindler wedge and on one side of the brane, the entangling surface is located at a fixed χ = −χ_Σ < 0 for all Rindler times τ. Similar assignments apply for the patches covering the other portions of the boundary. Finally, we note that going to the asymptotic boundary (with r̃ → π/2 and r → ∞), eq. (6.16) yields the relation of the global and Rindler coordinates on the boundary: tan ϕ = sinh χ / cosh τ, tan τ̃ = sinh τ / cosh χ, (6.24) which allows us to simplify some calculations below. It will be useful to denote the (time-dependent) global coordinate angle of the entangling surface at χ = −χ_Σ as ϕ_Σ. 42 We should note that the geometry in [2] can be seen as a Z_2 orbifold of our setup (see section 2.2). Hence they would only consider χ < 0 of the flat boundary geometry in eq. (6.21).
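As a quick consistency check of eq. (6.16), the map can be implemented numerically; the sketch below (assuming the reconstruction of the flattened fractions above) verifies that a point near the asymptotic boundary satisfies the boundary relation (6.24):

```python
import numpy as np

def rindler_from_global(tau_g, r_g, phi):
    """AdS-Rindler (tau, chi, r) from global (tau~, r~, phi), eq. (6.16)."""
    tau = np.arctanh(np.sin(tau_g) / (np.cos(phi) * np.sin(r_g)))
    chi = np.arctanh(np.sin(phi) * np.sin(r_g) / np.cos(tau_g))
    r2 = (np.cos(tau_g)**2 - np.sin(phi)**2 * np.sin(r_g)**2) / np.cos(r_g)**2
    return tau, chi, np.sqrt(r2)

# Near the asymptotic boundary (r~ -> pi/2), eq. (6.24) should hold:
tau_g, phi = 0.3, 0.4
tau, chi, r = rindler_from_global(tau_g, np.pi/2 - 1e-6, phi)
print(np.tan(phi), np.sinh(chi) / np.cosh(tau))    # should agree
print(np.tan(tau_g), np.sinh(tau) / np.cosh(chi))  # should agree
```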
Therefore, the extension of the null coordinates that we are discussing here has to be considered separately for each side of the conformal defect. As a technical point, let us add that in [2], the sign of the spatial coordinate on the brane is reversed, so that y⁺ − y⁻ > 0 describes the asymptotic boundary while y⁺ − y⁻ < 0 describes the brane. Here, y⁺ − y⁻ is always positive and θ = π, θ_B correspond respectively to the bath and brane. Entropies: Island and no-island phases Now we turn to the problem of computing entropies using the RT formula in the background of the hyperbolic AdS_3 black hole coupled to the AdS_2 brane with JT gravity. Specifically, we wish to compute the entropy of the region R complementary to the belts centered on the defects, as described at the end of subsection 6.2. In the island and no-island phases, the RT formula equates the entropy to the first and second of the expressions in eq. (6.25), respectively. The RT variational problem instructs us to consider extremal codimension-two surfaces Σ_R in the bulk, which in AdS_3 are simply geodesics. Although we are primarily concerned with evolution in Rindler time, the boundaries of the entangling surface are simply four points; these can always be simultaneously placed on a surface of constant global time. This property, not present in higher dimensions, allows us to simplify the analysis by using global coordinates, 43 as seen below. Now, just as in higher dimensions, the minimization procedure yields two competing phases. At early times, the minimal surfaces cross the Rindler horizon avoiding the brane, and the entropy is given purely by the bulk length of the RT surface, as in the second of the expressions (6.25). This length stretches with Rindler time and leads to a growing entropy. At late times, the RT surfaces go across the brane instead, leading to an island where the contribution of the dilaton becomes important, as shown in the first of the expressions (6.25). As in the rest of the paper, we restrict to the regime of small brane angle θ_B. We begin by considering geodesics and their lengths in global coordinates. As is well known, a convenient way to parametrize the RT surfaces on a constant global time slice τ̃ is by using two anchoring points ϕ_1, ϕ_2, where the geodesics are given by eq. (6.26), such that the curves hit the boundary r̃ → π/2 at ϕ_1 and ϕ_2. The area (length in d = 2) of an RT surface with this trajectory is given by eq. (6.28), where ∆ϕ and ℓ_{1,2}, defined in eq. (6.29), are respectively the opening angle of the RT surface and the UV cutoffs (in the global radial coordinate) at which the area integral is terminated - see figure 23. The leading order term in eq. (6.28) corresponds to the standard entanglement entropy formula of an interval on the circle [17,115] (but allowing now for two different UV cutoffs). We have also included the next-to-leading order terms, as these will be important for computing corrections to entropy formulas on the brane. Now, as usual, one must appropriately regularize the areas of the RT surfaces. As explained above, we place the cutoff surface at a large holographic radius r = r_UV in the Rindler radial coordinate. In terms of global coordinates, this describes the surface sin²(τ̃) = (sin r̃ cos ϕ)² − (r²_UV − 1) cos²(r̃). (6.30) Expanding at large r_UV, one finds that the UV cutoff is associated with a length ℓ in eq. (6.29) given by eq. (6.31), where we have used eq. (6.24).
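The leading order term referenced above is the standard result; as a sketch (using the textbook interval-on-a-circle formula with two independent cutoffs, and dropping the next-to-leading corrections, which require the full eq. (6.28)):

```python
import numpy as np

def interval_entropy(c, dphi, eps1, eps2):
    """Leading-order entanglement entropy of a boundary interval of opening
    angle dphi in global AdS3, with independent UV cutoffs eps1, eps2 at the
    two endpoints: S = (c/6) * log( 4 sin^2(dphi/2) / (eps1 * eps2) ).
    Textbook formula; next-to-leading terms of eq. (6.28) are omitted."""
    return (c / 6.0) * np.log(4.0 * np.sin(dphi / 2.0)**2 / (eps1 * eps2))

# With equal cutoffs this reduces to the familiar (c/3) log(2 sin(dphi/2)/eps):
c, eps = 12.0, 1e-3
print(interval_entropy(c, np.pi/2, eps, eps))
print((c / 3.0) * np.log(2.0 * np.sin(np.pi/4) / eps))
```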
Here and below, we shall use ℓ_1 to denote the cutoff at the end-point of the RT surface at the asymptotic boundary; ℓ_2, on the other hand, will either be a cutoff at the asymptotic boundary or due to the brane, depending on whether we are in the no-island or island phase. Note that although the entropies diverge with the regulator r_UV, these contributions will cancel once we consider the difference between the island and no-island phases, as seen below. Equipped with this, we can now compute the generalized entropy in the two phases and reproduce the Page curve found in [2]. Figure 23: A slice of constant global time in AdS_3, showing the two phases of the generalized entropy. The two cutoffs ℓ_{1,2} involved in the computation are associated to the UV cutoff at the asymptotic boundary and the brane, respectively. The global coordinate angles ϕ_1, ϕ_2 relate to the RT surface opening angle, while ϕ_QES is the angle at which the RT surface intersects the brane and corresponds to the boundary of the island. Recall that the geometry is cut at the brane and continued by gluing it to another copy. No-island phase. We begin with the no-island phase. Here, once again due to the simplicity of AdS_3, the minimal surfaces lie on constant global time slices. The RT surface consists of two pieces, one connected piece on either side of the brane, with trajectory given by eq. (6.26), where ϕ_1 = −ϕ_Σ and ϕ_2 = −π + ϕ_Σ (recall the definition of ϕ_Σ below eq. (6.24)). The total RT length is given by double eq. (6.28) (due to the two pieces), with both cutoffs ℓ_1, ℓ_2 given by eq. (6.31). Substituting this into eq. (6.28) with ∆ϕ = π − 2ϕ_Σ and using eq. (6.24), the associated entanglement entropy in the no-island phase is given by eq. (6.32), where we used the Brown-Henneaux central charge, eq. (6.33). In the second line of eq. (6.32), we have expressed the answer in terms of the dimensionful time t, as in eq. (6.19) (see also below eq. (3.6)), and the short-distance cutoff in the boundary CFT, δ = β/(2π r_UV) (6.34), in the y± coordinates on the boundary. 44 Eq. (6.32) matches the entropy from eq. (29) of [2], accounting for the fact that here the central charge is doubled, since we include the regions on both sides of the brane. 45 For times much larger than the thermal scale, eq. (6.32) reduces to eq. (6.38), which corresponds to the linear growth predicted by Hawking. Island phase. Let us next consider the island phase. As explained in section 2, since translations in Rindler time are an isometry, we can use this symmetry to bring the problem to the τ̃ = 0 = τ slice. Notice that this is also a symmetry of the dilaton profile, as is clear from eq. (6.20). We will leave point 1 anchored on the cutoff surface near the asymptotic boundary at global coordinate ϕ_1 = −ϕ_Σ, as in the no-island phase. But the RT surface will now intersect the brane at its other endpoint. Here it is important to distinguish between two different angles appearing in the island calculation - see figure 23. 44 To be precise, eq. (6.32) computes the entropy of R in a CFT with metric −dy⁺dy⁻ and short distance cutoff δ - here, δ is both the proper distance cutoff and the cutoff in y±. We may equivalently take the CFT metric to be the induced metric −(L/δ)² dy⁺dy⁻ in eq. (6.22), with coordinate cutoff δ in y±, corresponding to a proper distance cutoff L as measured by the induced metric. 45 There is a typo in eq. (29) of [2]: inside the logarithm, it should be β/π rather than π/β. The UV cutoff δ is also hidden. The full answer is obtained by applying the conformal transformation of eq. (6.35) (mapping the vacuum to a TFD) to the entropy formula, where y±_R = t ± b and y±_L = t ∓ b are the entangling surfaces on the R and L sides respectively. We have used the notation S[ds², δ] to denote the entropy in a CFT living in ds² with proper distance cutoff δ as measured by ds². Eq. (6.37) gives the length of the piece of the RT surface to one side of the brane; eq. (6.32) is then exactly double eq. (6.37).
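To see the expected behaviour, here is a small numerical sketch of the no-island entropy growth. The cosh form below is taken from eq. (29) of [2] with the corrections noted in footnote 45 (β/π inside the logarithm, explicit cutoff δ) and with the central charge doubled as stated; the overall additive constants are schematic:

```python
import numpy as np

def S_no_island(t, c, beta, delta):
    """No-island entropy, sketch of eq. (6.32): doubled central charge,
    cosh growth in the dimensionful time t (additive constants schematic)."""
    return (2.0 * c / 3.0) * np.log((beta / (np.pi * delta)) * np.cosh(2.0 * np.pi * t / beta))

c, beta, delta = 12.0, 1.0, 1e-3
for t in (0.0, 1.0, 5.0, 10.0):
    print(t, S_no_island(t, c, beta, delta))
# At late times t >> beta, the growth is linear: dS/dt -> 4*pi*c/(3*beta).
```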
First, ϕ_2 (together with ϕ_1) characterizes the trajectory of the RT surface, as in eq. (6.26), such that the trajectory, when maximally extended (even behind the brane), reaches the asymptotic boundary at ϕ_1 and ϕ_2. The opening angle ∆ϕ appearing in eq. (6.28) is defined in terms of ϕ_1 and ϕ_2 as per eq. (6.29). Second, there is the global angular coordinate ϕ = ϕ_QES of the QES, where the RT surface intersects the brane. In the limit of vanishing brane angle θ_B → 0, ϕ_QES → ϕ_2 but, at finite θ_B, ϕ_QES ≠ ϕ_2. While ℓ_1 is still given by eq. (6.31), the regulator ℓ_2 is now provided by the brane position, which we use below perturbatively in the regime θ_B ≪ 1. From eq. (6.28), the area of the RT surface (including the pieces to either side of the brane and to either side of the horizon) is given in terms of ϕ_Σ and ϕ_QES. We can also write this in terms of the y± coordinates of [2], reviewed around eqs. (6.18) and (6.22) (see also footnote 42). Placing the belt boundary at θ = π, (y⁺ − y⁻)/2 = b, and the QES at θ = θ_B, (y⁺ − y⁻)/2 = a (matching the a and b of [2]), we find eq. (6.43), where, in the second line, we have written the answer in terms of the CFT central charge c and the cutoff δ (in y±) in the bath, given in eqs. (6.33) and (6.34); we have also used the proper distance UV cutoff δ̄ = L on the brane (hinted at earlier below eq. (6.12)) with the induced metric given in eq. (6.18) - see the discussion in [1]. Using eq. (6.17) (as well as ℓ_B = ℓ_JT), we can rewrite this in terms of the brane angle θ_B. Eq. (6.43) is to be interpreted as the von Neumann entropy of the effective CFT spanning the asymptotic boundary and the brane. The first term of eq. (6.43) precisely recovers the expected CFT result, 46 while the higher orders in δ̄/ℓ_JT may be interpreted as corrections due to the finite UV cutoff on the brane. Curiously, the leading order correction in eq. (6.43) vanishes for the case of a zero-width belt b = 0, i.e., when R completely contains the baths. We may add eq. (6.43) to the bare dilaton profile φ, given by eqs. (6.10) and (6.20), evaluated at the QES, to obtain the generalized entropy, eq. (6.47), where we have included dilaton contributions from the QES points on both the left and the right of the TFD. Recall that φ̄_0 conveniently absorbs the part of eq. (6.43) which becomes logarithmically divergent on the brane as we take the UV limit δ̄/ℓ_JT → 0 - see eqs. (6.10) and (6.13). 46 To see this, we may apply the transformation between w and y_R written in eq. (6.35) to obtain eq. (6.46), where we have used the notation S[•, •] introduced in footnote 45, and y±_Σ = t ± b and y±_QES = t ∓ a correspond to the entangling surface and the QES respectively. (In this footnote, we have swapped the sign of y⁺ − y⁻ on the AdS_2 brane relative to the main text, so that here y⁺ − y⁻ > 0 and y⁺ − y⁻ < 0 correspond respectively to the bath and brane.) Then, the first term of eq. (6.43) is precisely four times eq. (6.46). This is unsurprising given that the renormalized entropy is
derivable from the renormalized matter effective action, and that the renormalization of φ_0 → φ̄_0 is precisely designed to eliminate the UV divergence of the matter effective action on the brane. The first line of eq. (6.47) matches exactly 47 eq. (19) of [2], accounting for the doubling and quadrupling of the dilaton and von Neumann entropies here (since eq. (19) of [2] considers only one side of the TFD and they work with an end-of-the-world brane with bulk spacetime only to one side). 47 In fact, the match between the first line of eq. (6.47) and (19) in [2] is exact even after keeping all terms collected in their "constant". This can be checked by keeping all constant terms in the von Neumann entropy calculation, described in eq. (6.46), as well as the topological dilaton contribution φ_0. The terms of higher order in δ̄/ℓ_JT are the corrections due to the UV cutoff, inherited from the von Neumann entropy in eq. (6.43). To find the location (y⁺ − y⁻)/2 = a of the QES, the RT prescription instructs us to extremize the generalized entropy given in eq. (6.47). Symmetry has already allowed us to restrict the QES to the same slice of Rindler time τ ∝ t = (y⁺ + y⁻)/2 as the anchoring point on the asymptotic boundary. It thus remains only to extremize eq. (6.47) in the spatial direction. Setting the derivative of eq. (6.47) with respect to (y⁺ − y⁻)/2 = a to zero, we obtain the extremization condition (6.48). At leading order in δ̄/ℓ_JT, this matches eq. (20) in [2], accounting for the fact that we have two copies of the CFT versus a single copy in [2]. This equation can be solved for the QES position a in terms of the belt width b numerically, or analytically with an additional expansion in the regime φ_r/(cβ) ≫ 1, giving eq. (6.49), which matches eq. (21) in [2] at leading order in δ̄/ℓ_JT, again accounting for the doubling of the CFT. We see that the leading order correction due to finite δ̄/ℓ_JT is to push the QES further from the bifurcation point at (y⁺ − y⁻)/2 = +∞. Having found the location of the QES, we may re-evaluate the generalized entropy of the island phase by substituting eq. (6.49) into eq. (6.47), obtaining eq. (6.50). (We have also dropped terms of order c²βδ̄²/(φ_r ℓ²_JT), as these are inherently smaller than either the cδ̄⁴/ℓ⁴_JT or c³β²/φ²_r corrections.) The first line simply evaluates the generalized entropy, given in eq. (6.47), at the bifurcation surface, i.e., taking a → +∞. In particular, we recognize the first term as giving the Bekenstein-Hawking result for the coarse-grained entropy of two black holes, eq. (6.51). This classical contribution dominates eq. (6.50) in the limit S_BH ≫ c and corresponds to eq. (30) in [2]. The other terms on the first line of eq. (6.50) evaluate the von Neumann entropy, given in eq. (6.43), after re-absorbing the UV divergence on the brane into φ̄_0. Specifically, the second term gives the UV contribution from the entangling surface on the asymptotic boundary (also appearing in the no-island phase in eq. (6.32)), and the third and fourth terms give finite contributions to the renormalized entropy, including a δ̄²/ℓ²_JT correction. Moving to the second line in eq. (6.50), we have a correction due to the displacement of the QES location a from the bifurcation point. Here, the dilaton and von Neumann components of the generalized entropy both receive contributions at order (φ_r/β)·(c²β²/φ²_r) ∼ c²β/φ_r. Note that there are no dilaton corrections at orders (φ_r/β)·(cβ/φ_r) and (φ_r/β)·(cβδ̄²/(φ_r ℓ²_JT)), because the bifurcation point extremizes the dilaton profile. 48
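Since eqs. (6.47)-(6.49) are not reproduced here, the extremization step can still be illustrated schematically. The sketch below uses the dilaton profile quoted in footnote 48, φ̄ = φ̄_0 + (2πφ_r/β)√(1 + ε²), together with a placeholder matter entropy, and finds the QES position numerically; the mapping between the QES coordinate a and ρ, and the functional form of the matter term, are assumptions made purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

beta, phi_r, phi0_bar, c, b = 1.0, 50.0, 10.0, 1.0, 1.0

def dilaton(a):
    """Dilaton at the QES, using the profile of footnote 48 with
    rho = coth(2*pi*a/beta); this coordinate map is an assumption."""
    rho = 1.0 / np.tanh(2.0 * np.pi * a / beta)
    return phi0_bar + (2.0 * np.pi * phi_r / beta) * rho

def S_matter(a):
    """Placeholder for the von Neumann entropy of eq. (6.43) as a function
    of the QES position a (illustrative log of a chordal-type distance)."""
    return (c / 3.0) * np.log(np.cosh(2.0 * np.pi * (a + b) / beta))

def S_gen(a):
    # two QES points (one on each side of the TFD), hence the factor of 2
    return 2.0 * dilaton(a) + S_matter(a)

res = minimize_scalar(S_gen, bounds=(0.05 * beta, 20.0 * beta), method="bounded")
print("QES position a =", res.x, " S_gen =", res.fun)
```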
48 It is helpful to consider the coordinate ε = √(ρ² − 1), in terms of which eq. (6.20) reads φ̄ = φ̄_0 + (2πφ_r/β)√(1 + ε²), and the brane metric, ds²_AdS2 = ℓ²_JT [−ε² dτ² + dε²/(ε² + 1)], near the horizon ε = 0 resembles the standard flat metric −ε² dτ² + dε² in polar coordinates, with ε the usual radial coordinate. The dilaton and the von Neumann entropy in eq. (6.47) are even in ε and are therefore extremized at the bifurcation point ε = 0. The order δ̄²/ℓ²_JT correction in the QES location given in eq. (6.49) is not visible at the order shown in eq. (6.50). Page curve Collecting together the results of the previous subsection, we have two phases. At early times, we have the no-island phase, with generalized entropy given by eq. (6.32). Over time, this entropy grows at a rate proportional to the temperature 1/β and the number c of matter degrees of freedom participating in Hawking radiation, as emphasized in eq. (6.38). This growth, however, is capped off by an island phase, where quantum extremal surfaces on the brane just outside the black hole horizon surround an island containing a portion of the black hole interior, now belonging to the entanglement wedge of the bath. In this latter phase, the generalized entropy is given by the constant value written in eq. (6.50), which is dominated by double the Bekenstein-Hawking black hole entropy, as given in eq. (6.51). Viewing eq. (6.51) as the coarse-grained entropy for the two sides of the black hole, this is precisely the expected maximal entropy of the system. To find the Page time τ_P = 2πt_P/β marking the transition between the two phases, we equate the corresponding generalized entropies given in eqs. (6.38) and (6.50). Overall, we recover a Page curve, with entropy growing linearly in a no-island phase up to the Page time, and saturating to a constant maximal value in an island phase after the Page time. In figure 24, we plot the Page curve after subtracting off the initial entropy (which includes the UV divergences from the asymptotic boundary). Discussion In this paper, we applied the framework introduced in [1], which uses Randall-Sundrum plus DGP gravity, to extend the discussion of quantum extremal islands in [2] to higher dimensional black holes. As reviewed in section 2, this setup precisely realizes the three different perspectives of the holographic system described in [15]. From the boundary perspective, the system is described in terms of the d-dimensional boundary CFT coupled to a conformal defect. The usual holographic dictionary then yields the bulk perspective, where the dual description is Einstein gravity in a (d + 1)-dimensional AdS bulk spacetime bi-partitioned by a d-dimensional brane. The brane perspective is an intermediate characterization of this system given by the d-dimensional effective theory induced by the bulk theory on the asymptotic boundary and the brane. That is, in this description, the boundary CFT spans the asymptotic boundary, which is non-gravitational, and the brane, which supports a gravitational theory by the usual Randall-Sundrum mechanism. We have considered the vacuum state of the system with respect to global time, which simplifies the bulk geometry to be pure AdS. However, as discussed in sections 3 and 4, by viewing this setup in AdS-Rindler coordinates, the global vacuum can be re-interpreted in terms of a massless hyperbolic black hole. This induces a similar description of the brane geometry as a black hole of one lower dimension. The 'two' asymptotic boundaries then play the role of bath CFTs in equilibrium with the black hole on the brane at a finite temperature T = 1/(2πR).
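Putting the two phases together gives the Page curve numerically; a sketch (reusing the no-island growth above and treating the island-phase plateau as the constant double Bekenstein-Hawking entropy of eq. (6.51), with subleading constants dropped):

```python
import numpy as np

c, beta, delta = 12.0, 1.0, 1e-3
S_BH = 200.0  # Bekenstein-Hawking entropy of one black hole (illustrative value)

def S_no_island(t):
    return (2.0 * c / 3.0) * np.log((beta / (np.pi * delta)) * np.cosh(2.0 * np.pi * t / beta))

S_island = 2.0 * S_BH  # plateau dominated by double the BH entropy, eq. (6.51)

ts = np.linspace(0.0, 60.0, 2001)
S_page = np.minimum(S_no_island(ts), S_island)  # RT formula picks the minimum

# Page time: where the linear growth 4*pi*c*t/(3*beta) meets the plateau
t_P = ts[np.argmax(S_no_island(ts) > S_island)]
print("Page time t_P ~", t_P, "(cf. the late-time estimate",
      3.0 * beta * S_island / (4.0 * np.pi * c), ")")
```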
Similarly, as explained in section 5, viewing our setup in Poincaré coordinates, we have an extremal horizon in the bulk and on the brane. The latter was coupled to a (single) T = 0 bath CFT on the asymptotic boundary. While islands have been studied numerically before in [30], our approach provides a relatively simple setting in which analytic calculations are possible. In particular, the doubly-holographic nature of our model reduces the entropy calculations involving islands in the presence of massless hyperbolic or extremal black holes of arbitrary dimension to holographic entanglement entropy calculations in (locally) pure AdS in one dimension higher. From the d-dimensional brane perspective, when computing the entropy of a boundary region R in the island phase, a quantum extremal surface σ_R marks the boundary of an island on the brane stretching to the horizon; this island belongs to the entanglement wedge of the bath region R. From the bulk perspective, the RT surface of R runs into the bulk from its anchoring surface Σ_CFT = ∂R and intersects the brane at σ_R. As noted in [30], the entanglement wedge of σ_R stretches through the bulk and is manifestly connected to the island on the brane in this higher-dimensional picture, despite the apparent disconnection in the effective d-dimensional theory. To determine the RT surface in an island phase, we must not only extremize the area functional locally within the bulk, but also extremize with respect to the intersection of the RT surface and the brane. Since the deep bulk (IR) and near-brane (UV) contributions to the RT area (further modified by DGP contributions) can be interpreted, respectively, as renormalized von Neumann and gravitational Wald-Dong entropies [1], this bulk calculation is equivalent to the island prescription of extremizing the generalized entropy over candidate quantum extremal surfaces. The most striking difference between our holographic construction and the two-dimensional model of [2] is that, as detailed in section 6, JT gravity does not appear automatically but has to be added by hand to the brane theory for d = 2, in analogy to the DGP terms in higher dimensions. This may be contrasted with the induced gravity on the branes in higher dimensions, where adding a DGP term provides finer control over the model but is not strictly necessary for interpreting the brane perspective as an effective CFT coupled to gravity. Having added JT gravity as a DGP term, we showed in section 6 that applying the RT formula in the AdS_3 bulk and including the DGP entropy, as in the d = 2 analogue of eq. (1.2), correctly reproduces the results of [2] at leading order in an expansion in terms of small brane angles, i.e., θ_B ≪ 1. A finite θ_B imposes a finite UV cutoff in the effective brane theory, as shown in eq. (6.44), and therefore subleading corrections to the entropy formulas appear in the island phase - see eq. (6.50). Of course, with a finite UV cutoff, we would not, for instance, expect the holographic entropy to precisely satisfy the CFT transformation rules of the entanglement entropy used by [2] in deriving their results [1]. These corrections have the effect of pushing the QES slightly further from the horizon, lowering the entropy of the island phase, and shifting the Page transition to an earlier time. As discussed extensively in [1], our braneworld construction clarifies further conceptual puzzles that appeared in early discussions of quantum extremal islands in a holographic framework, e.g., [2,15,113].
One particularly confusing feature of the island rule is the (implicit) appearance of the entanglement entropy of the QFT degrees of freedom in the region R on both sides of eq. (1.1). Our model puts the explanation of this fact given in [2] on solid footing. The entanglement entropy on the left-hand side of eq. (1.1) computes the full entanglement entropy in the UV-complete picture (the boundary perspective), while the entropy on the right-hand side is to be interpreted in an effective, semiclassical theory (our brane perspective). In particular, as noted in section 2, the interpretation of the brane perspective as d-dimensional Randall-Sundrum gravity coupled to a CFT only holds for the low energy physics at scales longer than the short distance cutoff δ̄ ∼ L. At shorter distance scales, gravity is no longer localized to the brane. In contrast, the boundary perspective or the bulk perspective gives a complete description of the quantum state. 49 Non-extremal black holes in higher dimensions As noted above, in section 3, we considered AdS-Rindler coordinates in the bulk, providing a description of the pure AdS spacetime as a two-sided massless non-extremal black hole. A similar black hole geometry is induced on the brane, coupled to and in equilibrium with bath regions on the asymptotic boundary in both Rindler wedges. We considered the entropy of bath regions R complementary to belts centered around the defects in the two Rindler wedges. This setup, from the perspective of the effective theory on the brane and asymptotic baths, is analogous to the two-dimensional setup at finite temperature considered in [2]. We find, in particular, that the information paradox for eternal black holes and its resolution studied in [2] make an expected re-appearance in higher dimensions, as reviewed in section 3.6. Again, this information paradox is resolved by the appearance of a quantum extremal island when a second quantum extremal surface minimizes the generalized entropy in the island rule (1.1). Our holographic construction translates this competition between quantum extremal surfaces to the usual competition between different possible RT surfaces in the holographic formula (1.2). In particular, at late times, the minimal RT entropy is provided by a second extremal surface with components which cross the brane, as illustrated in figure 2. From the brane perspective, the intersection of this RT surface with the brane becomes the quantum extremal surface bounding the island in the black hole background. The island belongs to the entanglement wedge of the bath region R. Without the appearance of islands, the entropy of bath subregions would grow ad infinitum. With the islands, however, the ever-growing entropy of the no-island phase is eventually capped off by the constant, finite entropy of the island phase at late times. Further, our higher-dimensional discussion provides a simple explanation for the saturation of entropy: the connected pieces of the RT surface in the island phase are isolated to individual Rindler wedges and are thus invariant under time translation (i.e., forward boosts in both wedges). Recall that the global state is pure, i.e., from the boundary perspective, it is a thermofield double state of two copies of the boundary CFT plus conformal defect. Hence the entropy of R is identical to that of its complement R̄. This gives a useful alternative view of the evolution of the entropy. The region R̄ consists of a belt region centered on the conformal defect in the two bath regions.
Hence, from this point of view, we are considering the entanglement entropy of two isolated boundary regions A and B on either side of the corresponding eternal black hole in the bulk. This is essentially the same system studied in [94], except that the spatial sections of the bath geometry are hyperbolic in the present case. As in [94], the entropy grows at early times but then quickly thermalizes. In this case, the growth of the entropy stops because it is bounded by subadditivity, i.e., S(A ∪ B) ≤ S(A) + S(B). In fact, for the holographic system, the late time entropy saturates this inequality, which means that the mutual information between the two boundary subregions vanishes. The primary difference between the framework studied in [94] and our setup is the addition of a backreacting brane, which creates extra spacetime geometry for the RT surfaces to traverse in this late-time island phase and so delays the onset of the phase where the entropy is saturated. From the boundary perspective, this longer thermalization time relative to [94] can be understood as a consequence of the large number of degrees of freedom introduced by the conformal defect. Further, as in [2], we find that the island extends outside the event horizon, i.e., the quantum extremal surfaces appear outside of the horizon. If we focus on the entropy of R̄ as above, this feature again has a simple explanation in our holographic setup, in terms of entanglement wedge nesting. Recall that in the island phase, the individual components of the RT surface yield the entropy of the individual belt regions on the boundary of either Rindler wedge. Since these belts are subregions of the full hyperbolic slice on which the corresponding CFT resides, the RT surface must remain within the corresponding Rindler wedge. That is, the bifurcation surface of the Rindler horizon in the bulk is the RT surface corresponding to either of the copies of the CFT in the TFD state [93], and the Rindler wedge is the corresponding entanglement wedge. Hence, by entanglement wedge nesting [116,117], the RT surface and entanglement wedge for any subregion of H^{d−1} on the boundary must lie within the corresponding Rindler wedge. Finally, it was straightforward to see from eq. (3.11) that the horizon on the brane is precisely the intersection of the Rindler horizon in the bulk with the brane. Hence the quantum extremal surface on the brane, i.e., the intersection of the RT surface with the brane, must lie outside of the black hole horizon. This also means that if we consider regions R far away from the defect, the RT surface will pass close to the horizon. Thus, analogously to the situation discussed in [2], information about the horizon seems to be contained in the entanglement of CFT regions of the bath which are furthest from the black hole. Extremal black holes in higher dimensions In section 5, by taking a Poincaré patch of the bulk, we considered an extremal black hole on the brane coupled to a (single) bath CFT in a flat background. As in [2], we calculated the entanglement entropy for a bath region R corresponding to points at distance greater than b from the conformal defect. In the case of extremal black holes, we did not find a transition as the system was time-evolved, but instead found that the appearance of an island is linked to the choice of brane angle θ_B (or brane tension) and the DGP coupling.
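For concreteness, the saturation statement can be phrased via the mutual information (standard definition; the identification of the late-time RT surfaces is as described above):

```latex
% Mutual information between the two belt regions A and B
I(A:B) \;=\; S(A) + S(B) - S(A \cup B) \;\geq\; 0
```

At late times, the holographic entropy saturates subadditivity, S(A ∪ B) = S(A) + S(B), so I(A:B) = 0: the RT surface for A ∪ B degenerates into the union of the surfaces for A and B separately, and all mutual information between the belts is erased.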
Due to the scale invariance of the Poincaré coordinates, it is clear that as we push the entangling surface out in the bath region, i.e., increase b, we proportionately reduce the size of the island. Again, this behaviour reproduces the intuition suggested in [2] that the region near the extremal horizon, deep in the gravitating region (our brane), can be contained within the far-away portion of the bath. Actually, our higher-dimensional picture shows that these regions are not far from each other at all - they are both close to the spatial infinity of the Poincaré patch, which corresponds to a single point in the global frame. In the other extreme, b → 0, we find that regions of the brane arbitrarily close to the asymptotic boundary can be recovered by portions of the bath sufficiently close to the defect. This is in contrast to the two-dimensional JT model, where a maximum island size exists. Interestingly, a further qualitative deviation from the two-dimensional case is seen at small brane angles θ_B. Recall that, in the two-dimensional JT model, the island phase is always dominant for belt geometries in the extremal case [2]. In contrast, we have found in d ≥ 3 that islands cease to exist for θ_B below some critical θ_c > 0. As θ_B approaches θ_c from above, the quantum extremal surface of the island phase runs off to infinity (i.e., towards the extremal horizon). For θ_B < θ_c, no quantum extremal surface exists on the brane, and the bulk RT surface is simply given by two planes on either side of the brane running straight into the bulk. Since the area of these latter surfaces is IR finite in d > 2, their candidacy as RT surfaces must be considered even when the alternative island-phase surfaces exist. In fact, we find that θ_c is precisely the angle at which the entropies of the no-island-type and island-type surfaces match - above this angle, the island-type surfaces remain favourable as RT surfaces. The relevance of small θ_B (and in particular θ_B < θ_c) is that in this limit, the effective theory on the brane is described by Einstein gravity with small higher curvature corrections, which is the most interesting parameter regime. While the lack of islands for θ_B < θ_c is strikingly different from the two-dimensional case, we remark that, in the extremal case, islands are not required from an information-theoretic standpoint, and their absence should perhaps not be terribly surprising. This is to be contrasted with the non-extremal case, where islands are necessary, at all brane angles, to tame the otherwise unbounded growth of black hole entropy at late times and avoid the information paradox. Of course, an interesting question would be to examine how varying the geometry of the entangling surface affects the appearance of quantum extremal islands at T = 0. For example, rather than belt geometries, one might consider spherical regions bisected by the conformal defect. Not an ensemble In order to derive the island formula, a crucial ingredient was the appearance of wormholes in the replica trick. In the two-dimensional models involving JT gravity studied so far [39,41], the existence of wormholes follows from the fact that JT gravity is defined by averaging over an ensemble of Hamiltonians. For example, JT gravity emerges as the low energy effective description of the SYK model [118-121], or has a definition in terms of a matrix model [122].
In contrast, our construction relies only on the standard holographic rules of the AdS/CFT correspondence, where there is no such averaging of the couplings in the boundary theory. This is in line with the general expectations for higher dimensional holography. This lack of averaging characterizes the UV-complete description of the system, i.e., the boundary perspective. Nonetheless, quantum extremal islands appear in the effective description of the brane perspective, and once again one would like to understand them as remnants of replica wormholes in the limit n → 1 [1]. One might then wonder why - despite the absence of ensemble averaging - replica wormholes should appear and connect the gravitating regions in different copies of replica trick calculations. In fact, this is not a problem, since the different effective gravity theories in the brane picture are UV-completed by a single theory of gravity in the bulk, and so it is natural to consider geometries connecting the branes, i.e., replica wormholes in the effective theory. Indeed, considering Renyi entropy calculations in the boundary theory, one sees that the corresponding bulk geometry induces connections between the different copies of the brane theories, i.e., replica wormholes on the brane [1]. This becomes particularly clear in our setup, where the brane lives in the bulk and does not serve as a boundary of spacetime. We emphasize that this discussion implicitly relies on the standard derivation of the RT prescription for holographic entanglement entropy [123,124] in the bulk perspective, where again we assume that there is no ensemble averaging. 50 Following [52], one might be tempted to turn the logic around and, given the appearance of wormholes in the brane description of our model, conclude that there is some form of ensemble averaging in the dual boundary theory. However, this line of argument implicitly assumes a precise equivalence between the boundary theory and the 'bulk' gravity theory (containing wormholes). We stress that this equivalence does not hold in our construction. Rather, the gravitational theory on the brane is an effective theory, and so the arguments of [52] do not extend to this situation. Instead, in our situation replica wormholes appear, but wormholes connecting independent instances of the boundary theory do not play a role. For example, this implies that higher powers of the partition function of the boundary CFT with a conformal defect will still factorize. Nonetheless, this issue is certainly worth further examination, since in two dimensions, replica wormholes have now been shown to play an important role in a variety of situations, e.g., calculations of Renyi entropies [41,50], the spectral form factor [122,125], correlation functions [90,126], and the overlap of black hole microstate wavefunctions [41,50]. Apart from Renyi entropies, it is not clear how to reproduce these effects in our construction, or in higher dimensions more generally. Furthermore, it was suggested in [41,127] that in non-averaged theories, wormholes might appear as a result of some diagonal approximation. To obtain a full quantum gravitational answer, additional off-diagonal terms need to be added. Given that we have a system where wormholes appear in an approximate formulation, while at the same time having some control over a UV complete description, one might hope that studying our system will give an idea of how this suggestion might be realized.
Future directions Having produced a setup in which quantum extremal islands can be studied with relative ease, some possible avenues of further investigation were suggested above, but a number of other possible extensions of the present work also come to mind. For example, one may consider information-theoretic questions similar to those raised in [2]. There, the authors investigated whether a protocol can be implemented to retrieve information from the island. In particular, the entanglement wedge of the complete left system plus an interval of the right bath contains an island that naively appears causally disconnected from the left system and the right bath interval. However, by acting with operators in the left and right baths, it was argued that sufficient negative null energy can be generated to pull information from this region into the left exterior, to be picked up by the left defect and bath. One could try to reproduce this protocol in our higher-dimensional setup using insertions of operators on the left and right asymptotic boundaries. The negative null energy produced would then shift the bulk horizon and hence the induced horizon on the brane. Recall that above, we described how the appearance of the quantum extremal surfaces outside of the horizon in the present discussion is a simple result of the nesting of entanglement wedges from the bulk perspective. However, another question raised by [2] is whether this protrusion of islands outside the horizon violates causality. In particular, the portion of the island outside the horizon appears to be causally connected to the defects. Naively, this appears to allow communication between the baths and defects even if the coupling between these systems is severed. The resolution of this paradox comes from noting that a splitting quench between the defect and bath systems would inevitably create a positive energy shock causing an outward shift of the horizon. It was argued in [2], using a JT version of the quantum focusing conjecture [128,129], that this shift would have the final event horizon swallow the island, preventing post-quench communication between the bath and defect. It would be interesting to re-create this problem in our setup to probe the quantum focusing conjecture in higher dimensions. From the bulk perspective, a splitting quench would be implemented by a bulk end-of-the-world brane anchored asymptotically on the splitting surface [130]. In d = 2, the splitting surface on the asymptotic boundary can be obtained by a conformal transformation from a full plane; in the bulk, the end-of-the-world brane can similarly be obtained by a diffeomorphism from a planar brane in pure AdS. In d ≥ 3, however, the calculations will become more complicated, e.g., the end-of-the-world brane will, in general, backreact on the geometry such that the bulk is no longer locally pure AdS. Returning to the issue of extracting information from the island, entanglement wedge reconstruction [24,131-137] allows us to recover information about the island with data from the boundary CFT in the corresponding boundary subregion. One interesting question would be to evaluate the expectation value of various CFT operators in the island, e.g., reconstructing ⟨T_ij⟩ in the vicinity of the horizon. 51 The latter is particularly interesting because, while the appearance of quantum extremal islands pointed out a new resolution of the information paradox, this does not directly address the issue of firewalls [138,139].
Here, the question of whether the black hole horizon develops a firewall in the late-time phase of the Page curve can be addressed by evaluating ⟨T_ij⟩ on the horizon. While a direct boundary reconstruction of the latter remains to be done, we are confident that no singularities arise in our framework. The reason is that, in the bulk, the system is in the vacuum state and we are simply examining this state from a Rindler frame of reference. Hence, in fact, we expect that ⟨T_ij⟩ = 0 on the horizon and throughout the black hole solution on the brane. 52 This is related to the fact that in the present paper, for the sake of simplicity, we have chosen to work with a bulk that is pure AdS, i.e., the temperature was tuned to T = 1/(2πR). The Rindler horizon in this geometry consequently corresponds to a massless hyperbolic black hole. An obvious extension would then be to consider massive black holes. 51 We thank Ahmed Almheiri for raising this question. 52 The vanishing of the stress tensor on the brane is an essential feature of our construction, as the AdS_d brane geometry must be a solution of the corresponding gravitational equations. That is, the CFT on the brane cannot provide a source in these equations (at least to leading order at large c_T); otherwise, the geometry would deviate from AdS space. Recall that while the brane CFT is in its vacuum state, the bath CFT is coupled to the brane along an accelerating trajectory - see the discussion under eq. (3.11). This acceleration allows the bath CFT to achieve equilibrium at a finite temperature. Again, calculations will be made difficult by the fact that the brane and bulk equations of motion must be solved simultaneously, with the former back-reacting on the latter. In particular, the equilibrium configuration will now involve excitations of the CFT on the brane, i.e., the effective Einstein equations on the brane will be sourced by the stress tensor of the boundary CFT residing there. Yet another direction would be to consider our setup from the perspective of tensor networks and error correction codes [135,140-142]. For instance, as noted in [94], the MERA-like tensor network constructing the time-evolved CFT thermofield double state on the asymptotic boundary shares a similar geometry with codimension-one bulk spatial slices stretching through the bulk wormhole. One might then be motivated, as in [141], to view these spatial slices as supporting tensor networks implementing quantum error correction codes between the bulk and boundary. It would be interesting to see what such a network would tell us about the effective theory (see, e.g., [143,144]) on the brane and how information on the brane is ultimately encoded in the asymptotic CFT and defect theory. On a related note, one might also study the complexity of these brane configurations, for example, using the higher-dimensional bulk to probe holographic complexity conjectures [145-148]; see, e.g., [149]. Above, we emphasized the effective character of the gravitational theory on the brane with the appearance of a short distance cutoff in Randall-Sundrum gravity. However, as discussed in [1], the brane perspective also provides an effective description of the coupling of the bath CFT to the conformal defect. In particular, it only accounts for the couplings localized at the defect, which dominate at low energies, but ignores the subtle 'nonlocal' couplings, which can be seen as coming through the AdS_{d+1} geometry in the bulk description.
Given the simplicity of our construction, it may provide a useful framework in which to further understand these nonlocal couplings, which implicitly provide subtle correlations between the island degrees of freedom and those in the bath CFT [1,150]. Lastly, in order to explain the fast growth of entanglement at early times for large regions, in section 4.2 we computed bounds on entanglement growth in hyperbolic space. While they display the expected qualitative behaviour, they are not particularly tight. Indeed, the difference between the bounds and the numerical data grows as χ_Σ increases. It would be interesting to improve these bounds.
Strategies for Synergies. Working in Interdisciplinary Teams: This paper presents the approach and results of our workshop "Strategies for Synergies", held at the 12th EAD Conference. The main aim was to develop innovative ideas to establish interdisciplinary working in the education of designers and researchers. In this context, we wanted to elaborate which competences are necessary to work in interdisciplinary teams and how these competences could be integrated into design and research curricula. Our results show that empathy, emotional intelligence, and communication skills are identified as key capabilities for the success of interdisciplinary work. Educational formats fostering those competences are focused on practice-oriented experiential learning, to prepare students with first-hand experiences. Introduction Interdisciplinarity has attained a heightened significance in research and research policy (Metzger & Zare 1999; Blackwell et al. 2009): processes of boundary-crossing, mutual learning and co-creation are supported and funded to tackle grand socio-technical challenges or specific problems that go beyond the boundaries of one discipline. In these contexts, interdisciplinarity and the integration of different kinds of knowledge are often considered key drivers for innovation (Blackwell et al. 2009). In recent years, the design research community has increasingly discussed this nexus of interdisciplinarity and innovation. In this context, many researchers emphasize designers' ability to facilitate, and argue that the integration of designers and design methods fosters processes of knowledge production and innovation (e.g. Sanders & Stappers 2008; Christensen & Junginger 2014; Sanders & Stappers 2014; Heidingsfelder et al. 2015). However, when it comes to building and managing interdisciplinary teams, methodological challenges arise: people with different educational backgrounds and professional experiences in one discipline have learned specific terminologies and language styles (e.g. Wear 1999; Bracken & Oughton 2006); their needs and requirements deviate (e.g. Brewer 1999); and, moreover, they often try to achieve different objectives within joint projects (Blackwell 2009: 4). These differences often result in communication barriers and unbalanced power structures (Stember 1991; Pellmar & Eisenberg 2000). Against this background, our workshop primarily aimed at increasing awareness with regard to the opportunities and challenges of interdisciplinary work. Secondly, it aimed at initiating an exchange of ideas, perspectives and strategies. For this purpose, we invited researchers and practitioners from design education and design research to discuss the opportunities and risks mentioned above, and to develop strategies for interdisciplinary projects. In the second phase of the workshop, we aimed at developing innovative ideas to establish interdisciplinary collaboration in the education of designers and researchers: we wanted to discuss necessary competences for interdisciplinary working with researchers and practitioners, and to develop strategies to integrate these competences into design and research curricula. Beyond that, the workshop provided a space for exchanging experiences and practical advice.
Workshop Setup The 60-minute workshop had two parts: a theoretical input and a hands-on session in which participants worked on new education concepts. First, we compared typical research and design processes, identifying similarities and differences. Based on examples from our work in an interdisciplinary research team, we showed strategies and challenges for interdisciplinary cooperation. In the following hands-on session, we aimed at answering the following questions: "What can researchers and designers learn from each other?"; and "How can interdisciplinary synergies be addressed and trained in education?" As a warm-up and introduction, we asked the participants to locate themselves on a scale between research and design, to visualize their professional backgrounds and to develop a common understanding of both terms. We expected most of the participants to be designers, but the majority positioned themselves in the field of research: most of them were trained as professional designers and changed their focus to design research later on in their careers. In the following session, participants worked at three different stations that represented the exploration, development and dissemination phases of research and design processes. Within this session, each participant could bring in his or her expertise at each station. Considering that design methods can be used to facilitate exchange between different disciplines by providing effective tools for interdisciplinary, solution-oriented collaboration (Lindberg et al., 2016; Sanders, 2014), we deliberately used methods from design in every round. For this purpose, we prepared different materials and templates to support the interdisciplinary work during the co-designing process. Round 1: Competences In the first round, we focused on defining the competences of the different disciplines and their impact on research and/or design processes. Questions were: "Which competences are important for the exploration, development and dissemination phases?"; and "What can designers and researchers contribute to a research and/or design process?" To support the participants in exploring necessary competences, we used simple illustrations that present relations and power structures between two (or more) entities. These illustrations are characterized by two qualities: on the one hand, they are simple and easy to understand; on the other hand, they are ambiguous and open to different perspectives. Combining those apparently opposite properties, they function as boundary objects: they are plastic enough to adapt to different contexts, yet robust enough to maintain a common identity (e.g. Star & Griesemer 1989; Ewenstein & Whyte 2009; Klerkx et al. 2012). We asked participants to choose one illustration and to connect it with a competence or some typical habit of a researcher or designer. The illustrations and the descriptions of the competences were placed on a template. Using simple and "ready-made" illustrations helped participants to articulate their thoughts and to generate new ideas. However, their ambiguity hindered some of the participants. Based on this learning, we will adjust this method for future workshops. Round 2: Education Formats In the second round, participants were asked to develop educational formats to foster interdisciplinary approaches and methodologies. The leading question was: "How can we make sure that future designers and researchers get to know and value each other's competences?"
Based on the discussions of the previous round, the participants created educational formats and documented them on templates, as visualized in picture 2. Most of the developed formats aim at design education, but some also address formats for the cooperation of professional designers and researchers. As empathy, emotional intelligence and communication skills were identified as key skills, most formats aim at fostering either one or all of these competences. To prepare design students for working in interdisciplinary teams, participants proposed formats such as tandem projects with other disciplines, work shadowing or group work with distributed roles. One participant also proposed integrating studies of other disciplines into design education. Another suggestion was intended to foster university-industry projects in order to prepare design students for working in interdisciplinary teams within real-life situations. To enable and enhance professional designers' and researchers' capacities for interdisciplinary cooperation, participants also suggested working in tandems. Additionally, they developed a "break out workshop" for researchers: This workshop should make use of speculative design to make researchers think about future scenarios and "outside the box". Round 3: Future Scenarios In the last round there was time for discussion and evaluation of the produced ideas and concepts. All participants presented their ideas for new educational formats. To conclude, we asked participants to write down their visions for interdisciplinary working in the year 2025, whether for designers, for researchers or for both of them. The developed scenarios illustrate a great interest in breaking and blurring the borders of disciplines, as illustrated in these statements: In 2025, the collaboration between design and research will look like an equal relationship where both are purpose-driven, rather than "discipline-driven." In 2025, all designers will learn to teach others how to design. In 2025, all researchers will use design to innovate their methodologies. Outlook Reflecting on the workshop and its results, it provided a good format to initiate a discussion on the opportunities and challenges of interdisciplinary work. Secondly, it opened up a space to create ideas and strategies for a cross-disciplinary education of researchers and designers. In order to explore the topic of interdisciplinary teamwork and to enlarge our findings, our aim is to repeat the workshop at further design conferences. Since the participants had similar backgrounds and experience in interdisciplinary work, it would be valuable to widen the scope in future workshops. Thus, our aim is to include participants who position themselves clearly on the research or the design side of our scale, for instance artists and natural scientists or engineers who have no previous experience in working with either designers or researchers. Furthermore, lecturers and people in charge of creating curricula could add useful perspectives and insights to explore impeding institutional barriers. Future results will be made accessible through further publications and on our website.
1,919.4
2017-07-28T00:00:00.000
[ "Education", "Engineering" ]
Root Cause Analysis in Post Project Phases as Application of Knowledge Management: This paper is focused on the root cause analysis of post project phases. The research has been linked to the identification of the 21 most common reasons for not executing post project phases. The main aim of this paper is to identify the root causes of not executing selected post project phases. The empirical research was performed as qualitative research employing the observation and inquiry methods in the form of a controlled semi-structured interview. The research was realised in the Czech Republic in 2017 and 2018. The key practices for ensuring a functional, effective and systematic post project process are based on the principles of knowledge management. The identified causes were used as inputs for the proposed measures with the aim to make the post project process more effective. The main contribution of the paper is the overview of techniques that may be recommended for post project analysis. These techniques are demonstrated in detail on particular examples of the analysis of the most common reasons for failure to implement post project phases. The described examples demonstrate the procedure to be followed in order to identify the root cause of the analysed phenomenon. At the same time, the paper also describes proposals of recommended measures that should minimize the root causes resulting in negative outcomes. The paper explicitly emphasizes and shows the connection between knowledge management and post project phase effectiveness. Introduction The Agenda for Sustainable Development, adopted by the United Nations General Assembly in 2015, introduced a new coherent way of thinking that treats the 17 Sustainable Development Goals (SDGs) as an "indivisible whole." These ambitious goals can be achieved, among other things, especially by using knowledge management and project management principles [1,2]. In their article, Nilsson, Griggs and Visbeck point out that the creators of global sustainable development projects lack tools to determine which goal interactions are the most significant with regard to the solutions of current problems [3,4]. The authors of this paper believe that, apart from the information in the SDG material of the International Council for Science, high-quality execution of the post project phases of implemented sustainability projects may be very helpful. The paper elaborates on previous research of the authors focused on the identification of critical success factors of sustainability projects. In particular, it builds on the 21 reasons for failure of post project phase implementation in project practice identified earlier. The main objective of the paper is to identify the most significant root causes of selected reasons for post project phase implementation failure by means of Root Cause Analysis (RCA) and to propose measures and recommendations in order to increase the efficiency of post project stage execution, directly based on the results of the conducted research [5]. The issue of sustainability is not often addressed from the point of view of project management in the Czech Republic (see the research of IPMA Czech Republic [6]), which is a mistake. Therefore, the authors welcome that the Sustainability journal dedicated a special issue to this topic (Applications of System Engineering Approaches in Complex Project Management) and recognize that sustainability needs to be addressed through project management.
It is very important for project teams to pay attention to the post project phases, as these, among other things, reduce the percentage of unsuccessful projects. Research shows (see Section 2) that there is still a high percentage of unsuccessful projects, including sustainability projects. It is necessary to realize that a concrete company which implements projects, for example for the development of its products, will be interested in the implementation of post project analysis. Qualified company management will demand and organize post project phases. The situation is very different for sustainability projects, which are being implemented in different locations, with different focuses and, especially, with a great variety of project stakeholders. As a consequence, the post project analyses are not carried out, funded or required. Literature Review Post project stages constitute full project management processes. However, research by renowned companies (The Standish Group, EY-Ernst & Young, etc.) focused on project success shows unsatisfactory results [7]. The aggregate results of EY in project management for the Czech Republic in 2017 present the following key conclusions. In terms of areas, the most frequently executed projects include IT modernization (54%), new product development or launch (32%), process restructuring/reorganization projects (29%) and cost reduction projects (24%). In terms of project evaluation according to the basic factors of time and costs, 41% of the projects were completed after the scheduled deadline and with an exceeded budget, while 61% of the projects exceeded the allocated budget by EUR 100,000. In general, the ratio of large projects has been increasing, which probably also means an increased risk of failure. Nearly half of the projects valued at EUR 40,000 to EUR 100,000 exceed the budget or deadline, or get cancelled. The research also states that two of the three main reasons for project failure are insufficient company top management support and insufficient management of project risks. However, both factors are fully under the control of the management, that is, they may be directly influenced by the management. The issue of project priority definition is also evaluated negatively. There are no priorities defined for about 20% of the projects, or the priority determination process is non-transparent. Moreover, the project teams focus mostly on project phases within the project life cycle; they pay less attention to pre-project phases and they mostly ignore the post project phases [8]. Due to the lack of concrete statistical studies for sustainability projects, published statistical studies for commercial projects were used at least to justify the problem. Considering that sustainability projects are very complex and problematic, it can be assumed that the results of statistical studies would be even worse than for commercial projects. Unfortunately, the research we have available does not include sustainability as an explicit category, which is a weakness of this research. In addition, there are other characteristics that should be monitored in such research. The results of the research are regularly presented and discussed at IPMA Czech Republic conferences [9]. Sustainability projects were also identified as part of these discussions.
Inspiration for the use of KM in post project phases can also be found in CMMI. The Capability Maturity Model Integration (CMMI) model [10] calls for systematic improvement of all management processes from the 3rd maturity level upwards for project management, and explicitly so at the 5th level. Correct use of knowledge management plays an important role in this improvement [11][12][13]. In particular, the analysis of cause-and-effect links needs to be mastered [14]. The CMMI model is very practical and easy to understand. There are defined characteristics for each maturity level that the processes need to meet (for more details see [10]). A brief description of the respective process maturity levels is given below:
0. Non-existent management: Processes and their management are completely chaotic.
1. Initial: Processes are executed ad hoc.
2. Repeatable: There is a certain level of discipline necessary for the execution of basic repeatable processes.
3. Defined: Processes of the organization are documented.
4. Managed: Processes are managed and their performance is measured using Key Performance Indicators (KPIs).
5. Optimized: Processes are constantly improved; there is a process and management innovation cycle.
If we consider the fact that sustainable development is one of the key issues of preserving human society on this planet, we naturally conclude that these problems need to be solved systematically and professionally. Application of project management principles seems effective and efficient [15]. In their research, Harich, Bangerter and Durlacher present the results of a seven-year-long root cause analysis for solving extensive social system problems, such as sustainability. For this type of problem, they proposed a formal solution process and identified four main sub-problems. For each sub-problem, they identified and analysed the root causes, including proposed measures to minimize them [16]. As one reaction to the increasingly frequent problem of non-compliance with project schedules (project delays), Ansah and Sorooshian proposed the 4P concept (project, practices, participants, procurement) for the identification, analysis and classification of delays in the internal environment of the project development process in the construction industry. They state that the application of the proposed 4P concept would allow identification of the key reasons for delays and their risks, and its use for project delay assessment and classification. They emphasize that the project delay issue is generally discussed separately; however, a comprehensive framework for systematic analysis of delays of completed projects is rarely applied [17]. In his research, Rongfa focused on quality management in software projects. He observed that causal analysis plays an important role in this area. He discussed the possible use of the decision tree method for the classification of causes (problems) related to software project quality. Using the Ishikawa diagram method, he designed a method for root cause classification and identification in various categories. Furthermore, he pointed out a new research area related to the integration of machine learning in the process of software project quality management in order to increase its efficiency [18]. In their research, Usman and Rendy presented the use of RCA in solving project problems and risks in the construction industry. They observed that the application of this approach has a positive impact on the project planning process in the context of Critical Chain Project Management [19].
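The staged logic of such maturity models can be made concrete in a few lines of code. The sketch below returns the highest maturity level whose characteristics, together with those of all lower levels, are satisfied; the level names follow the list above, while the criterion keys are illustrative placeholders rather than official CMMI practice areas.

```python
# A minimal sketch of staged maturity assessment in the spirit of CMMI.
# The criterion keys are illustrative placeholders, not CMMI terminology.
MATURITY_LEVELS = {
    0: ("Non-existent", set()),               # chaotic, nothing required
    1: ("Initial", {"processes_executed"}),   # ad hoc execution
    2: ("Repeatable", {"basic_discipline"}),
    3: ("Defined", {"processes_documented"}),
    4: ("Managed", {"kpis_measured"}),
    5: ("Optimized", {"continuous_improvement"}),
}

def assess_maturity(satisfied: set) -> int:
    """Return the highest level whose criteria, and all lower ones, are met."""
    level = 0
    for lvl in sorted(MATURITY_LEVELS):
        _name, criteria = MATURITY_LEVELS[lvl]
        if criteria <= satisfied:
            level = lvl
        else:
            break  # staged model: a gap at any level caps the rating
    return level

print(assess_maturity({"processes_executed", "basic_discipline"}))  # -> 2
```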
In their research, Lakshman, Kannan and Bhojraj presented the application of the RCA method in the context of the DMAIC methodology (Define, Measure, Analyse, Improve, Control) with the aim of identifying the causes of deviations between plan and reality in a spacecraft development project. Within the practical approach, they worked with three key RCA methods: the "Why-Why" technique, Cause and Effect Analysis and Pareto Analysis. These techniques were described extensively in order to promote their application in all crucial project management areas, including post project phases. The authors also state that there is a long-term and common goal in the space industry to identify the factors with negative effects on project progress. Current research in this field shows that traditional methods, such as the Critical Path Method (CPM) [20], are not sufficient for successful project management and attention must also be paid to the utilization of knowledge management for project improvement. Critical factors of project success are also related, for example, to the multi-project environment, shared project resources and so forth, and sustainability projects are executed in a global multi-project environment these days [21]. The area of knowledge management is usually described in terms of "generations." The first generation thought of knowledge as an object and was focused on the use of IT/ICT (explicit knowledge). For example, in his empirical research, Erkut focuses on the enterprise resource planning (ERP) software market. The author addressed the research question of whether the emergence of the ERP market was a necessary, strong or weak consequence of the product innovation of Systems, Applications & Products in Data Processing (SAP). The results confirmed that the emergence of the ERP software market is contingent and was not predetermined; path dependencies play a big role in the way this market segment emerged [22]. The second generation was focused on people and organisations (tacit knowledge and communities). The third generation recognised the importance of contextual contingencies and influences in aligning knowledge management strategies with business strategies. In their research, Handzic and Durmic presented a new conceptual model that merges knowledge management with project management. The model advocates the need for convergence between knowledge management and other disciplines. This is the current trend in knowledge management, called the "new fourth generation" [23]. This topic is addressed by Handzic and Bassi as well. They presented novel theoretical approaches and clear empirical evidence of the value of integrating the two distinct fields of project management and knowledge management. In their research, they help practitioners develop more suitable knowledge management solutions for successful projects [24]. Materials and Methods The methodology applied in the research is based on the fundamental principles of scientific and research work. The selected scientific and research methods are based on a predefined research goal. Principles of system methodology combining the system approach, system thinking, system disciplines and system algorithms were applied to the solution of the scientific problem. These principles were applied in the context of general management theory with a focus on post project phases.
The method of secondary analysis of relevant data was applied to analyse the current state of the solved issue. The applied method was the process of retrieving and processing secondary data, where new data and information were obtained from the original data and information. The sources for formulating the theoretical framework of the problem were articles published in scientific journals and conference papers. Other important sources were experiences from the field among project managers and project team members. During the elaboration of the empirical part, with the aim of obtaining primary data, qualitative research methods were used in the form of semi-structured interviews. This identified the 21 reasons for non-implementation of post project phases (primary data). The empirical research respondents were mostly employees of small and medium-sized enterprises from various industrial fields operating in the Czech Republic. The most common fields included engineering, construction, electrical engineering, power engineering and IT/ICT. At the same time, these respondents were attending courses focused on project management. Large enterprises are usually owned by foreign parent companies that often organize their own internal project management trainings and courses, and therefore they usually do not send their employees to external courses. The courses focused on project planning and management covered the knowledge defined for level D (the lowest level of certification) of IPMA competence [25][26][27]. The training had the form of company and public courses "open" to entrepreneurs from all over the Czech Republic. The total number of respondents was 150; 100 respondents were project team leaders (PTL) and the remaining 50 respondents held various project team member roles. The obtained data were subjected to RCA [28] in order to identify the root causes of selected reasons for failure to implement post project phases. The root cause analysis was performed by the research team of Brno University of Technology, consisting of experts in project management who have been teaching project management courses in companies and are well familiar with team research work and the described methods. The application of RCA was the main approach for solving the defined research goal. It was applied in the context of system methodology principles to identify all possible relations, where possible [29]. The system approach to RCA represented a process execution consisting of the following steps:
1) Initiation: determination of the need, purpose and scope of RCA validity.
2) Establishing Facts: establishing what, where, when and with whom it happened. Pareto Analysis was used in this stage.
3) Analysis and Validation: establishing the causes and ways of fact manifestation and their validation. The Ishikawa diagram method, the "5 Whys" method and the cause tree method were used for the root cause analysis.
4) Presentation of Results: proposal of recommended measures focused on the identified root causes.
The Pareto diagram is an important tool for managerial decision making. It allows one to penetrate the essence of phenomena, helps to separate the key factors of a problem from the less significant ones, and helps to identify the main causes of the problems. Thanks to its graphical presentation, it is a very useful tool for prioritizing corrective measures. The diagram is named after the economist and sociologist Vilfredo Pareto, who described the uneven distribution of wealth. His conclusion was that the distribution of income and wealth is asymmetric, because only a small group of people in a country controls most of the property. Based on research from different countries, the result is that approximately 80% of wealth is managed by about 20% of a country's population. This rule was put into practice by Joseph Moses Juran, who dealt with quality management and applied the idea to industry. He found that about 80% of production shutdowns are caused by 20% of the machines in a particular industrial building, and generalized this phenomenon into the statement that is nowadays called the Pareto Rule. Pareto diagrams are basically histograms that show the distribution of problems across several meaningful categories, usually ranked in decreasing order of importance. Thanks to this diagram, it is possible to identify which of the causes are the most relevant to the solved problem [28]. The Ishikawa diagram is a technique that helps to identify, analyse and present the possible causes of significant problems. This technique was invented by Kaoru Ishikawa. It graphically presents the relationship between events and all contributing factors. The technique is also called the Fishbone Diagram because of its shape [30]. The "5 Whys" method is an iterative interrogative technique used to explore the cause-and-effect relationships underlying a particular problem. The main aim of the technique is to determine the root cause of a defect or problem by repeatedly asking the question "why?". Each answer forms the basis of the next question. The number 5 in the name derives from an anecdotal observation on the number of iterations needed to resolve the problem [31]. The cause tree method is an approach that uses deductive logic and is intended to make the construction of the trees and the underlying logic easier to apply. The basic principle of the method is that an accident results from changes or variations in the normal process. The expert must identify the changes in the system, list them, organize them into a tree diagram and define their relationships [28]. Results This section presents the results of the RCA. As initiation of the analysis of the causes of post project phase implementation failure (the significant event in terms of the defined causal analysis), the identified 21 reasons for failure of post project phase implementation were first logically classified into the four generally used 4P categories: place, procedures, people, policies. The classification results are summarized in Table 1. The classification could be different, because the 4P method does not exactly define the content of the categories. The decisive criterion for inclusion in the relevant category was always the nature of the individual problem of the non-implemented post project phases. The classification provided initial predictors of possible causes in the four defined categories. The reasons listed in Table 1 are as follows:
1. Excited by the success of a completed project, the workers start to feel there is no need to analyse or improve anything.
3. Under the load of more and more new projects and everyday issues, there is no time for such analysis.
21. The current time is VUCA (Volatile, Uncertain, Complex, Ambiguous); therefore, it makes no sense to prepare for anything by analysing the past. Everything will be different, nothing can be predicted and no past experience can be used.
9. Unlike the project execution, it is often not explicitly required, so it is not done.
2. Devastated by the project failure, the project participants and all the stakeholders try to forget the project as fast as possible.
4. Since any "easy and possible" financial savings are made in the already tight project budgets, the post implementation analysis is usually one of them, so it is not even planned.
10. The project team does not want to point out mistakes they have made (why would they?), and pointing out success, on the other hand, is considered boasting.
12. The analytical teams are repeatedly comprised of incompetent staff members, so the results do not correspond to the expended resources or time and the analysis is cancelled.
5. Such a thing is considered unnecessary pondering and an obstruction to proper work.
11. Since the workers do it wrong, the results are not satisfactory, so after some time the activity is discontinued due to "inefficiency."
15. In the Czech Republic, many people consider themselves to be very smart and believe they do everything right and don't need to learn anything anymore.
6. There is a worry among the project team members that even well-intended, (self-)critical conclusions may turn against them (e.g., reduction of project remuneration).
16. There are still many people who remember a document titled "Lessons Learnt from Critical Development..." that did not bring success to its authors! (Generation-specific and Czech-specific reason.)
17. In the chaos and hurry of everyday work on the project, it simply gets forgotten.
8. The analysis was done once but the recommendations were put aside ad acta, so the whole thing inevitably seemed to have been a waste of time and considerable effort, and so nobody wants to risk needless work.
19. There is no project documentation, and sometimes there are no project participants anymore, so the question is what in particular should be responsibly analysed.
18. A lot of people often refuse to look back; they only want to look ahead. (A common attitude of many young people: there is not so much time in their past but a relatively long time in their future.)
14. Most companies lack a system of company experience accumulation, so it is not required for projects either. (It must be required by company top management.)
20. A number of project management methodology materials still do not mention these phases, as well as pre-project phases, and focus solely on immediate project management, from start to completion.
Note: The 21 identified causes were serially numbered. At first sight, the analysis shows that the majority of causes of failure to implement post project phases are related to the categories of procedures, people and policies. Thus the root causes must be related to the problems of project management (procedures), human resources management (people) and company management (policies).
Due to the relatively high number of identified root causes (see Table 1), the significant causes had to be separated from the less significant ones. Pareto Analysis was used for this purpose (see Table 2) [32]. A graphic depiction of the Pareto Analysis results is given in the Pareto diagram (see Figure 1). It is a combination of a bar and line graph, which includes the Lorenz curve, depicting the increase (of the cumulative value) of the individual cause shares in the whole. With the acceptance of the Pareto principle (80/20), the analysis shows that approximately 80% of all occurrences of failure to implement post project phases are covered by the following 12 reasons: reasons No. 7, 13, 12, 8, 3, 11, 14, 20, 4, 6, 5 and 19 (for verbal cause descriptions see Table 2). These 12 reasons constitute approximately 57% of the total 21 reasons. With regard to the frequency values obtained from the respondents in the empirical research, the two reasons with the highest frequency were selected for root cause identification, that is, reason No. 7 and reason No. 13 (see Sections 4.1 and 4.2). Root Causes Analysis - Reason No. 7 Non-Implementation of Post Project Phases For identification of the root causes of reason No. 7, "People don't know how to perform it practically so they prefer not to do it," the Ishikawa diagram technique was used first (see Figure 2). The chart was compiled by the research team of Brno University of Technology as a result of the respondents' answers. Initially, four main causes (Methods, Knowledge, Project Team, Organization) of the non-implemented post project phase and lower-level causes (e.g., bad education organization) were identified for reason No. 7. In the Ishikawa diagram, the causes at each level are equally important.
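To make the 80/20 cut concrete, here is a minimal sketch of the Pareto computation: rank the causes by frequency, accumulate their shares, and keep the causes that together cover roughly 80% of all occurrences. The frequency values below are made-up placeholders, not the actual respondent counts behind Table 2.

```python
# A minimal Pareto analysis sketch. Frequencies are illustrative placeholders.
frequencies = {
    7: 38, 13: 31, 12: 24, 8: 21, 3: 18, 11: 15, 14: 13, 20: 11,
    4: 10, 6: 9, 5: 8, 19: 7, 17: 6, 9: 5, 10: 4, 2: 4, 15: 3,
    16: 3, 18: 2, 21: 2, 1: 1,
}

def pareto_cut(freqs: dict, threshold: float = 0.8) -> list:
    """Return cause IDs that together account for `threshold` of the total."""
    total = sum(freqs.values())
    ranked = sorted(freqs.items(), key=lambda kv: kv[1], reverse=True)
    selected, cumulative = [], 0.0
    for cause, count in ranked:
        if cumulative >= threshold:
            break
        selected.append(cause)
        cumulative += count / total  # Lorenz-curve style accumulation
    return selected

print(pareto_cut(frequencies))  # the causes forming ~80% of the cumulative share
```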
With the use of the Ishikawa diagram, a total of 15 causes (major and minor) were identified and graphically depicted. The Ishikawa diagram was then evaluated for the identification of cause priorities. The analytical team members evaluated the identified causes by the technique of pure final point number allocation (a total of 6 points). The results imply that the most probable causes are "Bad training organization" and "Non-existent guidelines." Root Causes Analysis - Reason No. 13 Non-Implementation of Post Project Phases For identification of the root causes of reason No. 13, "Because its need and existence are essentially denied or ignored (this belongs to 'quality,' not to 'projects')," the cause tree method was used as suitable, see Figure 3. The cause tree method was used to demonstrate the possibility of using another appropriate method for root cause analysis. In specific cases, the team can choose the method that is principally closer to it or that it considers more appropriate for the case. Discussion It should be emphasized that the procedure and results presented in Section 4 (Results) are not based on "hard statistics" but rather constitute "soft processes," which is more appropriate in this case. On the other hand, the results thus obtained can be used as general recommendations for most sustainability projects. Table 2 and Figure 1 show the suitability and effectiveness of Pareto Analysis. They clearly show the two reasons for failure to implement post project phases that are most critical and should get primary attention (reason No. 7 and reason No. 13). Figure 2 and Tables 3 and 4 show how the root causes of reason No. 7, "People don't know how to perform it practically so they prefer not to do it," may be gradually analysed and identified using the Ishikawa diagram and the "5 Whys" method. The analysis of the root causes of reason No. 7 shows that the key cause is related mostly to knowledge and methodology, although the post project phase implementation methodology is described, for example, in documents related to the description of process improvement principles in the set of ISO 9000 international quality standards.
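As an illustration of how such cause chains can be captured in software, the sketch below models a cause tree with AND/OR branching, in the spirit of Figure 3. The node labels are illustrative paraphrases, not the study's exact wording.

```python
# A minimal sketch of a cause tree with AND/OR operators, as used for
# reason No. 13. Node texts are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class CauseNode:
    text: str
    op: str = "OR"                       # how children combine: "AND" or "OR"
    children: list = field(default_factory=list)

    def root_causes(self):
        """Collect leaf nodes; leaves are the candidate root causes."""
        if not self.children:
            return [self.text]
        leaves = []
        for child in self.children:
            leaves.extend(child.root_causes())
        return leaves

tree = CauseNode(
    "Post project phases are denied or ignored",
    op="OR",
    children=[
        CauseNode("Seen as a 'quality' task, not a 'project' task"),
        CauseNode(
            "Benefits of post project phases not understood",
            op="AND",
            children=[
                CauseNode("No company training on post project phases"),
                CauseNode("Continuous improvement idea is ignored"),
            ],
        ),
    ],
)

print(tree.root_causes())  # the leaf causes to target with measures
```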
Even the IPMA ICB (version 4.0 from 2015) [25] and PMI PMBOK [33] documents used for project managers' knowledge certification do not explicitly require the applicants to be familiar with post project phases and aware of post project phase techniques. This is due to the sad fact that certificate applicants are not tested for knowledge of these processes. Yet process improvement (including project processes) is required by the ISO 9001 quality standard. For this reason, the authors recommend that testing of this knowledge be included in the certification process. Specialized project management periodicals usually emphasize the need for courses in certification preparation. However, they fail to emphasize the need for systematic company training of project team members and line managers in project management [34]. This, combined with often incorrectly set or missing knowledge management processes, causes the direct project participants to lack the necessary knowledge concerning project management, including the knowledge related to post project phases and the recommended methods and techniques. Figure 4 shows that an Ishikawa diagram can be drafted for a subset of measures minimizing the cause impact on a certain reason for identified non-quality. The diagram provides an overall presentation of the proposed recommendations that need to be applied in order to minimize the causes of reason No. 7 for failure to implement post project phases. In the context of Figure 2, the identified causes and recommendations are symmetrically paired (cause → recommendation). This will be appreciated mostly in the implementation phase and in the retrospective check evaluating the effectiveness of the proposed recommendations. In the process of searching for the root cause of reason No. 13, "Because its need and existence are essentially denied or ignored (this belongs to 'quality,' not to 'projects')," the cause tree diagram with AND/OR branch operators was used (Figure 3). In comparison to the previous case, there is a total of 8 causes. On the basis of this research and the causes it identified and analysed, the authors recommend that the workers of project offices, or project managers and top managers, respectively: 1. Focus on the analysis of project management training and verify the direct connection between training, project success and the systematic, quality training of direct project participants. 2. Focus on the quality of the corporate guideline and instruction compilation process, since it may considerably increase project success. 3. Focus on corporate culture in order to promote the 6 reasons (see Figure 3) leading to a good understanding of post project phase benefits, so as to achieve continuous company improvement. The uniting feature of these recommendations is a system approach to knowledge [35] together with the use of current information technology [36].
13 "Because its need and existence are essentially denied or ignored -This belongs to "quality," not to "projects," the Cause tree diagram with operators AND/OR branches was used (Figure 3).In comparison to the previous case, there is a total of 8 cause. On the basis of this research and the fact that it has been identified and analysed, the authors recommend the workers of project offices, respectively project managers and top managers: Conclusions The concept of Knowledge Management often evokes complicated procedures bordering on Artificial Intelligence and simple yet effective methods revealing the causes of project failure are neglected (e.g., those presented and applied in this paper). In this paper, the authors focused on searching for causes of failure to implement post project phases.However, that is not enough.Next, it is necessary that all of companies and institutions (not only executing or participating in sustainability projects) draft proposals to eliminate or minimize the identified negative causes. The paper contributes to the support of system approach to project management.System approach in project proposal and execution is often neglected.The main contribution of the paper is the overview of techniques that may be recommended for post project analysis.These techniques are demonstrated in detail on particular examples of the analysis of the most common reasons for failure to implement post project phases.The described examples demonstrate the procedure to be followed in order to identify the root cause of the analysed phenomenon.At the same time, the paper also describes proposals of recommended measures that should minimize the root causes resulting in negative outcomes.The paper explicitly emphasizes and shows the connection between knowledge management and post project phase effectiveness. The authors are preparing research to find and present general rules and principles leading to the elimination of the remaining established causes in order to improve post project stage implementation and project management quality, including sustainability projects. Improvement of sustainability projects would also be very beneficial if there was a portal containing the results of sustainability project analysis, used methods and recommended procedures leading to elimination or minimization of negative causes.The authors propose next to be initiated an international researches focused on the quality of sustainability projects both in terms of content and project management by sustainability project stakeholders.The researches mentioned in the Section 2 should be organized for the sustainability projects too.These researches could contribute both to the identification of the current state and as an inspiration for improvement. By the way, the paper primarily points out the most important causes of failure to implement post project phases but at the same time it can serve as an example of how to analyse results obtained from completed projects. Figure 1 . Figure 1.Pareto Chart of non-implementation of post project phases (Source: processed by the authors). Figure 1 . Figure 1.Pareto Chart of non-implementation of post project phases (Source: processed by the authors). Sustainability 2019 , 16 Figure 3 . Figure 3. Root Causes Analysis Tree Diagram -Reason No. 13 non-implementation of post project phase (Source: processed by the authors). 
Figure 4. Ishikawa diagram - Recommendations to reduce reason No. 7 non-implementation of post project phase (Source: processed by the authors). Table 1. Classification of the 21 reasons why companies fail to process post project phases using the 4P method (Source: processed by the authors). Table 2. The results of Pareto analysis (Source: processed by the authors).
7,900.2
2019-03-19T00:00:00.000
[ "Computer Science" ]
A Hierarchical Coordinative Control Strategy for Solid State Transformer Based DC Microgrids A solid state transformer (SST), as a kind of energy router in the Energy Internet, provides a unified access point for AC or DC distributed power subjects. However, the DC-link capacitors inside the SST will suffer large voltage fluctuations when the output power of the microgrid changes dramatically. With respect to this problem, caused by the random and intermittent characteristics of distributed generation (DG), a hierarchical coordinative control strategy is proposed. Compared with the common independent control, the proposed method not only makes full use of the regulation capacity of super capacitors, but also enhances the dynamic power tracking speed and reduces the speed difference between the different stages of an SST. The dynamic voltage response under the proposed method is analyzed in the frequency domain and compared with the independent control. To validate the effectiveness of the coordinative control strategy, a simulation model of an SST-based grid-connected DC microgrid system is established, and the topology of the SST is improved. The voltage stability of the DC bus is compared under different control strategies, and the coordinative control strategy is also verified to be effective under transition conditions. Introduction The microgrid, as an economical and efficient form of electric energy organization, has attracted worldwide attention [1,2]. A solid state transformer (SST) can serve as a kind of energy router in microgrids [3], which not only coordinates the energy transmission between the utility grid and the microgrid, but also controls the interaction between the AC and DC microgrids. Compared with the traditional transformer, the multi-stage SST provides a direct interface to the DC microgrid, and the energy density can be improved through a high frequency transformer (HFT). The SST is expected to be a crucial piece of equipment for the utilization of distributed generations (DGs) [4] and the construction of the Energy Internet [5,6]. The multi-stage SST commonly contains three stages: the input stage, the isolation stage, and the output stage, as shown in Figure 1. The input stage mainly adopts a modular multilevel converter (MMC) or a cascaded H-bridge (CHB) [7]. A dual active bridge (DAB) converter is often employed as the isolation stage. At present, the main factors affecting the practical application of SSTs are low energy transmission efficiency and high cost. The efficiency of an SST can reach 93.72% with a CHB-type input stage [8], which is relatively low for practical application. However, the required number of power electronic switches and high-frequency transformers can be reduced by using an MMC as the input stage, which can significantly reduce the switching losses of the power transistors and the cost of the SST [9] as well. Under the voltage level of the distribution network, the power loss of the SST isolation stage is usually much higher than that of the input stage. Before the extensive commercial use of power devices such as SiC and GaN, CLLC resonant converters can effectively mitigate the inherent inefficiency of the DAB converter; their conversion efficiency exceeds 96% [10]. In [11], a 5 kV CLLC resonant converter prototype was built and its efficiency reached 97%. In this paper, the efficiency of the CLLC resonant converter can reach beyond 98% at the rated power.
Due to the randomness and intermittence of DGs, the power flow direction of an SST will change frequently, and the voltage across the port of each stage will suffer fluctuation, flicker, sag, and interruption [12], which brings serious damage to electric motors, precisely operated power electronic instruments and other power equipment [13,14].
Integration of DGs requires a comprehensive control of the SST. As for the multi-stage SST, traditional control at each stage is independent: the input stage is responsible for the voltage stability of the high-voltage DC (HVDC) bus, and the isolation stage is responsible for the voltage stability of the low-voltage DC (LVDC) bus [15][16][17][18]. In [15], an all-SiC device-based multistage SST was established, where the input stage of the SST adopted a front-end converter (FEC) and the isolation stage used a DAB. The HVDC bus voltage and LVDC bus voltage were set as the control targets of the FEC and DAB, respectively. The control strategies of the FEC and DAB do not interact with each other, so collaborative operation between them is lacking. A global system control structure is described in [16], which can handle the different power flow directions and DC voltage unbalance using battery energy storage. The global control strategy is based on the independent control strategy, and DC voltage fluctuation can be suppressed with the assistance of the battery contribution, but it may increase the investment. In [17], a SiC-based multiple active bridge was chosen as the isolation stage of the SST to reduce the number of power devices, and a first-order-approximation-based mathematical model was analyzed, which helped to simplify the voltage balance controller design. The balance control of submodule DC voltage was also researched in [18] to solve the problem of unbalanced submodule voltage after using multilevel converter modules. The addressed control strategy focused on the input stage of the SST without collaboration among the different stages. However, such an independent control strategy ignores the inherent differences in the power response speed between the different stages, which is prone to cause a power difference on both sides of the DC-link capacitors and exacerbate the DC voltage fluctuation. Many efforts have been made to solve this problem. A coordinated control strategy that combines the isolation stage and the output stage is put forward in [19] to reduce the voltage fluctuations of the LVDC bus. However, it focuses on the dual active bridge cascaded with an inverter (DABCI), which does not consider the input stage of a complete SST. The coordinated control of the three stages of the SST should be further studied. Based on a simple SST model, an energy feed-forward scheme for the isolation stage and a direct power feed-forward scheme for the rectifier are proposed in [20], which improve the dynamic performance of the DC-link voltage during load fluctuation. A data-driven coordinated controller architecture for a multiple active bridge-based SST is proposed in [21]. This controller optimizes the target value by collecting and analyzing the global information of the SST, and improves the MVDC-link and LVDC-link voltage quality of the SST under grid-side abnormal conditions. In this paper, a three-stage SST that adopts an MMC as the input stage and a CLLC resonant converter as the isolation stage is modeled, and a hierarchical coordinative control strategy is proposed. The control strategy aims to improve the DC port voltage quality of the SST when the microgrid output is fluctuating. Compared with other control strategies, a coordinated control strategy between DGs and local energy storage equipment is designed by utilizing super capacitors, which can quickly compensate the power fluctuation of DGs.
In addition, the control loop proposed in this paper can eliminate the current imbalance at both ends of the buffer capacitor caused by the power response speed difference between the MMC and the CLLC. This control strategy is not only applicable to power fluctuations in microgrids, but is also effective under other kinds of transient conditions, such as microgrid isolation during a fault and the islanding process. The rest of this paper is organized as follows: in Section 2, an overview of an SST-based microgrid is presented, and the mathematical model and the problems existing in the independent control strategy are introduced. In Section 3, a hierarchical coordinative control is proposed to improve the DC bus voltage stability, and a detailed frequency-domain analysis, compared with the independent control, is discussed. Finally, simulation results under different conditions are demonstrated in Section 4, followed by the conclusions in Section 5. Mathematical Model of the SST The topology of the solid state transformer used in this paper is shown in Figure 2. Compared with the traditional topology, a three-phase MMC is selected to replace the CHB as the input stage, and a CLLC resonant converter is used to replace the DAB converter as the isolation stage. As mentioned above, an MMC can reduce the number of power switching devices and high-frequency transformers. Under the comparative conditions in the literature [22], an MMC can reduce the number of switching devices by 20% and the number of high-frequency transformers by 50% in contrast to a CHB. The CLLC resonant converter is a new type of structure, which has not been fully covered in the literature on SSTs. Ea, Eb, and Ec are the voltages of the three-phase distribution network; ia, ib, and ic are the currents of the distribution network; Ls and La represent the smoothing inductor on the grid side and the arm inductor of the MMC, respectively; Ua, Ub, and Uc are the AC output voltages of the MMC; CH and CL are the series HVDC-link capacitor and the parallel CLLC output capacitor, respectively; Cs represents the MMC submodule capacitor; N denotes the number of arm submodules of the MMC, which equals the number of CLLC submodules; io and iL represent the output-stage load current and inductor current, respectively; and Lf and Cf represent the LC filter parameters. The topology of the SST-based DC microgrid is shown in Figure 3. In this system, the SST functions as an energy center to balance the power flow among the regional distribution network, the DC microgrid, and the household LVAC load. The voltage level of the distribution network is 10 kV. The DC microgrid consists of a photovoltaic (PV) power supply, an energy storage system (ESS), and DC loads. It is connected to the distribution grid through the 700 V LVDC port of the SST.
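To illustrate how an ESS with both batteries and super capacitors can share the compensation duty, the sketch below splits a PV power mismatch with a first-order low-pass filter, sending the slow component to the battery and the fast residual to the super capacitor. This filter-based split is a common technique, not necessarily the allocation used in this paper; the time constant and waveforms are illustrative assumptions.

```python
# A minimal sketch of frequency-based power sharing inside the ESS:
# slow mismatch -> battery, fast residual -> super capacitor.
# Time constant and waveforms are illustrative assumptions only.
import math

DT = 1e-3          # simulation step (s)
TAU = 0.5          # low-pass time constant (s) separating slow/fast power

def split_power(p_mismatch, p_slow_prev):
    """One step of a discrete first-order low-pass power split."""
    alpha = DT / (TAU + DT)
    p_slow = p_slow_prev + alpha * (p_mismatch - p_slow_prev)  # battery share
    p_fast = p_mismatch - p_slow                               # super capacitor share
    return p_slow, p_fast

p_slow = 0.0
for k in range(5000):
    t = k * DT
    # hypothetical PV fluctuation around its average output (in kW)
    p_mismatch = 2.0 * math.sin(2 * math.pi * 5 * t) + 0.5
    p_batt, p_sc = split_power(p_mismatch, p_slow)
    p_slow = p_batt

print(f"battery: {p_batt:.3f} kW, super capacitor: {p_sc:.3f} kW")
```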
In synchronous rotating coordinates (d-q coordinates), the dynamic equations of the MMC can be described by Equation (1), where Ed and Eq are the AC voltages of the distribution grid under d-q coordinates; id and iq are the AC currents; Ud and Uq are the line-frequency output voltages of the MMC under d-q coordinates; ω is the fundamental frequency; Udc is the HVDC bus voltage; CHe is the equivalent shunt capacitor of the HVDC link; i1i is the input current of CLLC submodule i; and i1 is the sum of the N submodule input currents of the CLLC. The topology of a single submodule of the CLLC resonant converter is shown in Figure 4, where idc is the input current of the HVDC link; Udci and Uout are the input and output DC voltages of the submodule; i1i and i2i are the input and output currents; iLi is the outlet current of submodule i, and iL is the outlet current of the overall input-series-output-parallel (ISOP) CLLC resonant converter; Cr1, Lr1 and Cr2, Lr2 are the resonant capacitors and inductors on the primary and secondary sides; Lm is the magnetizing inductor, which can be integrated inside the HFT; n is the transformation ratio of the HFT; NCL represents the equivalent parallel capacitance of the LVDC link; and iinv and imc are the input currents of the output stage and the local DC microgrid.
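Based on the symbol definitions above, a plausible standard form of the d-q MMC dynamics referenced as Equation (1) is the following hedged reconstruction; the equivalent inductance Leq = Ls + La/2 and the sign conventions are assumptions, not necessarily the authors' exact formulation:

```latex
% A hedged reconstruction of Equation (1): standard d-q grid-side dynamics
% plus the HVDC-link current balance. L_eq = L_s + L_a/2 is an assumption
% about how the arm inductance enters the grid-side loop.
\begin{equation}
\begin{cases}
L_{eq}\dfrac{\mathrm{d}i_d}{\mathrm{d}t} = E_d - U_d + \omega L_{eq}\, i_q \\[4pt]
L_{eq}\dfrac{\mathrm{d}i_q}{\mathrm{d}t} = E_q - U_q - \omega L_{eq}\, i_d \\[4pt]
C_{He}\dfrac{\mathrm{d}U_{dc}}{\mathrm{d}t} = i_{dc} - i_1, \qquad i_1 = \sum_{i=1}^{N} i_{1i}
\end{cases}
\end{equation}
```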
Under single-phase-shift control, the transmission power of the CLLC resonant converter is given in Equation (2), where P_CLLCi is the transmission power of submodule i; φ_i is the phase-shift angle between the two H-bridges; and |Z_eqi| and θ_i are the magnitude and phase of the equivalent impedance Z_eqi, which can be calculated from Equation (3), where Z_1, Z_2, Z_m, and Z_loadi represent the primary-side LC resonant impedance, the secondary-side resonant impedance, the excitation impedance of the HFT, and the load impedance (all referred to the primary side). From Equations (2) and (3), the input and output currents of the isolation stage can be calculated as in Equation (4), and the dynamic equation of the output voltage can be expressed as in Equation (5), where i_2 is the sum of the output currents of the N CLLC submodules. Without loss of generality, voltage and power balance control between the CLLC submodules is not considered in this paper; that is, the operating variables of the N submodules are assumed to be identical, so the ISOP CLLC resonant converter can be treated as equivalent to a single submodule.

Problems Existing in the Independent Control Strategy

In the three-stage independent control, the input-stage MMC adopts voltage and current double-loop control to maintain the HVDC bus voltage and adjust the power factor on the distribution grid side, while the CLLC adopts single-phase-shift control to regulate the parallel output voltage of the isolation stage. The control diagram is shown in Figure 5. In this figure, Q is the reactive power of the distribution network; G_udc, G_id, G_q, and G_iq represent the double-loop controller functions of the MMC; and G_uo is the controller function for the CLLC, which adopts single voltage-loop control, so no high-precision current sensor is needed. The part inside the dotted box represents the relation between φ_i and i_2i, given by the second line of Equation (4). G_ps is clearly a nonlinear function; since the CLLC usually works near the resonant frequency point, θ_i ≈ 0, and a linear approximation of G_ps is therefore carried out in this paper. The approximate expression after linearization is given in Equation (6) (valid for 0 < φ < π).
As mentioned previously, the topology of the grid-connected DC microgrid system is shown in Figure 3. A boost chopper circuit is usually adopted as the interfacing converter for the PV supply to realize maximum power point tracking (MPPT) control [23,24]; the topology and control diagram are shown in Figure 6a,b. For the energy storage system, which includes batteries and super capacitors, the interface converter must allow bidirectional power transmission, so a current-reversible chopper circuit is usually used. Common control methods include constant DC voltage control and voltage droop control; the topology and control diagram are shown in Figure 6c,d.

In normal operation, the SST acts as the balance node of the system to provide voltage and frequency support, the DGs output at maximum power, and the energy storage equipment responds to the control-center command. In this case, fast and continuous fluctuations of the distributed power output are compensated in real time by the SST. It can be seen from Figure 5 that the output current of the microgrid is a disturbance to the CLLC control loop: a change of i_Li leads to fluctuations of i_2i and U_out. The current of the CLLC port is in turn a disturbance to the MMC active power control loop: a change of i_2i further leads to a fluctuation of U_dc. In addition, the difference in power response speed between the MMC and the CLLC resonant converter aggravates the situation. From Figure 5, the transfer function of the CLLC output DC current i_2i to its load current i_Li can be obtained.
Since the CLLC is equivalent to a single submodule, the transfer function of i_2 to i_L is the same as that of i_2i to i_Li, as given in Equation (7). Similarly, the transfer function of the MMC output DC current i_dc to the CLLC input current i_1 can be expressed as in Equation (8), where G_idd is the transfer function of the grid-connected current in the d-axis, given in Equation (9). According to the control parameters listed in Table 1 and the system hardware simulation parameters listed in Table 2 in Section 4, the current response Bode diagrams of the HVDC and LVDC links can be drawn, as shown in Figure 7. In Table 1, the proportional and integral parameters of the PI controllers are distinguished by the subscripts P and I; for example, K_uo = K_Puo + K_Iuo. Figure 7 shows that the closed-loop bandwidth of i_2 to i_L is 2930.81 rad/s, much larger than the 547.19 rad/s of i_dc to i_1. This indicates that the power adjustment speed of the CLLC is much faster than that of the MMC. Therefore, when the output power of the microgrid fluctuates dramatically, such as when switching from absorbing electricity to generating electricity to the distribution network, the power response of the CLLC is much faster than that of the MMC.
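The bandwidth comparison of Figure 7 is easy to reproduce in outline. The Python sketch below stands in for Equations (7) and (8) with simple first-order closed-loop models whose corner frequencies are set to the bandwidths quoted above; the paper's actual transfer functions are higher order, so this only illustrates how the -3 dB points of the two links are read off a Bode plot.

```python
import numpy as np
from scipy import signal

# First-order stand-ins for the closed-loop current responses of the two
# DC links. The corner frequencies are the bandwidths reported in the text
# (assumed values here; the exact models are Equations (7)-(9)).
w_cllc = 2930.81   # rad/s, i2 -> iL response of the CLLC (LVDC link)
w_mmc = 547.19     # rad/s, idc -> i1 response of the MMC (HVDC link)

models = {
    "CLLC i2/iL": signal.TransferFunction([w_cllc], [1.0, w_cllc]),
    "MMC idc/i1": signal.TransferFunction([w_mmc], [1.0, w_mmc]),
}

w = np.logspace(1, 5, 2000)
for name, G in models.items():
    w_out, mag, _ = signal.bode(G, w)
    bw = w_out[np.argmin(np.abs(mag + 3.0))]   # frequency of the -3 dB point
    print(f"{name}: closed-loop bandwidth ~ {bw:.0f} rad/s")
```

With the CLLC loop more than five times wider than the MMC loop, a load step is absorbed by the CLLC well before the MMC reacts, which is exactly the imbalance analyzed next.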
According to the dynamic Equation (1), the voltage fluctuation of the HVDC link capacitor depends on the difference between the currents on its two sides; that is, the capacitor voltage is determined by the difference of the current integrals:

U_dc(t) = U_dc(0) + (1/C_He) ∫ (i_dc − i_1) dt.

The Hierarchical Coordinative Control for the SST-Based DC Microgrid System

To address the problems of the independent control, a hierarchical coordinative control strategy for the SST and microgrid composite system is proposed. In general, the hierarchical coordinative control mitigates the external power fluctuation of the local microgrid and realizes a joint action of the MMC and the CLLC during the dynamic process. The comparison of the control logic of independent control and coordinative control is shown in Figure 8.

The overall coordinative control diagram is shown in Figure 9. This control strategy contains two layers: the microgrid layer and the SST layer. In Figure 9a, G_UES and G_IES represent the ESS double-loop PI controllers; G_UPV and G_IPV are the control functions of the PV controller; K_PWMES and K_PWMPV are the equivalent gains of the pulse width modulation (PWM) links for the ESS and the PV, respectively (K_PWMES ≈ U_dcES, K_PWMPV ≈ U_dcPV); and d_ES and d_PV are the equivalent duty cycles. The meaning of the remaining parameters in Figure 9 has been given in Figure 6.

In the microgrid layer control, an additional control branch feeds the variation of the PV outlet current back to the inner current loop of the super capacitors inside the ESS, giving full play to the fast regulation capability of the super capacitors.
Although mechanical and battery energy storage systems are still the key elements in system operation, super capacitors are drawing increasing interest [25] owing to their high efficiency and high power density [26]. This paper adopts a hybrid energy storage device combining super capacitors and batteries, as discussed in [27][28][29]. In the additional control branch, k_mc is the coordinative control coefficient, which reflects the degree to which the ESS compensates PV output fluctuations; G_mc is the coordinative transfer function; and the fluctuation inspection section represents the instruction-receiving and triggering module.

In the SST layer control, control branch 1 feeds the load current of the CLLC forward to the inner current loop as a new control variable, improving the dynamic current response speed of the CLLC stage. Additional control branch 2 feeds the output of the CLLC voltage controller back to the inner loop of the MMC, bringing the two parts of the SST together to complete the task of LVDC bus voltage control; it also feeds back the CLLC load current, eliminating the influence of the CLLC current fluctuation on the HVDC link at the same time. Analogously to the microgrid layer control, k_1 and k_2 are additional control coefficients, and G_fb and G_cc are the corresponding transfer functions, whose expressions are given in Equations (10) and (11).

Comparative Analysis of the Microgrid Power Fluctuation Response of the DC Link

From Figures 4, 5 and 9b, the voltage response functions of the SST DC links to the CLLC load current i_L under independent control are given in Equation (12), where L_0 = L_s + L_a/2 is the equivalent inductance on the AC side of the MMC. Since independent control applies only to the three-stage SST converters, the proposed microgrid layer control is not considered in this comparative analysis. From Figure 9b, the corresponding disturbance response functions under the SST-layer coordinative control can be expressed as in Equation (13).

As can be seen from Equations (12) and (13), the additional control branches change the numerators of the disturbance response functions. If the control parameter k_1 is close to 1, the dynamic voltage fluctuation of the HVDC and LVDC buses caused by microgrid output power swings can, in theory, be completely eliminated. The coefficient k_2 aims to close the gap in dynamic power tracking speed between the MMC and the CLLC, so as to realize joint regulation of these two stages in the dynamic process. Although this effect is not directly visible in the above formulas, it can be seen from the disturbance transfer function of the HVDC bus voltage to the CLLC input current i_1 before and after the introduction of the k_2 branch, as shown in Equation (14). The introduction of k_2 improves the tracking ability of the MMC with respect to the current i_1, and the whole coordinative control branch 2 needs no additional sensors.

The system simulation parameters are shown in Table 2. According to Equations (12) and (13), the Bode diagrams of the disturbance response functions can be obtained, as shown in Figure 10. The figure shows that the proposed coordinative control strategy significantly lowers the low-frequency amplitude gain, indicating better suppression of DC bus voltage fluctuation during the same dynamic process.
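To make the role of k_1 concrete, the toy simulation below applies the feedforward idea to a single LVDC bus capacitor fed through a lagging current source. All numbers (capacitance, PI gains, lag time constant, load step) are illustrative assumptions, not the paper's parameters; the point is only that with k_1 = 1 the measured load current enters the current command directly, so the voltage loop no longer has to absorb the disturbance.

```python
import numpy as np

# Minimal discrete-time sketch of the k1 feedforward idea: the CLLC output
# current i2 tracks a PI voltage-loop command plus k1 times the measured
# load current iL. All values are illustrative assumptions.
dt, T = 1e-4, 0.2
C_L = 10e-3          # assumed LVDC capacitance [F]
U_ref = 700.0        # LVDC reference voltage [V]
kp, ki = 5.0, 200.0  # assumed PI gains
tau_cllc = 2e-3      # assumed first-order power-tracking lag of the CLLC [s]

for k1 in (0.0, 1.0):            # 0 -> no feedforward, 1 -> full feedforward
    U, i2, integ = U_ref, 0.0, 0.0
    worst = 0.0
    for n in range(int(T / dt)):
        t = n * dt
        iL = 100.0 if t < 0.1 else -100.0   # step load reversal at t = 0.1 s
        integ += (U_ref - U) * dt
        i2_cmd = kp * (U_ref - U) + ki * integ + k1 * iL
        i2 += dt / tau_cllc * (i2_cmd - i2)  # CLLC power-tracking lag
        U += dt / C_L * (i2 - iL)            # LVDC bus capacitor dynamics
        worst = max(worst, abs(U - U_ref))
    print(f"k1 = {k1:.0f}: worst LVDC deviation = {worst:.2f} V")
```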
The Bode diagram of Equation (14) is similar to that of Equation (12), since the coefficient k_2 acts on the numerator, so it is not repeated here.

Simulation Results

The model of the SST-based DC microgrid system shown in Figure 3 is built in PSCAD/EMTDC to verify the proposed coordinative control. PSCAD/EMTDC is a professional electromagnetic transient simulation package that is also well suited to power electronics system simulation. The simulation parameters are listed in Table 2. The rated capacity of the SST is 1.5 MVA, and the distribution grid runs at unity power factor. In steady-state operation, the output stage of the SST carries a 1 MW three-phase unbalanced load. The corresponding working waveform of the CLLC is shown in Figure 11: the voltage zero-crossing of the primary-side switch leads the turn-on signal, and the current zero-crossing of the secondary-side switch leads the turn-off signal, thereby reducing switching losses. Figure 12 shows the efficiency curve of the CLLC resonant converter under different load conditions. When power is transmitted bidirectionally, the half-load efficiency reaches above 97%, and the efficiency exceeds 98% at the rated power of 1.5 MW.

At 1.5 s, the DC microgrid is connected to the system, with a PV output of 0.3 MW and a DC load of 0.2 MW. Starting from 2.7 s, the output power of the PV supply in the microgrid fluctuates continuously over a wide range, following the pattern 0.3 MW-2 MW-0.3 MW-2 MW. In this process, the power flow direction of the SST changes frequently under independent control.
Figure 13 shows the response of the LVDC and HVDC bus voltages. The comparison shows that the proposed hierarchical coordinative control strategy quickly stabilizes the bus voltages and greatly improves power quality during the dynamic process. Under the traditional control strategy, the maximum transient voltage deviations of the LVDC and HVDC links are about 47 V and 580 V, respectively, whereas under the proposed coordinative control strategy the DC bus voltages show almost no fluctuation.

The microgrid layer control utilizes the fast power response of the super capacitors to smooth the power fluctuation of the DGs. The current waveform of the SST during the power fluctuation is shown in Figure 14. The compensated DG output current no longer causes frequent reversals of the SST power transmission direction, and the SST current changes relatively smoothly.

It is worth noting that the proposed coordinative control strategy can also be applied to mitigate other dynamic voltage fluctuation conditions. For example, when the system detects a fault in the DC microgrid and trips it off at 2.7 s, the corresponding voltage response is shown in Figure 15. Before removal, the output of the PV power supply is 1 MW.
The figure shows that the SST layer coordinative control rapidly compensates for the power shortage and restores the DC link voltages to their reference values more quickly.

When the control strategy shown in Figure 9a is slightly modified so that the super capacitors dynamically compensate the input current of the output-stage inverter (i_inv in Figure 4) during the island switching process, the coordinative control can also smooth the islanding transition. The voltage waveform of the LVDC bus under this condition is shown in Figure 16. Although only the microgrid layer coordinative control is active in this case, the bus voltage fluctuation is still suppressed to a considerable extent: under the microgrid layer coordinative control, the maximum voltage sag is about 37 V, whereas under independent control it is about 90 V and the voltage takes longer to recover.

The strategies for improving SST power quality in previous studies mostly consider power-side disturbances [30] and load fluctuations [31]. However, power-side disturbances and load fluctuations do not act directly on the DC buses of the SST, and they can always be isolated by adjacent inverters; in both cases, independent control strategies can maintain good power quality. To date, there has been little research on the power quality of SST DC ports under the output fluctuation of a microgrid.
Conclusions

In this paper, the topology of the SST is improved, and a hierarchical coordinative control strategy for an SST-integrated DC microgrid system is proposed to solve the dynamic voltage quality problem of SST DC ports caused by the random and intermittent characteristics of distributed generators. From the mathematical analysis and simulations, the following conclusions can be drawn: (1) At present, the main factors limiting the practical application of SSTs are high cost and low energy transmission efficiency. Adopting an input-stage MMC instead of a cascaded H-bridge significantly reduces the required number of power electronic switches and high-frequency transformers in the SST, thus reducing system cost. As for the isolation stage, which accounts for most of the SST power losses, the efficiency of the CLLC resonant converter adopted in this paper reaches above 98% at rated power, a large improvement over the DAB converter, whose efficiency is only about 90%. (2) Under independent control, the power response of the SST is slow, and the output fluctuation of the microgrid cannot be compensated quickly enough, causing serious dynamic voltage stability problems for the DC link voltages. The hierarchical coordinative control strategy proposed in this paper not only improves the external power characteristics of the DC microgrid, but also realizes a coordinated action between the input stage and the isolation stage of the SST, which greatly suppresses the voltage fluctuations of the HVDC and LVDC buses. Conflicts of Interest: The authors declare that there are no conflicts of interest.
10,864
2020-09-29T00:00:00.000
[ "Engineering", "Environmental Science" ]
Bregman Three-Operator Splitting Methods

The paper presents primal–dual proximal splitting methods for convex optimization, in which generalized Bregman distances are used to define the primal and dual proximal update steps. The methods extend the primal and dual Condat–Vũ algorithms and the primal–dual three-operator (PD3O) algorithm. The Bregman extensions of the Condat–Vũ algorithms are derived from the Bregman proximal point method applied to a monotone inclusion problem. Based on this interpretation, a unified framework for the convergence analysis of the two methods is presented. We also introduce a line search procedure for stepsize selection in the Bregman dual Condat–Vũ algorithm applied to equality-constrained problems. Finally, we propose a Bregman extension of PD3O and analyze its convergence.

Introduction

We discuss proximal splitting methods for optimization problems of the form

minimize f(x) + g(Ax) + h(x),    (1)

where f, g, and h are convex functions and h is differentiable. This general problem covers a wide variety of applications in machine learning, signal and image processing, operations research, control, and other fields [11,19,31,40]. In this paper, we consider proximal splitting methods based on Bregman distances for solving (1) and some interesting special cases of (1). Recently, several primal-dual first-order methods have been proposed for the three-term problem (1): the Condat-Vũ algorithm [20,50,53], the primal-dual three-operator (PD3O) algorithm [51], and the primal-dual Davis-Yin (PDDY) algorithm [44]. Algorithms for some special cases of (1) are also of interest. These include the Chambolle-Pock algorithm, also known as the primal-dual hybrid gradient (PDHG) method [10,12] (when h = 0), the Loris-Verhoeven algorithm [15,23,34] (when f = 0), the proximal gradient algorithm (when g = 0), and the Davis-Yin splitting algorithm [21] (when A = I). All these methods handle the nonsmooth functions f and g via the standard Euclidean proximal operator.

To further improve the efficiency of proximal algorithms, proximal operators based on generalized Bregman distances have been proposed and incorporated in many methods [2,3,6,14,24,27,35,46,48]. Bregman distances offer two potential benefits. First, a Bregman distance can help build a more accurate local optimization model around the current iterate. This is often interpreted as a form of preconditioning. For example, diagonal or quadratic preconditioning [29,33,41] has been shown to improve the practical convergence of PDHG, as well as the accuracy of the computed solution [1]. Second, a Bregman proximal operator of a function may be easier to compute than the standard Euclidean proximal operator, and can therefore reduce the per-iteration complexity of an optimization algorithm. Recent applications of this kind include optimal transport problems [16], optimization over nonnegative trigonometric polynomials [13], and sparse semidefinite programming [30].
Extending standard proximal methods and their convergence analysis to Bregman distances is not straightforward, because some fundamental properties of the Euclidean proximal operator no longer hold for Bregman proximal operators. An example is the Moreau decomposition, which relates the (Euclidean) proximal operators of a closed convex function and its conjugate [37]. Another example is the simple relation between the proximal operator of a function g and that of the composition g(Ax) with a linear function when AA^T is a multiple of the identity; see, e.g., [4,19]. This composition rule is used in [39] to establish the equivalence between some well-known first-order proximal methods for problem (1) with A = I and with general A. The purpose of this paper is to present new Bregman extensions and convergence results for the Condat-Vũ and PD3O algorithms. The main contributions are as follows.

• The Condat-Vũ algorithm [20,50] exists in a primal and a dual variant. We discuss extensions of the two algorithms that use Bregman proximal operators in the primal and dual updates. The Bregman primal Condat-Vũ algorithm first appeared in [12, Algorithm 1], and is also a special case of the algorithm proposed in [52] for a more general convex-concave saddle point problem. We give a new derivation of this method and its dual variant, by applying the Bregman proximal point method to the primal-dual optimality conditions. Based on this interpretation, we provide a unified framework for the convergence analysis of the two variants, and show an O(1/k) ergodic convergence rate, consistent with previous results for Euclidean proximal operators in [20,50] and Bregman proximal operators in [12]. We also give a convergence result for the primal and dual iterates.

• We propose an easily implemented backtracking line search technique for selecting stepsizes in the Bregman dual Condat-Vũ algorithm for problems with equality constraints. The proposed backtracking procedure is similar to the technique in [36] for the special setting of PDHG with Euclidean proximal operators, but has some important differences even in this special case. We give a detailed analysis of the algorithm with line search and recover the O(1/k) ergodic rate of convergence of the related algorithms in [30,36].

• We propose a Bregman extension of PD3O and establish an ergodic convergence result.

The paper is organized as follows. Section 2 gives a precise statement of problem (1) and reviews the duality theory used in the rest of the paper. In Section 3 we review some well-known first-order proximal methods and establish connections between them. Section 4 provides the necessary background on Bregman distances. In Section 5 we discuss the Bregman primal and dual Condat-Vũ algorithms and analyze their convergence. The line search technique and its convergence are discussed in Section 6. In Section 7 we extend PD3O to a Bregman proximal method and analyze its convergence. Section 8 contains the results of a numerical experiment.

Duality theory and merit functions

This section summarizes the facts from convex duality theory that underlie the primal-dual methods discussed in the paper. We also describe the primal-dual merit functions used in the convergence analysis. We use the notation ⟨x, y⟩ = x^T y for the standard inner product of vectors x and y, and ‖x‖ = ⟨x, x⟩^{1/2} for the Euclidean norm of a vector x. Other norms will be distinguished by a subscript.
Problem formulation

In (1) the vector x is an n-vector and A is an m × n matrix. The functions f, g, h are closed and convex, and h is differentiable:

h(x) ≥ h(x′) + ⟨∇h(x′), x − x′⟩ for all x, x′ ∈ dom h,

where dom h is an open convex set. We assume that f + h and g are proper, i.e., have nonempty domains.

An important example of (1) is the constrained problem obtained by taking g = δ_C, the indicator function of a closed convex set C, so that g(Ax) expresses the constraint Ax ∈ C. For C = {b} the constraints reduce to a set of linear equations Ax = b. This special case actually covers all applications of the more general problem (1), since (1) can be reformulated as

minimize f(x) + g(y) + h(x) subject to Ax = y,

at the expense of increasing the problem size by introducing a splitting variable y.

Dual problem and optimality conditions

The dual of problem (1) is

maximize −g*(z) − (f + h)*(−A^T z),    (2)

where (f + h)* and g* are the conjugates of f + h and g. The conjugate (f + h)* is the infimal convolution of f* and h*, denoted by f* □ h*:

(f* □ h*)(y) = inf_v ( f*(y − v) + h*(v) ).

The primal-dual optimality conditions for (1) and (2) are

0 ∈ ∂f(x) + ∇h(x) + A^T z,  0 ∈ ∂g*(z) − Ax.    (3)

Here ∂f and ∂g* are the subdifferentials of f and g*. We often write the optimality conditions as a monotone inclusion 0 ∈ F(x, z), where F(x, z) = (∂f(x) + ∇h(x) + A^T z, ∂g*(z) − Ax). Throughout the paper, we assume that the optimality conditions (3) are solvable. We will refer to the convex-concave function

L(x, z) = f(x) + h(x) + ⟨z, Ax⟩ − g*(z)    (4)

as the Lagrangian of (1). We follow the convention that L(x, z) = +∞ if x ∉ dom(f + h), and L(x, z) = −∞ if x ∈ dom(f + h) and z ∉ dom g*. The objective functions of (1) and the dual problem (2) can be expressed as

f(x) + g(Ax) + h(x) = sup_z L(x, z),  −g*(z) − (f + h)*(−A^T z) = inf_x L(x, z).

Solutions x⋆, z⋆ of the optimality conditions (3) form a saddle point of L, i.e., satisfy

inf_x L(x, z⋆) = L(x⋆, z⋆) = sup_z L(x⋆, z).

In particular, L(x⋆, z⋆) is the optimal value of (1) and (2).

Merit functions

The algorithms discussed in this paper generate primal and dual iterates and approximate solutions x, z with x ∈ dom(f + h) and z ∈ dom g*. The feasibility conditions Ax ∈ dom g and −A^T z ∈ dom(f + h)* are not necessarily satisfied. Hence the duality gap

sup_{x′, z′} ( L(x, z′) − L(x′, z) )    (5)

may not always be useful as a merit function for measuring convergence. If we add constraints x′ ∈ X and z′ ∈ Z to the optimization problems on the left-hand side of (5), where X and Z are compact convex sets, we obtain the function

η(x, z) = sup_{x′ ∈ X, z′ ∈ Z} ( L(x, z′) − L(x′, z) ),    (6)

defined for all x ∈ dom(f + h) and z ∈ dom g*. This follows from the fact that the functions f + h + δ_X and g* + δ_Z are closed and co-finite, so their conjugates have full domain [43, Corollary 13.3.1]. If η(x, z) is easily computed, and η(x, z) ≥ 0 for all x ∈ dom(f + h) and z ∈ dom g* with equality only if x and z are optimal, then the function η can serve as a merit function in primal-dual algorithms for problem (1).

If dom(f + h) and dom g* are bounded, then X and Z can be chosen to contain dom(f + h) and dom g*. The constraints in (6) are then redundant and η(x, z) equals the duality gap (5). Boundedness of dom(f + h) and dom g* is a common assumption in the literature on primal-dual first-order methods.
A weaker assumption is that (1) has an optimal solution x⋆ ∈ int(X) and (2) has an optimal solution z⋆ ∈ int(Z). Then η(x, z) ≥ 0 for all x ∈ dom(f + h) and z ∈ dom g*, with equality η(x, z) = 0 only if x, z are optimal for (1) and (2). To see this, we first express the two terms in (6) in terms of the support functions σ_X = δ*_X and σ_Z = δ*_Z of X and Z. Consider the problem of minimizing η(x, z). By expanding the infimal convolutions in the expressions for the two terms of η, this convex optimization problem can be formulated as problem (7), with variables x, y, z, w. The dual of this problem is problem (8), with variables x, z. The optimality conditions for (7) and (8) include the conditions Ax − y ∈ N_Z(z) and −A^T z − w ∈ N_X(x), where N_X(x) = ∂δ_X(x) is the normal cone to X at x, and N_Z(z) = ∂δ_Z(z) is the normal cone to Z at z. By assumption, there exist points x⋆ ∈ int(X) and z⋆ ∈ int(Z) that are optimal for the original problem (1) and its dual (2). It can be verified that (x, y, z, w) = (x⋆, Ax⋆, z⋆, −A^T z⋆) and (x, z) = (x⋆, z⋆) are optimal for (7) and (8), and that η(x⋆, z⋆) = 0. Now let (x̂, ẑ) be any other minimizer of η, i.e., η(x̂, ẑ) = 0. Then x̂, ẑ and the corresponding minimizers ŷ, ŵ in (7) must satisfy the optimality conditions with the optimal dual variables x = x⋆, z = z⋆. In particular, the objective value of (7) at this point reduces to

0 = f(x̂) + h(x̂) + g(Ax̂) + g*(ẑ) + (f + h)*(−A^T ŵ),

the duality gap associated with the original problem and its dual. This shows that η(x̂, ẑ) = 0 implies that x̂, ẑ are optimal for problems (1) and (2).

Consider, for example, the primal and dual pair obtained with g = δ_{b}, i.e., minimizing f(x) + h(x) subject to Ax = b. For a suitable choice of X and Z, the first three terms of the resulting merit function are the primal objective augmented with an exact penalty for the constraint Ax = b.

As another example, consider the problem of minimizing ‖x‖_1 subject to Ax ≤ b. This is an example of (1) with f(x) = ‖x‖_1, h(x) = 0, and g the indicator function of {y | y ≤ b}. The domains dom f and dom g* are unbounded. For a suitable choice of X and Z, the merit function (6) contains a term that is an exact penalty for the primal constraint Ax ≤ b and a term that is an exact penalty for the dual constraint ‖A^T z‖_∞ ≤ 1.

First-order proximal algorithms: survey and connections

In this section, we discuss several first-order proximal algorithms and their connections. We start with four three-operator splitting algorithms for problem (1): the primal and dual variants of the Condat-Vũ algorithm [20,50], the primal-dual three-operator (PD3O) algorithm [51], and the primal-dual Davis-Yin (PDDY) algorithm [44]. For each of the four algorithms, we make connections with other first-order proximal algorithms, using reduction (i.e., setting some parts of (1) to zero) and the "completion" reformulation (based on extending A to a matrix with orthogonal rows and equal row norms) [39]. We focus on the formal connections between algorithms; the connections do not necessarily provide the best approach for convergence analysis or the best known convergence results.

The proximal operator or proximal mapping of a closed convex function f : R^n → R is defined as

prox_f(y) = argmin_x ( f(x) + (1/2)‖x − y‖² ).    (9)

If f is closed and convex, the minimizer in the definition exists and is unique for all y [37]. We will call (9) the standard or the Euclidean proximal operator when we need to distinguish it from the Bregman proximal operators defined in Section 4.
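As a concrete instance of (9): for f(x) = λ‖x‖_1 the Euclidean proximal operator is the componentwise soft-thresholding map. The snippet below is a standard textbook example, not code from the paper.

```python
import numpy as np

# Euclidean proximal operator (9) of f(x) = lam * ||x||_1:
# componentwise soft-thresholding.
def prox_l1(y, lam):
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

print(prox_l1(np.array([1.5, -0.2, 0.7]), 0.5))  # [ 1.  -0.   0.2]
```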
Condat-Vũ three-operator splitting algorithm

We start with the (primal) Condat-Vũ three-operator splitting algorithm, proposed independently by Condat [20] and Vũ [50]:

x^(k+1) = prox_τf( x^(k) − τ(∇h(x^(k)) + A^T z^(k)) )
z^(k+1) = prox_σg*( z^(k) + σ A(2x^(k+1) − x^(k)) )    (10)

The stepsizes σ and τ must satisfy στ‖A‖²₂ + τL ≤ 1, where ‖A‖₂ is the spectral norm of A and L is the Lipschitz constant of ∇h with respect to the Euclidean norm. Many other first-order proximal algorithms can be viewed as special cases of (10), and their connections are summarized in Figure 1 (a diagram relating (10) to (primal) PDHG, the Loris-Verhoeven algorithm with shift (11), and the proximal gradient method). When h = 0, algorithm (10) reduces to the (primal) primal-dual hybrid gradient (PDHG) method [10,12,42], or PDHGMu in [26]. When g = 0 in (10) (and assuming z^(0) = 0), we obtain the proximal gradient algorithm. When f = 0, we obtain a variant of the Loris-Verhoeven algorithm [15,23,34], given in (11); we refer to this as Loris-Verhoeven with shift, for reasons that will be clarified later. Furthermore, when A = I in PDHG, we obtain the Douglas-Rachford splitting (DRS) algorithm [18,25,32]. Conversely, the "completion" technique in [39] shows that PDHG coincides with DRS applied to a reformulation of the problem. Similarly, when A = I in the primal Condat-Vũ algorithm (10), we obtain a new algorithm, which we call the reduced primal Condat-Vũ algorithm; conversely, the reduced primal Condat-Vũ algorithm reverts to (10) via the "completion" trick. We can also set f = 0 in the reduced Condat-Vũ algorithm, or A = I in (11), and obtain the reduced Loris-Verhoeven algorithm with shift (12). Finally, due to the absence of f in (12), it is not clear how to apply the "completion" trick to (12) to obtain (11).

Condat [20] also discusses a variant of (10), which we call the dual Condat-Vũ algorithm:

z^(k+1) = prox_σg*( z^(k) + σ A x^(k) )
x^(k+1) = prox_τf( x^(k) − τ(∇h(x^(k)) + A^T(2z^(k+1) − z^(k))) )    (13)

Figure 2 summarizes the proximal algorithms derived from (13) (dual PDHG, the dual Loris-Verhoeven algorithm with shift (14), and the proximal gradient method). When h = 0, algorithm (13) reduces to PDHG applied to the dual of (1) (with h = 0), which is shown to be equivalent to linearized ADMM [40] (also called Split Inexact Uzawa in [26]). Setting g = 0 in (13) yields the proximal gradient algorithm. When f = 0, we obtain a new algorithm, given in (14); following the previous naming convention, we call it the dual Loris-Verhoeven algorithm with shift. Furthermore, setting A = I in (13) gives the reduced dual Condat-Vũ algorithm; conversely, applying the "completion" trick to this reduced algorithm recovers (13). Similarly, setting A = I in dual PDHG gives dual DRS, i.e., DRS with f and g switched, and conversely, the "completion" trick recovers dual PDHG from dual DRS. We can also set A = I in (14), or f = 0 in the reduced dual Condat-Vũ algorithm, and obtain the reduced dual Loris-Verhoeven algorithm with shift (15).

Primal-dual three-operator (PD3O) splitting algorithm

The third diagram, Figure 3, starts with the primal-dual three-operator (PD3O) splitting algorithm [51], given in (16). Compared with the Condat-Vũ algorithm (10), PD3O has slightly more complicated updates and larger per-iteration complexity, but the stepsize requirement is looser: στ‖A‖²₂ ≤ 1 and τ ≤ 1/L. When h = 0, (16) reduces to (primal) PDHG. The classical proximal gradient algorithm is obtained by setting g = 0. When f = 0, it reduces to the Loris-Verhoeven iterations (17). (Figure 3 relates PD3O (16) to the Davis-Yin, (primal) Douglas-Rachford, proximal gradient, (primal) PDHG, and Loris-Verhoeven (17) algorithms.) This algorithm was discovered independently as the Loris-Verhoeven algorithm [34], the primal-dual fixed point algorithm based on proximity operator (PDFP2O)
[15], and the proximal alternating predictor corrector (PAPC) [23]. Comparison with (11) reveals a minor difference between the two algorithms: the gradient term in the z-update is taken at the newest primal iterate x^(k+1) in Loris-Verhoeven (17), and at the previous point x^(k) in the shifted version. This difference is inherited by the proximal gradient algorithm and its shifted version (12). Furthermore, when A = I and σ = 1/τ in PD3O, we recover the well-known Davis-Yin splitting (DYS) algorithm [21]. We can also set A = I in (17) and obtain the iterations (18). The stepsize conditions require στ ≤ 1 and τ ≤ 1/L, so we can set σ = 1/τ and apply the Moreau decomposition; the resulting algorithm is exactly the proximal gradient method. The only difference in the z-update between (12) and (18) is the point at which the gradient of h is taken. The second algorithm uses the most up-to-date iterate x^(k+1) when evaluating the gradient of h, and this choice allows a larger stepsize τ.

Primal-dual Davis-Yin (PDDY) splitting algorithm

The core algorithm in Figure 4 is the primal-dual Davis-Yin (PDDY) splitting algorithm [44]. The stepsize requirement is the same as that of PD3O: στ‖A‖²₂ ≤ 1 and τ ≤ 1/L. Figure 4 is almost identical to Figure 3 with the roles of f and g exchanged. When h = 0, PDDY reduces to dual PDHG. In addition, when A = I and σ = 1/τ, PDDY reduces to the Davis-Yin algorithm, but with f and g exchanged. Similarly, when h = 0, A = I, and σ = 1/τ, PDDY reverts to the Douglas-Rachford algorithm with f and g switched.

We have seen that the middle and right parts of Figure 4 are those of Figure 3 with f and g switched. However, when one of the functions f or g is absent, the algorithms reduced from PD3O and PDDY are exactly the same. In particular, when f = 0, PDDY reduces to the Loris-Verhoeven algorithm.

Bregman distances

In this section we give the definition of Bregman proximal operators and the basic properties that will be used in the paper. We refer the interested reader to [9] for an in-depth discussion of Bregman distances, their history, and applications.

Let φ be a convex function whose domain has nonempty interior, and assume φ is continuous on dom φ and continuously differentiable on int(dom φ). The generalized distance (or Bregman distance) generated by the kernel function φ is the function

d(x, y) = φ(x) − φ(y) − ⟨∇φ(y), x − y⟩,

defined for x ∈ dom φ and y ∈ int(dom φ). The corresponding Bregman proximal operator of a function f is

prox^φ_f(y, a) = argmin_x ( f(x) + ⟨a, x⟩ + d(x, y) ),    (20)

or, equivalently, after dropping terms that do not depend on x,

prox^φ_f(y, a) = argmin_x ( f(x) + ⟨a − ∇φ(y), x⟩ + φ(x) ).    (21)

It is assumed that for every a and every y ∈ int(dom φ) the minimizer x = prox^φ_f(y, a) is unique and in int(dom φ).
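Before specializing to the Euclidean kernel, the toy computation below shows a case where the Bregman proximal operator (20) has a simple closed form: the negative-entropy kernel φ(x) = Σ_i x_i log x_i with f a linear function plus the indicator of the probability simplex. This is a standard illustration chosen here for concreteness; the paper's own example appears in Section 8.

```python
import numpy as np

# Bregman proximal operator (20) for the negative-entropy kernel
# phi(x) = sum_i x_i*log(x_i), with f(x) = <c, x> + indicator of the simplex.
# The generated distance d(x, y) is the generalized Kullback-Leibler
# divergence, and the minimizer is a normalized multiplicative update.
def bregman_prox(y, a, c):
    """argmin_x { <c, x> + <a, x> + d(x, y) : x in the probability simplex }."""
    w = y * np.exp(-(a + c))   # stationarity: log(x/y) = -(a + c) - nu
    return w / w.sum()         # normalizing enforces the simplex constraint

y = np.array([0.25, 0.25, 0.5])   # current iterate, in int(dom phi)
a = np.array([0.1, -0.2, 0.0])    # linear term from the definition (20)
c = np.array([1.0, 0.0, 2.0])
x = bregman_prox(y, a, c)
print(x, x.sum())                 # a probability vector
```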
The distance generated by the kernel φ(x) = (1/2)‖x‖² is d(x, y) = (1/2)‖x − y‖². The corresponding Bregman proximal operator is the standard proximal operator applied to y − a: prox^φ_f(y, a) = prox_f(y − a). For this distance, closedness and convexity of f guarantee that the proximal operator is well defined. The questions of existence and uniqueness are more complicated for general Bregman distances. There are no simple general conditions that guarantee that, for every a and every y ∈ int(dom φ), the generalized proximal operator (20) is uniquely defined and in int(dom φ). Some sufficient conditions are available (see, for example, [8, Section 4.1] and [3, Assumption A]), but they may be quite restrictive or difficult to verify in practice. In applications, however, the Bregman proximal operator is used with specific combinations of f and φ for which the minimization problem in (20) is particularly easy to solve. In those applications, existence and uniqueness of the solution follow directly from the closed-form solution or from the availability of a fast algorithm to compute it. A typical example will be provided in Section 8.

From the expression (21) we see that x̂ = prox^φ_f(y, a) satisfies

∇φ(y) − a − ∇φ(x̂) ∈ ∂f(x̂).    (22)

Equivalently, by the definition of subgradient,

f(x) ≥ f(x̂) + ⟨∇φ(y) − a − ∇φ(x̂), x − x̂⟩ for all x ∈ dom f ∩ dom φ.

Bregman Condat-Vũ three-operator splitting algorithms

We now discuss two Bregman three-operator splitting algorithms for the problem (1). The algorithms use a generalized distance d_p in the primal space, generated by a kernel φ_p, and a generalized distance d_d in the dual space, generated by a kernel φ_d. The first algorithm, given in (23), will be referred to as the Bregman primal Condat-Vũ algorithm. The second algorithm, given in (24), will be called the Bregman dual Condat-Vũ algorithm. The two algorithms need starting points x^(0) ∈ int(dom φ_p) ∩ dom h and z^(0) ∈ int(dom φ_d). Conditions on the stepsizes σ, τ will be specified later. When Euclidean distances are used for the primal and dual proximal operators, the two algorithms reduce to the primal and dual variants of the Condat-Vũ algorithm, (10) and (13), respectively. Algorithm (23) was proposed in [12]; here we discuss it together with (24) in a unified framework. In Section 5.1 we show that the proposed algorithms can be interpreted as the Bregman proximal point method applied to a monotone inclusion problem. In Section 5.2 we analyze their convergence. In Section 5.3 we discuss the connections between the two algorithms and other Bregman proximal splitting methods.

Assumptions

Throughout Section 5 we make the following assumptions. The kernel functions φ_p and φ_d are 1-strongly convex with respect to norms ‖·‖_p and ‖·‖_d, respectively:

d_p(x, x′) ≥ (1/2)‖x − x′‖²_p,  d_d(z, z′) ≥ (1/2)‖z − z′‖²_d    (25)

for all (x, x′) ∈ dom d_p and (z, z′) ∈ dom d_d. The assumption that the strong convexity constants are equal to one can be made without loss of generality, by scaling the norms (or distances) if needed. We also assume that the function Lφ_p − h is convex for some L > 0; more precisely, dom φ_p ⊆ dom h and

h(x) − h(x′) − ⟨∇h(x′), x − x′⟩ ≤ L d_p(x, x′)    (26)

for all (x, x′) ∈ dom d_p. Note that this assumption is looser than the one in [12, Equation (4)]. We denote by ‖A‖ the matrix norm

‖A‖ = max { ‖Ax‖_{d,*} : ‖x‖_p ≤ 1 } = max { ‖A^T z‖_{p,*} : ‖z‖_d ≤ 1 },    (27)

where ‖·‖_{p,*} and ‖·‖_{d,*} are the dual norms of ‖·‖_p and ‖·‖_d. It is also assumed that the primal-dual optimality conditions (3) have a solution (x⋆, z⋆) with x⋆ ∈ dom φ_p and z⋆ ∈ dom φ_d.
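For orientation, the display below sketches a plausible form of the Bregman primal Condat-Vũ iteration (23), modeled on the Euclidean iteration (10) and on [12, Algorithm 1]; in the dual variant (24), the extrapolation 2z^(k+1) − z^(k) is applied on the dual side instead. This is a reading aid under those assumptions, not a verbatim copy of the paper's display.

```latex
\begin{aligned}
x^{(k+1)} &= \operatorname*{argmin}_{x}\Big( f(x)
   + \big\langle \nabla h(x^{(k)}) + A^{T} z^{(k)},\, x \big\rangle
   + \tfrac{1}{\tau}\, d_p(x, x^{(k)}) \Big), \\
z^{(k+1)} &= \operatorname*{argmin}_{z}\Big( g^{*}(z)
   - \big\langle A(2x^{(k+1)} - x^{(k)}),\, z \big\rangle
   + \tfrac{1}{\sigma}\, d_d(z, z^{(k)}) \Big).
\end{aligned}
```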
Derivation from Bregman proximal point method

The Bregman Condat-Vũ algorithms (23) and (24) can be viewed as applications of the Bregman proximal point algorithm to the optimality conditions (3). This interpretation extends the derivation of the Bregman PDHG algorithm from the Bregman proximal point algorithm given in [30]. The idea originates with He and Yuan's interpretation of PDHG as a "preconditioned" proximal point algorithm [28].

The Bregman proximal point algorithm [9,24,27] is an algorithm for monotone inclusion problems 0 ∈ F(u). The update u^(k+1) in one iteration of the algorithm is defined as the solution of the inclusion

∇φ(u^(k)) − ∇φ(u^(k+1)) ∈ F(u^(k+1)),    (28)

where φ is a Bregman kernel function. Applied to (3), with a kernel function φ_pd, the algorithm generates a sequence (x^(k), z^(k)) defined by the inclusion (28).

Primal-dual Bregman distances

We introduce four possible primal-dual kernel functions: the functions φ_+ and φ_−, where σ, τ > 0, and the functions φ_pcv and φ_dcv. The subscripts in φ_+ and φ_− refer to the sign of the inner product term ⟨z, Ax⟩. The subscripts in φ_pcv and φ_dcv indicate the algorithm (Bregman primal or dual Condat-Vũ) for which these distances will be relevant. If these kernel functions are convex, they generate corresponding Bregman distances: the distances generated by φ_+ and φ_−, and the distances generated by φ_dcv and φ_pcv. We now show that φ_+ and φ_− are convex if στ‖A‖² ≤ 1 and strongly convex if στ‖A‖² < 1, and that the functions φ_dcv and φ_pcv are convex if στ‖A‖² + τL ≤ 1 and strongly convex if στ‖A‖² + τL < 1.

Bregman Condat-Vũ algorithms from proximal point method

The Bregman primal Condat-Vũ algorithm (23) is the Bregman proximal point method with the kernel function φ_pd = φ_pcv. If we take φ_pd = φ_pcv in (28), we obtain two coupled inclusions that determine x^(k+1) and z^(k+1). The first inclusion shows that x^(k+1) solves the optimization problem whose solution is the x-update (23a) in the Bregman primal Condat-Vũ method. The second inclusion shows that z^(k+1) solves the optimization problem whose solution is the z-update (23b).

Convergence analysis

The derivation in Section 5.1 allows us to apply existing convergence theory for the Bregman proximal point method to the proposed algorithms (23) and (24). In particular, Solodov and Svaiter [45] have studied Bregman proximal point methods with inexact prox-evaluations for solving variational inequalities, which include the monotone inclusion problem as a special case. The results in [45] can be applied to analyze the convergence of the Bregman Condat-Vũ methods with inexact evaluations of proximal operators. The literature on the Bregman proximal point method for monotone inclusions [24,27,45] focuses on the convergence of iterates, and this generally requires additional assumptions on φ_p and φ_d (beyond the convexity assumptions made in Section 5.1). In this section we present a self-contained convergence analysis and give a direct proof of an O(1/k) rate of ergodic convergence. We also give a self-contained proof of convergence of the iterates x^(k) and z^(k).

We make the assumptions listed in Section 5.1: the strong convexity assumption (25) for the primal and dual kernels φ_p and φ_d, and the relative smoothness property (26) of the function h. We assume that the stepsizes σ, τ satisfy the condition (30), and that the primal-dual optimality conditions (3) have a solution (x⋆, z⋆) ∈ dom φ_p × dom φ_d.
For the sake of brevity, we combine the analysis of the Bregman primal and the Bregman dual Condat-Vũ algorithms. In the following, the distances d, d̄ and the kernel φ are those built from φ_pcv for the Bregman primal Condat-Vũ algorithm (23), and from φ_dcv for the Bregman dual Condat-Vũ algorithm (24).

Proof. We write (23) and (24) in the unified notation (33), in which the x-update uses a dual point z̄ and the z-update uses a primal point x̄: for the primal variant, x̄ = 2x^(k+1) − x^(k) and z̄ = z^(k); for the dual variant, x̄ = x^(k) and z̄ = 2z^(k+1) − z^(k). The optimality condition (22) for the proximal operator evaluation (33a) gives an inequality valid for all x ∈ dom f ∩ dom φ_p; the optimality condition for (33b) gives an inequality valid for all z ∈ dom g* ∩ dom φ_d. Combining the two inequalities, and using the convexity of h in the second step, gives the inequality (35), valid for all x ∈ dom f ∩ dom φ_p and all z ∈ dom g* ∩ dom φ_d. Substituting the expressions for x̄ and z̄ of the Bregman primal Condat-Vũ algorithm (23) gives one expression for the last line of (35); substituting the expressions for x̄ and z̄ of the Bregman dual Condat-Vũ algorithm gives another. Therefore, for both algorithms, (35) implies the one-iteration bound (31) if we select the minus sign in ∓ for the Bregman primal Condat-Vũ algorithm and the plus sign for the Bregman dual Condat-Vũ algorithm. For the primal method, the bound involves the distance d(x^(k+1), z^(k+1); x^(k), z^(k)); for the dual method, the analogous bound involves d̄(x^(k+1), z^(k+1); x^(k), z^(k)).

Ergodic convergence

We define the averaged iterates

x_avg^(k) = (1/k) Σ_{i=1}^k x^(i),  z_avg^(k) = (1/k) Σ_{i=1}^k z^(i)

for k ≥ 1, and show that

L(x_avg^(k), z) − L(x, z_avg^(k)) ≤ (1/k) d(x, z; x^(0), z^(0))    (37)

for all x ∈ dom f ∩ dom φ_p and z ∈ dom g* ∩ dom φ_d.

Proof. From (32), and since L(u, v) is convex in u and concave in v, applying Jensen's inequality to the averages gives (37) for all x ∈ dom f ∩ dom φ_p and z ∈ dom g* ∩ dom φ_d; the last step follows from (31). More generally, if X ⊆ dom φ_p and Z ⊆ dom φ_d are compact convex sets that contain optimal solutions x⋆, z⋆ in their interiors, then the merit function (6) admits a bound of the same O(1/k) form.

To show convergence of the entire sequence (x^(k), z^(k)), we substitute (x̂, ẑ) in (32). Since the left-hand side is nonnegative, we have d(x̂, ẑ; x^(k), z^(k)) ≤ d(x̂, ẑ; x^(0), z^(0)) for all k ≥ 1. This further implies a bound valid for all k ≥ k_i. By the second additional assumption mentioned above, the right-hand side converges to zero. Then the left-hand side also converges to zero and, from (41), x^(k) → x̂ and z^(k) → ẑ.
Relation to other Bregman proximal algorithms

Following similar steps as in Section 3, we obtain several Bregman proximal splitting methods as special cases of (23) and (24). The connections are summarized in Figures 5 and 6. A comparison of Figures 1 and 5 shows that all the reduction relations (A = I) remain valid. However, it is unclear how to apply the "completion" operation to algorithms based on non-Euclidean Bregman distances. When h = 0, (23) reduces to Bregman PDHG [12]. When g = 0, g* = δ_{0} (and assuming z^(0) = 0), we obtain the Bregman proximal gradient algorithm [3]. When f = 0 in (23), we obtain the Bregman Loris-Verhoeven algorithm with shift, given in (43). Furthermore, when A = I in (23), we recover the reduced Bregman primal Condat-Vũ algorithm. Similarly, setting A = I in Bregman PDHG yields the Bregman Douglas-Rachford algorithm. Last, when we set A = I in (43), we obtain the reduced Bregman Loris-Verhoeven algorithm with shift (44).

Similarly, the Bregman dual Condat-Vũ algorithm (24) reduces to several other Bregman proximal splitting methods, as summarized in Figure 6. In particular, when f = 0 in (24), we obtain the Bregman dual Loris-Verhoeven algorithm with shift, given in (45). Moreover, setting A = I in (45) yields the reduced Bregman dual Loris-Verhoeven algorithm with shift (46).

Bregman dual Condat-Vũ algorithm with line search

The algorithms (23) and (24) use constant parameters σ and τ. The stepsize condition (30) involves the matrix norm ‖A‖ and the Lipschitz constant L in (26). Estimating or bounding ‖A‖ for a large matrix can be difficult. As an added complication, the norms ‖·‖_p and ‖·‖_d in the definition of the matrix norm (27) are assumed to be scaled so that the strong convexity parameters of the primal and dual kernels are equal to one. Close bounds on the strong convexity parameters may also be difficult to obtain. Using conservative bounds for ‖A‖ and L results in unnecessarily small values of σ and τ, and can dramatically slow down convergence. Even when the estimates of ‖A‖ and L are accurate, the stepsize requirements (30) are stricter than necessary in most iterations, as observed in [1]. In view of these arguments, line search techniques for primal-dual proximal methods have recently become an active area of research. Malitsky and Pock [36] proposed a line search technique for PDHG and the Condat-Vũ algorithm in the Euclidean case. The algorithm with adaptive parameters in [49] focuses on a special case of (1) (namely f = 0) and extends the Loris-Verhoeven algorithm (17). A Bregman proximal splitting method with line search is discussed in [30] for problem (1) with h = 0 and g = δ_{b}. In this section, we extend the Bregman dual Condat-Vũ algorithm (24) with a varying parameter option, in which the stepsizes are chosen adaptively without requiring any estimates or bounds for ‖A‖ or the strong convexity parameters of the kernels. The algorithm is restricted to problems in the equality-constrained form

minimize f(x) + h(x) subject to Ax = b,    (47)

a special case of (1) with g = δ_{b}, the indicator function of the singleton {b}.

The details of the algorithm are discussed in Section 6.1 and a convergence analysis is presented in Section 6.2. The main conclusion is an O(1/k) rate of ergodic convergence, consistent with previous results for related algorithms [30,36].
6 Bregman dual Condat-Vũ algorithm with line search

The algorithms (23) and (24) use constant parameters σ and τ. The stepsize condition (30) involves the matrix norm ‖A‖ and the Lipschitz constant L in (26). Estimating or bounding ‖A‖ for a large matrix can be difficult. As an added complication, the norms ‖·‖_p and ‖·‖_d in the definition of the matrix norm (27) are assumed to be scaled so that the strong convexity parameters of the primal and dual kernels are equal to one. Close bounds on the strong convexity parameters may also be difficult to obtain. Using conservative bounds for ‖A‖ and L results in unnecessarily small values of σ and τ, and can dramatically slow down convergence. Even when the estimates of ‖A‖ and L are accurate, the stepsize requirements (30) are still too strict in most iterations, as observed in [1]. In view of these arguments, line search techniques for primal-dual proximal methods have recently become an active area of research. Malitsky and Pock [36] proposed a line search technique for PDHG and the Condat-Vũ algorithm in the Euclidean case. The algorithm with adaptive parameters in [49] focuses on a special case of (1) (i.e., f = 0) and extends the Loris-Verhoeven algorithm (17). A Bregman proximal splitting method with line search is discussed in [30]; it considers problem (1) with h = 0 and g = δ_{b}. In this section, we extend the Bregman dual Condat-Vũ algorithm (24) with a varying parameter option, in which the stepsizes are chosen adaptively without requiring any estimates or bounds for ‖A‖ or the strong convexity parameters of the kernels. The algorithm is restricted to problems in the equality constrained form, a special case of (1) with g = δ_{b}, the indicator function of the singleton {b}, so that the constraint reads Ax = b. The details of the algorithm are discussed in Section 6.1 and a convergence analysis is presented in Section 6.2. The main conclusion is an O(1/k) rate of ergodic convergence, consistent with previous results for related algorithms [30, 36].

Assumptions

We make the same assumptions as in Section 5.1, but define the primal distance with respect to the Euclidean norm ‖·‖. The matrix norm ‖A‖ is defined accordingly. In the line search algorithm, the parameters θ_k, τ_k, σ_k are determined by a backtracking search (a schematic code sketch is given at the end of this section). At the start of the algorithm, we set τ_{−1} and σ_{−1} to some positive values. To start the search in iteration k we choose a trial value θ̄_k ≥ 1. For i = 0, 1, 2, ..., we set the trial parameters θ_k, τ_k, σ_k and compute z̄^(k+1), x^(k+1), z^(k+1) using (48). For some δ ∈ (0, 1], if condition (49) holds, we accept the computed iterates z̄^(k+1), x^(k+1), z^(k+1) and parameters θ_k, σ_k, τ_k, and terminate the backtracking search. If (49) does not hold, we increment i and continue the backtracking search. The backtracking condition (49) is similar to the condition in the line search algorithm for PDHG with Euclidean proximal operators [36, Algorithm 4], but it is not identical, even in the Euclidean case. The proposed condition is weaker and allows larger stepsizes than the condition in [36, Algorithm 4].

Convergence analysis

The proof strategy is the same as in [30, Section 3.3], extended to account for the function h. The main conclusion is an O(1/k) rate of ergodic convergence, shown in equation (57).

Lower bound on algorithm parameters. We first show that the stepsizes are bounded below by positive constants τ_min and σ_min. The lower bounds imply that the backtracking eventually terminates with positive stepsizes σ_k and τ_k.

Proof. Applying (31), together with the Lipschitz condition (26), we see that the backtracking condition (49) holds at iteration k if 0 < δ < 1 and the trial stepsizes are small enough. Then mathematical induction can be used to prove (50). The two lower bounds (50) hold at k = 0 by the definition of τ_min and σ_min. Now assume τ_{k−1} ≥ τ_min, σ_{k−1} ≥ σ_min, and consider the kth iteration. The first attempt of θ_k is θ_k = θ̄_k ≥ 1. If this value is accepted, the bounds follow immediately. Otherwise, one or more backtracking steps are needed. Denote by θ̂_k the last rejected value. Then θ̂_k² τ_{k−1}² β‖A‖² + θ̂_k τ_{k−1} L > δ², and the accepted θ_k satisfies the corresponding lower bound. Therefore (50) holds.

Proof. The optimality condition for the primal prox-operator (48b) gives an inequality valid for all x ∈ dom f ∩ dom φ_p. The second inequality follows from the convexity of h. The dual update (48c) implies an equality (53). We evaluate this equality at z = z^(i) and add it to the equality at z = z^(i−2) multiplied by θ_{i−1}. Now we combine (52) for k = i − 1 with (54) and (55). For i ≥ 1, this gives the desired result (51). The first inequality follows from (52). In the second-to-last step we substitute (54) and (55). The last step uses the line search exit condition (49) at k = i − 1.

Ergodic convergence

We define the averaged primal and dual sequences for k ≥ 1. We show that (56) holds for all x ∈ dom f ∩ dom φ_p and all z, for any choice of δ ∈ (0, 1] in (49). If we compare (56) and (37), we note that the two left-hand sides involve different dual iterates (z̄_avg^(k) as opposed to z_avg^(k)).

Proof. From (51), since L(u, v) is convex in u and affine in v, dividing by Σ_{i=1}^{k} τ_{i−1} gives (56).

Substituting x = x⋆ and z = z⋆ in (58) yields the corresponding bound, since Ax⋆ = b. More generally, suppose X ⊆ dom f ∩ dom φ_p is a compact convex set containing an optimal solution x⋆ in its interior, and Z = {z : ‖z‖ ≤ γ} contains a dual optimal z⋆; then the merit function η defined in (6) satisfies the stated bound. The second line follows from (56) and the third line follows from (50).
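The following Python skeleton sketches the backtracking loop described above. The acceptance test stands in for condition (49), and both the halving factor and the fixed ratio σ_k = βτ_k are our assumptions, not details taken from the paper.

```python
def backtracking_search(theta_bar, tau_prev, beta, compute_iterates, accept):
    """Schematic line search for one iteration of the algorithm.
    compute_iterates: maps (theta, tau, sigma) to the trial iterates via (48).
    accept: implements the backtracking condition (49) for a given delta."""
    theta = theta_bar                      # first trial value, theta_bar >= 1
    while True:
        tau = theta * tau_prev
        sigma = beta * tau                 # fixed ratio sigma_k = beta * tau_k (assumed)
        trial = compute_iterates(theta, tau, sigma)
        if accept(trial, theta, tau, sigma):
            return trial, theta, tau, sigma
        theta *= 0.5                       # shrink and retry (factor assumed)
```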
Monotonicity properties and convergence of iterates. For x = x⋆, z = z⋆, the left-hand side of (51) is nonnegative, and we obtain monotonicity inequalities for k ≥ 0. These inequalities hold for any value δ ∈ (0, 1]. In particular, the last inequality implies that ‖z^(i+1) − z^(i)‖ → 0. When δ < 1 it also implies that d_p(x^(i+1), x^(i)) → 0 and, by the strong convexity assumption on φ_p, that ‖x^(i+1) − x^(i)‖ → 0. With additional assumptions similar to those in Section 5.2.3, one can show convergence of the iterates; see [30, Section 3.3.4].

7 Bregman PD3O algorithm

In this section we propose the Bregman PD3O algorithm, another Bregman proximal method for problem (1). Bregman PD3O also involves two generalized distances, d_p and d_d, generated by φ_p and φ_d, respectively, and it consists of the iterations (59) (sketched in code below for the Euclidean case). The only difference between Bregman PD3O and the Bregman primal Condat-Vũ algorithm (23) is the additional term τ(∇h(x^(k)) − ∇h(x^(k+1))). Thus the two algorithms (23) and (59) reduce to the same method when h is absent from problem (1). The additional term allows PD3O to use larger stepsizes than the Condat-Vũ algorithm. If we use the same matrix norm ‖A‖ and Lipschitz constant L in the analysis of the two methods, the stepsize conditions are those in (60). The range of possible parameters is illustrated in Figure 7. In Section 7.1 we provide a detailed convergence analysis of the Bregman PD3O method. The connections between Bregman PD3O and several other Bregman proximal methods are discussed in Section 7.2.

Assumptions

Throughout Section 7 we make the following assumptions. The kernel functions φ_p and φ_d are 1-strongly convex with respect to the Euclidean norm and an arbitrary norm ‖·‖_d, respectively. The assumption that the strong convexity constants are one can be made without loss of generality, by scaling the distances. The definition of ‖A‖ follows (27), with ‖·‖_p taken to be the Euclidean norm. We also assume that the gradient of h is L-Lipschitz continuous with respect to the Euclidean norm: dom h = Rⁿ and h(y) − h(x) − ⟨∇h(x), y − x⟩ ≤ (L/2)‖y − x‖² for all x, y ∈ dom h. The parameters τ and σ must satisfy the stepsize condition (63). Finally, we assume that the optimality condition (3) has a solution (x⋆, z⋆) ∈ dom φ_p × dom φ_d. Note that (62) is a stronger assumption than (26). (Combined with the first inequality in (61), it implies (26).) We will use the following consequence of (62): h(x) − h(y) − ⟨∇h(y), x − y⟩ ≥ (1/(2L))‖∇h(x) − ∇h(y)‖² for all x, y [38, Theorem 2.1.5].

A primal-dual Bregman distance

We introduce a primal-dual kernel φ_pd3o, where σ, τ > 0. If φ_pd3o is convex, the generated Bregman distance d_pd3o is given by (65). We now show that φ_pd3o is convex if στ‖A‖² ≤ 1.

Proof. It is sufficient to show that d_pd3o is nonnegative. In step 1 we use the strong convexity assumption (61), the definition of ‖A‖ in (27) with ‖·‖_p = ‖·‖, and the assumption στ‖A‖² ≤ 1. The bound on d_d(z, z′) follows from the 1-strong convexity of φ_d. Note that the convexity of φ_pd3o only requires the first inequality in the stepsize condition (63). Although the Bregman PD3O algorithm (59) is not the Bregman proximal point method for the kernel φ_pd3o, the distance d_pd3o will appear in the key inequality (67) of the convergence analysis.

One-iteration analysis

We first show that the iterates x^(k+1), z^(k+1) generated by Bregman PD3O (59) satisfy (67) for all x ∈ dom f ∩ dom φ_p and z ∈ dom g* ∩ dom φ_d.
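For reference, here is a minimal sketch of the iteration (59) in the Euclidean special case (both distances squared Euclidean), with the extra term τ(∇h(x^(k)) − ∇h(x^(k+1))) appearing inside the dual update as described; the update order and helper names are our assumptions, not the paper's.

```python
import numpy as np

def pd3o_euclidean(x, z, A, grad_h, prox_f, prox_gstar, tau, sigma, iters=500):
    """Euclidean specialization of the PD3O iteration sketched in the text.
    prox_f(v, tau) = argmin_x f(x) + (1/(2*tau))*||x - v||^2; prox_gstar analogous."""
    for _ in range(iters):
        gx = grad_h(x)
        x_new = prox_f(x - tau * (gx + A.T @ z), tau)
        # dual argument uses 2*x_new - x plus the PD3O correction term
        w = 2.0 * x_new - x + tau * (gx - grad_h(x_new))
        z = prox_gstar(z + sigma * (A @ w), sigma)
        x = x_new
    return x, z
```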
Proof. Recall that Bregman PD3O differs from the Bregman primal Condat-Vũ algorithm (23) only in an additional term in the dual update. The proof in Section 5.2.2 therefore applies up to (35), with the corresponding choices of x̄ and z̄. Substituting these into (34) and applying the definition of d̄ in (29) yields the desired inequality. Step 3 follows from the definition of d_pd3o in (65). In step 4 we use the Lipschitz condition (64) and the second inequality in the stepsize condition (63). The last step follows from the fact that d_pd3o is nonnegative (66).

Ergodic convergence

The iterates generated by Bregman PD3O (59) satisfy (68) for all x ∈ dom f ∩ dom φ_p and all z ∈ dom g* ∩ dom φ_d, where the averaged iterates are defined in (36).

Proof. From (67), since L(u, v) is convex in u and concave in v, the bound holds for all x ∈ dom f ∩ dom φ_p and z ∈ dom g* ∩ dom φ_d. The third inequality follows from (65).

Relation to other Bregman proximal algorithms

The proposed algorithm (59) can be viewed as an extension of PD3O (16) to generalized distances, and it reduces to several Bregman proximal methods as special cases. These algorithms can also be organized into a diagram similar to Figure 3. Figure 8 starts from Bregman PD3O (59) and summarizes its connection to several Bregman proximal methods. When h = 0, (59) reduces to Bregman PDHG, and when g = 0, (59) reduces to the Bregman proximal gradient algorithm. The Bregman Loris-Verhoeven algorithm is Bregman PD3O with f = 0; this algorithm has been discussed in [17] under the name NEPAPC. Setting A = I (with σ = 1/τ), we obtain a new variant of the Bregman proximal gradient algorithm:

x^(k+1) = argmin_x { ⟨∇h(x^(k)) − z^(k), x⟩ + (1/τ) d_p(x, x^(k)) }. (69a)

The difference between (69) and (44) is the additional term τ(∇h(x^(k)) − ∇h(x^(k+1))), the same as the difference between (23) and (59). When the Euclidean proximal operator is used, (69) reduces to the proximal gradient method. However, the new algorithm (69) does not seem to be equivalent to the Bregman proximal gradient algorithm, due to the lack of a Moreau decomposition in the generalized case. Nevertheless, the new algorithm (69) may still be interesting in its own right, especially when the generalized proximal operator of g* is easy to compute while the (Euclidean or generalized) proximal operator of g is computationally expensive. Finally, setting A = I (and σ = 1/τ) in Bregman PD3O (59) gives a Bregman Davis-Yin algorithm.

Numerical experiment

In this section we evaluate the performance of the Bregman primal Condat-Vũ algorithm (23), the Bregman dual Condat-Vũ algorithm with line search (48), and Bregman PD3O (59). The main goal of the example is to validate and illustrate the difference in the stepsize conditions (60), and the usefulness of the line search procedure. We consider the convex optimization problem

minimize ψ(x) = λ‖Ax‖₁ + (1/2)‖Cx − b‖² subject to 1ᵀx = 1, x ≥ 0, (70)

where x ∈ Rⁿ is the optimization variable, C ∈ R^{m×n}, and A ∈ R^{(n−1)×n} is the difference matrix defined in (71). This problem is of the form (1), where δ_H is the indicator function of the hyperplane H = {x ∈ Rⁿ | 1ᵀx = 1}. We use the relative entropy distance in the primal space. This distance is 1-strongly convex with respect to the ℓ₁-norm [5] (and also the ℓ₂-norm). With the relative entropy distance, all the primal iterates x^(k) remain feasible. In the dual space we use the Euclidean distance. Thus, the matrix norm (27) in the stepsize condition (30) for the Bregman Condat-Vũ algorithms is the (1,2)-operator norm ‖A‖_{1,2} = max_i ‖a_i‖₂, where a_i is the ith column of A.
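These two norms are easy to check numerically for the difference matrix (71); the sketch below uses our indexing convention for A. For this matrix the (1,2)-norm equals √2, and the spectral norm approaches its upper bound 2 as n grows.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import svds

n = 1000
# First-order difference matrix: (A x)_i = x_{i+1} - x_i  (indexing assumed).
A = sparse.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1],
                 shape=(n - 1, n), format="csr")

# (1,2)-operator norm: the largest Euclidean column norm (sqrt(2) here).
col_norms = np.sqrt(np.asarray(A.multiply(A).sum(axis=0)).ravel())
print(col_norms.max())                                   # ~1.4142

# Spectral norm: bounded above by 2, and close to 2 for large n.
print(svds(A, k=1, return_singular_vectors=False)[0])    # ~2.0
```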
In the Bregman PD3O algorithm, we use the squared Euclidean distance d_p(x, y) = (1/2)‖x − y‖², and the matrix norm in the stepsize condition (63) is the spectral norm ‖A‖₂. For the difference matrix (71), ‖A‖₂ is bounded above by 2, and is very close to this upper bound for large n. The Lipschitz constant for h with respect to the ℓ₁-norm is the largest absolute value of the elements of CᵀC, i.e., L₁ = max_{i,j} |(CᵀC)_{ij}|. This value is used in the stepsize condition (30) for the Bregman Condat-Vũ algorithms. The Lipschitz constant with respect to the ℓ₂-norm is L₂, the largest eigenvalue of CᵀC, which is used in the stepsize condition (63) for Bregman PD3O (a computational sketch of both constants is given at the end of this section). In the example we use the exact values of L, and the (Euclidean) proximal operator of g* is the projection onto the infinity-norm ball {z : ‖z‖_∞ ≤ λ}. The experiment is carried out in Python 3.6 on a desktop with an Intel Core i5 2.4GHz CPU and 8GB RAM. We set m = 500 and n = 10,000. The elements of the matrix C ∈ R^{m×n} and the vector b ∈ R^m are randomly generated from independent standard Gaussian distributions. For the constant stepsize option, we choose the parameters in (72). These two choices, as well as the range of possible parameters, are illustrated in Figure 9. The two choices lie on the blue and red curves, respectively, and satisfy the requirement (60) with equality. For the line search algorithm, we set θ̄_k = 1.2 to encourage more aggressive updates, and choose the remaining parameters consistent with the choice in (72). We solve problem (70) using the Bregman primal Condat-Vũ algorithm (23), the Bregman dual Condat-Vũ algorithm with line search (48), and Bregman PD3O (59). Figure 10 reports the relative difference between the function values and the optimal value ψ⋆, which is computed via CVXPY [22]. Comparison between the Bregman primal Condat-Vũ algorithm and Bregman PD3O shows that Bregman PD3O converges faster. Figure 10 also compares the Bregman primal Condat-Vũ algorithm with constant stepsizes against the Bregman dual algorithm with line search. One can see clearly that the line search significantly improves convergence. On the other hand, the line search does not add much computational overhead, as the plots against CPU time and against the number of iterations are roughly identical. In these experiments, Bregman PD3O and the Bregman dual Condat-Vũ algorithm with line search have similar performance, without one algorithm being conclusively better than the other.
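As referenced above, the two Lipschitz constants are cheap to compute from CᵀC; a small sketch with our variable names (for very large n one would use a matrix-free power iteration instead of forming CᵀC explicitly):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 500, 2000                      # smaller n than the experiment, for speed
C = rng.standard_normal((m, n))
G = C.T @ C                           # Gram matrix of h(x) = 0.5*||Cx - b||^2

L1 = np.abs(G).max()                  # Lipschitz const. of grad h w.r.t. the l1-norm
L2 = np.linalg.eigvalsh(G)[-1]        # w.r.t. the l2-norm: largest eigenvalue of C^T C
print(L1, L2)                         # L1 is typically much smaller than L2 here
```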
Convergence of the iterates can also be obtained from existing results on the Bregman proximal point method [27, Theorem 3.1], [45, Theorem 3.2]. Here we provide a self-contained proof under additional assumptions about the primal and dual distance functions. The following two assumptions are common in the literature on Bregman distances [9, 14, 24, 27]. For fixed x and z, the sublevel sets {x′ | d_p(x, x′) ≤ γ} and {z′ | d_d(z, z′) ≤ γ} are closed. In other words, the distances d_p(x, x′) and d_d(z, z′) are closed functions of x′ and z′, respectively. Since a sum of closed functions is closed, the distance d(x, z; x′, z′) is a closed function of (x′, z′), for fixed (x, z). The constants L₁ and L₂ are used as described above, and the Bregman proximal operator of f has a closed-form solution.

Figure 7: Acceptable stepsizes in the Condat-Vũ algorithms and PD3O. We assume the same matrix norm ‖A‖ and Lipschitz constant L are used in the analysis of the two algorithms. The light gray region under the blue curve is defined by the inequality for the Condat-Vũ algorithms in (60). The region under the red curve shows the values allowed by the stepsize conditions for PD3O.

Figure 9: The blue and red curves show the boundaries of the stepsize regions for the Bregman Condat-Vũ algorithms and Bregman PD3O, respectively. The blue and red points indicate the parameters chosen in (72) (red for PD3O, blue for Condat-Vũ). In the Bregman dual Condat-Vũ algorithm with line search, the stepsizes are selected on the dashed straight line. The solid line segment shows the range of stepsizes that were selected, with dots indicating the largest, median, and smallest stepsizes.
11,149.6
2022-03-01T00:00:00.000
[ "Mathematics", "Computer Science" ]
Optimization and Standardization of Thermal Treatment as a Plasma Prefractionation Method for Proteomic Analysis

Prefractionation is a prerequisite step for deep plasma proteomics. Highly abundant proteins, particularly human serum albumin (HSA) and immunoglobulin G (IgG), typically interfere with investigation of proteins with lower abundance. A relatively simple preparation method based on high temperature can precipitate thermolabile proteins, providing a strategic window to access the thermostable plasma subproteome. This study aimed to optimize thermal treatment as a reliable prefractionation method and to compare it with two commercial kits, namely HSA and IgG immunodepletion (IMDP) and combinatorial peptide ligand libraries (CPLL), using untreated plasma as a control condition. By varying the temperature and the incubation period, the optimal condition was found to be treatment at 95°C for 20 min, which gave a recovery yield of soluble proteins of about 1%. Consistency and reproducibility of the thermal treatment-derived plasma subproteome were checked by two-dimensional electrophoresis. The coefficient of variation in protein spot numbers was less than 10% among three independent specimens. Depletion of highly abundant proteins by the thermal treatment was evaluated by immunoblotting against HSA and IgG in comparison with untreated plasma, IMDP, and CPLL. Multidimensional comparison based on 489 unique peptides derived from label-free quantitative mass spectrometry revealed that the thermal treatment, IMDP, and CPLL provided distinct sets of plasma subproteomes compared with untreated plasma, and these appeared to be complementary to each other. Comparing the characteristics of the three procedures suggested that thermal treatment was more cost-effective and less time-consuming than IMDP and CPLL. This study proposes the use of thermal treatment as a reliable and cost-effective method for plasma prefractionation, which provides benefits to large-scale proteomic projects and biomarker studies.

Introduction

Plasma is an important biological sample for clinical investigations and biomedical research. Plasma is relatively easy to access and can show significant changes in biological markers, which often relate to pathological conditions. However, the broad dynamic range of plasma proteins (>10 orders of magnitude) and the overwhelming presence of highly abundant proteins, particularly human serum albumin (HSA) and immunoglobulin G (IgG), which constitute more than 60-70% of whole plasma proteins, represent challenges for plasma proteomics [1,2]. Plasma prefractionation is therefore a prerequisite step to reduce plasma protein complexity and increase the chance of discovering clinically relevant biomarkers. General approaches in plasma prefractionation include immunodepletion, affinity enrichment, and fractionation [3]. Choosing the proper prefractionation method can improve the outcome of plasma proteomic projects [3]. To date, standard prefractionation methods such as HSA and IgG immunodepletion (IMDP) and combinatorial peptide ligand libraries (CPLL) are commercially available and very effective. However, disadvantages such as small sample loading capacity (which affects downstream analyses), the complexity of the procedure (which reduces sample throughput and productivity), or high unit cost (which burdens large-scale studies) are of concern.
In this context, development of a prefractionation method which is reproducible, time-saving, and cost-effective would be beneficial to large-scale proteomic studies and to the future development of clinical proteomic assays [3]. Thermal treatment has been used in biomedical research [5-10] and has occasionally been applied in proteomic studies [11,12]. Thermal treatment separates plasma proteins based on their physical properties under high temperature into two fractions, namely, a thermostable (TS) protein-soluble fraction and a thermolabile protein precipitate [3]. This process is also known as heat-induced gelation of plasma proteins [13,14]. Fourier transform Raman spectroscopy showed changes in the secondary structures of albumin and globulins, i.e., reduced α-helix content, disulfide bond interactions, aberrant exposure and burial of hydrophobic residues, together with formation of β-sheet induced by fibrinogens, as the main contributors to this heat-induced gelation process of plasma [15]. Thermal treatment has several characteristics useful for a prefractionation method in large-scale proteomic studies, since it has a high sample loading capacity, is simple to perform at relatively low cost, and can likely be readily automated. However, this fractionation method has not been well standardized or shown to provide highly reproducible results. These concerns need to be addressed to demonstrate the value of thermal prefractionation for application in plasma proteomic projects. This study aimed to optimize and standardize the thermal treatment for plasma prefractionation by varying the temperature and incubation period, measuring the recovery yield, and evaluating the reproducibility of the TS plasma subproteome by two-dimensional electrophoresis (2-DE). The optimized thermal treatment was then compared against two standard methods, i.e., IMDP and CPLL, using Western blot analysis and label-free quantitative mass spectrometry, with untreated plasma serving as the control condition. Finally, the characteristics of the thermal treatment, IMDP, and CPLL procedures were compared. This study provides evidence to support future application of thermal treatment in large-scale plasma proteomic projects.

Plasma Samples. Human blood plasma samples were collected from 3 healthy volunteers (two males and one female, age 34.0±6.1 years) [16]. Blood was drawn into a 3-ml EDTA blood collection tube using a 21-gauge needle and stored at 4°C. Within 4 h after blood collection, plasma was collected by centrifugation using a swinging bucket rotor at 1,500 × g for 10 min at 4°C. The obtained plasma was aliquoted and kept at -80°C until use.

Thermal Treatment. Three hundred microliters of plasma were transferred into a 1.5-ml polypropylene conical microcentrifuge tube (Eppendorf #022364111; Eppendorf North America, Hauppauge, NY) and incubated at 65, 75, 85, and 95°C for 20 min in an Eppendorf ThermoMixer-C incubator (Eppendorf AG, Hamburg, Germany). After the optimum temperature was obtained, the incubation time was varied (5, 10, 20, and 30 min) at the fixed optimum temperature. After thermal treatment, the sample was immediately placed on ice for 5 min to allow the denatured plasma proteins to aggregate and then subjected to centrifugation at 12,000 × g for 10 min. The supernatant containing TS proteins (the TS soluble fraction) was collected. Protein concentration and recovery yield were estimated by the Bradford assay.
Ten micrograms of proteins from each condition were resolved on 12.5% SDS-PAGE and visualized using blue silver CBB-G250 staining [4].

Immunodepletion (IMDP). Depletion of albumin and immunoglobulin, the two most abundant plasma proteins, was performed using the Pierce Top2 abundant protein depletion spin column (#85161, Thermo Fisher Scientific Inc., IL, USA) according to the manufacturer's instructions. Briefly, 10 µl of plasma sample was directly added to the immunodepletion spin column containing 62% slurry in 10 mM PBS, 0.15 M NaCl, 0.02% sodium azide, pH 7.4, and mixed gently. The mixture was then incubated for 30 min at room temperature with gentle end-over-end mixing every 5 min. The unbound fraction was harvested by centrifugation at 1,000 × g for 2 min and kept at -80°C until further analysis.

Combinatorial Peptide Ligand Libraries (CPLL). Enrichment of low-abundance plasma proteins using the CPLL column (ProteoMiner; #163-3006, Bio-Rad Laboratories, Inc., CA, USA) was performed according to the manufacturer's instructions. Briefly, the CPLL column was prepared by adding 200 µl wash buffer (Bio-Rad) and rotating the column several times over a 5-min period. The wash buffer was removed by centrifugation at 1,000 × g for 1 min. This step was repeated once. Thereafter, 200 µl of plasma was added to the column, followed by incubation for 2 h at room temperature with gentle mixing. The unbound proteins were then removed by centrifugation at 1,000 × g for 1 min, and the column was washed twice using 200 µl wash buffer (Bio-Rad) and additionally washed with 200 µl deionized water to remove unbound proteins and salt contamination. The bound proteins were eluted by adding 20 µl of elution reagent (Bio-Rad) followed by incubation for 15 min with intermittent gentle mixing. The eluted proteins were collected by centrifugation at 1,000 × g for 30-60 s. This elution step was repeated twice. The eluate was kept at -80°C until further analysis.

2-DE and Protein Spot Analysis. Fifty micrograms of proteins from the untreated plasma and the thermal treatment conditions (3 individuals per condition) were mixed with a rehydration buffer (7 M urea, 2 M thiourea, 4% CHAPS, 0.5% (v/v) IPG buffer pH 3-10, 60 mM DTT, and 40 mM Tris) and rehydrated into a 7-cm IPG strip (pH 3-10 nonlinear and/or pH 4-7 linear; GE Healthcare, Little Chalfont, UK) for 16 h at room temperature. Isoelectric focusing (IEF) was performed on the Ettan IPGphor III IEF System (GE Healthcare) at 20°C using a stepwise voltage increase to reach 9,000 Vh. The focused IPG strip was equilibrated with an equilibration buffer (6 M urea, 130 mM DTT, 112 mM Tris-HCl pH 8.8, 4% SDS, 30% glycerol, and 0.002% bromophenol blue) for 15 min at room temperature with agitation, followed by another 15-min equilibration in the same solution with DTT replaced by 135 mM iodoacetamide. The proteins on the equilibrated strip were separated on 12.5% SDS-PAGE using an SE260 mini-Vertical Electrophoresis Unit (GE Healthcare) at 150 V for 2 h. Protein spots on the gel were visualized by blue silver CBB-G250 staining [4]. The stained gel was captured by ImageScanner III (GE Healthcare). Reproducibility of the protein spot positions, based on their pI and molecular weight, was automatically assessed on the 2-DE gels using ImageMaster 2D Platinum software (GE Healthcare), including the protein profile pattern, the protein spot resolution, the total number of detected spots, and the normalized spot intensity [17].
Parameters used for spot detection included a minimal area of 10 pixels, a smooth factor of 2.0, and a saliency of 2.0.

Western Blot Analysis. Proteins (10 µg/lane) were resolved on 12.5% SDS-PAGE at a constant 150 V for 2 h. The separated proteins were transferred onto PVDF membranes (Immobilon-P; Millipore, MA, USA) using a Trans-Blot SD semidry transfer cell (Bio-Rad). The membranes were blocked in 5% skim milk in PBS for 1 h at room temperature with agitation. After washing, the membranes were probed with antibodies against HSA (ab10241; Abcam Inc., Cambridge, MA) or IgG heavy chain (IgG HC) (P0124, DakoCytomation, Denmark) at a dilution of 1:1000 in 1% BSA/PBS at 4°C overnight. The membranes were washed to remove excess antibodies and then incubated with an HRP-conjugated secondary antibody (at a dilution of 1:2000 in 1% BSA/PBS) (DakoCytomation, Denmark) at room temperature for 1 h. After washing, the membranes were incubated with enhanced chemiluminescence (ECL) reagent (GE Healthcare), followed by detection with an ImageQuant LAS 4000 (GE Healthcare).

In-Solution Tryptic Digestion. Equal protein amounts (20 µg each) from the 4 conditions, i.e., untreated plasma, thermal treatment, IMDP, and CPLL, were digested following modified filter-aided sample preparation (FASP) [18]. Briefly, the plasma proteins in a 3 kDa cut-off spin filter were reduced with 5 mM DTT in 8 M urea/0.1 M Tris-HCl, pH 8.5 at 37°C for 1 h in the dark on an Eppendorf ThermoMixer-C, and subsequently concentrated by centrifugation at 14,000 rpm for 10 min. Then 100 µl of 15 mM IAA in 8 M urea/0.1 M Tris-HCl, pH 8.5 was added to the filter containing the reduced proteins, followed by incubation for 30 min at room temperature in the dark with agitation. To concentrate the sample and remove salt, the filter was centrifuged at 14,000 rpm for 10 min and 200 µl of 50 mM NH4HCO3 solution was added. This step was repeated twice. Proteins were digested with trypsin (Promega Corp., WI, USA) at a final ratio of 1:50 (w/w) at 37°C for 16 h. The trypsin activity was stopped by adding 5% formic acid in 50% ACN and incubating at 37°C for 20 min. The spin filter containing peptides was centrifuged at 14,000 rpm for 30 min to collect the peptides that passed through the 3 kDa cut-off filter. The peptides were dried in a SpeedVac concentrator. The dried peptides were resuspended in 10 µl of 0.1% formic acid.

Label-Free Quantitative Mass Spectrometry. Five microliters of the peptide solution were injected into an Agilent 6530 Accurate-Mass Quadrupole-Time-Of-Flight (QTOF) mass spectrometer in standard 4 GHz high resolution mode, coupled to an Agilent 1260 Infinity liquid chromatography system with a precolumn containing Zorbax 300SB-C18 (5 µm, 5 × 0.3 mm) and an analytical column containing Zorbax 300SB-C18 (3.5 µm, 75 µm × 150 mm), using a gradient of solvent B (acetonitrile with 0.1% formic acid) in solvent A (water with 0.1% formic acid). The injection condition was 5% solvent B, progressing to 40% solvent B over 100 min with a linear gradient and subsequently to 80% solvent B over 10 min at a flow rate of 0.5 µl/min. The Agilent QTOF instrument was operated via the MassHunter workstation data acquisition software and the parameters were set as follows: MS range of 290-3000 m/z, MS/MS range of 50-1700 m/z, 20 maximum precursors per cycle, capillary voltage = 2000 V, fragmentor = 175 V, skimmer = 65 V, OCT 1 RF Vpp = 750 V, gas temperature 300°C, and drying gas 10 L/min.
Raw data of all samples were searched with Spectrum Mill software against the Swiss-Prot database version 20161213, Homo sapiens taxonomy (20,130 sequences), with peptide mass tolerance ±100 ppm, MS/MS fragment mass tolerance ±0.4 Da, monoisotopic masses, charge 2+ to 7+, and 2 missed cleavages for trypsin digestion. Peptides were identified using a score threshold >9 and a false discovery rate (FDR) <1%, and quantified by MS1-based intensity. Only peptides that were present in at least 2 out of 3 independent samples for a given condition and also passed a filter of ≥2 unique peptides per protein [19,20] were utilized for comparative purposes.

Data and Statistical Analysis. Data and statistical analyses were performed with Excel and the R package MetaboAnalystR [21]. The MS1 intensity of each unique peptide was normalized against the total ion intensity of its LC-MS injection. Missing values were not imputed and were set to zero by default. Expression data were preprocessed by log2 transformation and autoscaling. The self-organized heatmap was based on Pearson distance and average linkage. The Venn diagram was generated by InteractiVenn [22]. A correlation matrix was plotted using Pearson correlation. Principal component analysis was performed to visualize the directions of sample groups based on the mass spectrometric data. Physical and chemical properties, including instability index, aliphatic index, and grand average of hydropathicity (GRAVY), were computed by the ProtParam tool (https://web.expasy.org/protparam). Data are presented as mean, standard error of the mean (SEM), and coefficient of variation (CV) across the independent experiments. A P value < 0.05 was considered statistically significant.
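The preprocessing steps described here are straightforward to script. The sketch below (our naming; pandas assumed, with the per-injection peptide intensity sum used as a stand-in for the injection's total ion intensity) reproduces the normalization, log2 transform, and autoscaling, plus the CV metric used later for the 2-DE spot counts.

```python
import numpy as np
import pandas as pd

def preprocess(intensity: pd.DataFrame) -> pd.DataFrame:
    """Rows = unique peptides, columns = LC-MS injections.
    1) normalize each injection to its total intensity,
    2) log2 transform (zeros treated as missing),
    3) autoscale each peptide (mean-center, divide by SD)."""
    norm = intensity.div(intensity.sum(axis=0), axis=1)
    logged = np.log2(norm.where(norm > 0))
    return logged.sub(logged.mean(axis=1), axis=0).div(logged.std(axis=1), axis=0)

def inter_cv(spot_counts) -> float:
    """Coefficient of variation (%) across replicate 2-DE spot counts."""
    c = np.asarray(spot_counts, dtype=float)
    return 100.0 * c.std(ddof=1) / c.mean()
```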
The Optimum Thermal Treatment Is 95°C for 20 min. A main challenge for the optimization of thermal treatment is that differences in applied temperature and incubation time can yield various outcomes. Extreme heating or very long incubation may destroy all plasma proteins, whereas mild heating or too short an incubation may not produce a stable aggregate of denatured proteins. Since the goal of this study was to apply thermal treatment to plasma proteomics, both temperature and incubation period needed to be optimized to deplete highly abundant plasma proteins in a reproducible manner. Since the most abundant protein, HSA, constitutes over half of the proteins in plasma and can be easily detected as a 69-kDa protein band on SDS-PAGE, the optimal conditions for thermal treatment were screened by HSA depletion. The effects of different temperatures and incubation times, and the optimal conditions for thermal treatment, are shown in Figure 1. For the varied temperature-fixed incubation time conditions (65 to 95°C; 20 min), the prominent HSA band was markedly decreased by thermal treatment at 95°C compared with the lower temperatures (Figure 1(a), left panel). For variation in incubation time (5 to 30 min) at a fixed temperature (95°C), the results showed that HSA depletion reached a steady state after 20-30 min (Figure 1(a), right panel). Figure 1(b) showed that the protein band pattern of the TS soluble fraction was unique, whereas the untreated plasma and the thermolabile protein precipitates showed a similar pattern. This result suggests that thermal treatment extracts a thermostable subproteome from whole plasma, leaving most of the highly abundant proteins, especially HSA, in the protein precipitate. Accordingly, the optimal condition for thermal treatment, 95°C for 20 min, was applied for further analyses.

2-DE Showed Consistency and Reproducibility of the Thermostable Plasma Subproteome. 2-DE was performed to evaluate the consistency and reproducibility of the TS soluble fraction after thermal treatment, since this technique allows sensitive visualization for detecting changes in the proteome profile. Plasma samples derived from 3 individuals were prepared using the optimal thermal treatment condition. The recovery yield of thermal treatment was approximately 1% (Supplementary Table 1). This information is useful for downstream analyses, since the amount of TS protein required can be approximated from the initial plasma volume. Next, the 2-DE analysis of the TS soluble fraction, as compared with untreated plasma, is shown in Figure 2. The protein spot patterns of untreated plasma from the 3 subjects were almost identical on visual analysis, and likewise for the TS soluble fractions after thermal treatment. In the untreated plasma, an interindividual variation was observed in subject 3 as a faint protein spot at approximately 14-kDa molecular weight; nonetheless, the confidence level was uncertain due to low expression. After thermal treatment, this interindividually variable protein spot was clearly enriched in the corresponding TS soluble fraction of subject 3 (Figure 2), which verifies the initial observation in untreated plasma. Furthermore, the numbers of protein spots on the 2-DE gels were quantitatively counted with the ImageMaster 2D Platinum program to determine reproducibility. As a result, untreated plasma and TS soluble fractions showed inter-sample coefficients of variation (inter-CV) of 2.3% and 4.1%, respectively (details in Supplementary Table 2). These qualitative and quantitative findings demonstrated the consistency and reproducibility of the optimized thermal treatment and supported further comparison with standard methods.

Comparison with Two Standard Methods Confirmed the Applicability of Thermal Treatment. The IMDP and CPLL methods are frequently applied in plasma proteomic projects. This study therefore adopted Pierce Top2 abundant protein depletion and ProteoMiner as representatives of commercially available IMDP and CPLL kits to benchmark the thermal treatment. Comparison at the protein level was performed by SDS-PAGE and Western blotting for a robust evaluation of the different prefractionation methods, while quantitative analysis at the peptide level was studied by label-free quantitative mass spectrometry. Figure 3(a) demonstrates the protein band patterns of the untreated plasma samples and their corresponding protein fractions after thermal treatment, IMDP, and CPLL in nine technical replicates, corresponding to three biological specimens. As expected, the protein band pattern was consistent within the same group, whereas distinct patterns were found with different prefractionation methods as compared with untreated plasma. This result suggests that thermal treatment, IMDP, and CPLL yield different plasma subproteomes based on their mechanisms of protein isolation. The 14-kDa protein, the interindividually variable protein initially observed in Figure 2, was again detectable after thermal treatment, and probably after IMDP, of subject 3 as a 14-kDa protein band, the so-called "Band-A" (Figure 3(a)). It was not surprising that Band-A was not detected in the untreated plasma, since the protein loading amount for SDS-PAGE (10 µg/sample) was less than for 2-DE (50 µg/sample). The isoelectric focusing step of 2-DE can also increase the sensitivity of protein spot detection.
The presence of Band-A in subject 3 after IMDP supported the reliability of the thermal treatment. Further assessment was carried out by Western blot analysis. Figure 3(b) shows the ability of the various methods to deplete HSA and IgG, the two most abundant plasma proteins. Compared with untreated plasma, the three prefractionation procedures shared common ground, even though they exhibited different magnitudes of HSA and IgG depletion. Based on this promising result, thermal treatment was then further benchmarked against IMDP and CPLL by mass spectrometric analysis. After in-solution digestion, tryptic peptides of the untreated plasma, thermal treatment, IMDP, and CPLL conditions were identified and quantified by label-free quantitative mass spectrometry. In total, 963 unique peptides, corresponding to 213 unique proteins, were identified at the peptide score threshold >9 and peptide FDR <1% (details in Supplementary Table 3). Of these, 489 unique peptides, which were present in at least 2 out of 3 samples for a given condition and also passed a filter of ≥2 identified peptides per protein [19,20] (details in Supplementary Table 4), were considered high-confidence and suitable for comparison of the label-free quantitative data. A multidimensional comparison was then performed using a data-driven approach. Relative intensities of the 489 unique peptides (Figure 4(a)) and their corresponding 58 unique proteins (Figure 4(b)) are presented in the self-clustered heatmaps. Expression profiles with unsupervised clustering clearly distinguished untreated plasma, thermal treatment, IMDP, and CPLL, consistent with the previous finding (Figure 3(a)). The Venn diagram demonstrated that only 15 peptides were shared among all groups, whereas up to 49-93 peptides were uniquely present with each prefractionation method (Figure 4(c)). Next, the expression profiles of the three biological samples within the same group showed good correlation (Figure 4(d)), while the correlation coefficients were very low between different methods. Furthermore, principal component analysis revealed four distinct directions belonging to the untreated plasma, thermal treatment, IMDP, and CPLL datasets (Figure 4(e)). In addition, the numbers of peptide spectrum matches (#PSMs) of serum albumin and immunoglobulins (Supplementary Figure 2) resembled the Western blot analysis shown in Figure 3(b). Moreover, the physical and chemical properties of 44 proteins identified in the untreated plasma and thermal treatment conditions were compared by computation of molecular weight (MW), isoelectric point (pI), instability index, aliphatic index, and grand average of hydropathicity (GRAVY). Note that the aliphatic index is a positive factor for the increased thermostability of globular proteins [23]. The predicted protein properties showed that proteins in the thermal treatment condition had lower molecular weight, lower pI, lower instability index, higher aliphatic index, and higher GRAVY index than the untreated plasma proteins (Supplementary Figure 3 and Supplementary Table 6). These computational metrics convey a clear effect of the thermal treatment on various physicochemical properties of the recovered proteins. Taken together, this multidimensional comparison supported the conclusion that thermal treatment yields a distinct plasma subproteome which does not replace, but is instead complementary to, IMDP and CPLL.
Figure 3: (a) SDS-PAGE of untreated plasma and the thermal treatment, CPLL, and IMDP fractions from subjects S1-S3 showed consistency and reproducibility of protein band patterns with thermal treatment as well as with untreated plasma, IMDP, and CPLL. Protein bands were visualized by blue silver CBB-G250 staining [4]. The interindividually variable protein, the so-called Band-A, is labelled with a red square. (b) Western blot analysis (10 µg/lane) demonstrated that the three methods could deplete HSA and IgG, the most abundant plasma proteins, as compared with untreated plasma, and thus met a required characteristic of a plasma prefractionation method. Full-length immunoblots are provided in Supplementary Figure 1. Plasma samples from three subjects (S1-S3) were used for both experiments, and three technical replicates per specimen were also performed for SDS-PAGE.

Thermal Treatment Has a Strategic Benefit for Large-Scale Plasma Proteomic Studies. Ultimately, the selection of a prefractionation method should consider the specific needs of each project [3]. There is no perfect method, since different approaches have unique characteristics with both benefits and disadvantages. From this point of view, the procedural characteristics of thermal treatment, IMDP, and CPLL are listed in the following section and discussed with regard to sample loading capacity and recovery yield, complexity of the procedure, and unit cost. The source information for IMDP and CPLL was based on the kit instructions and the vendor websites, as last checked on January 5, 2019. The sample loading capacity and recovery yield of a prefractionation method affect the design of downstream analyses. In this regard, thermal treatment, IMDP, and CPLL had sample loading capacities of 300 µl plasma (∼20 mg protein), 10 µl plasma (∼0.8 mg), and ≥0.01 g protein, respectively, with corresponding recovery yields of 1%, 10.0%, and 1.3%. The recovery yields of IMDP and CPLL in this study were also consistent with a previous report (10.9% and 1.1%, respectively) [33], indicating the reproducibility of the commercial methods. IMDP exhibited a higher recovery yield than the others, but this is probably due to saturation of the immunoaffinity beads [33]. Considering scaling up, thermal treatment has apparently no limit on sample loading capacity, and a starting plasma volume of 300-1000 µl should return a sufficient protein amount (200-650 µg at a 1% yield) for most downstream analyses. Also, this scaling up is associated with a minimal cost burden, since no specific material is required for thermal treatment. The complexity of a procedure can reduce sample throughput and productivity. This characteristic can be objectively measured by the number of procedural steps and the working time [34]. Thermal treatment and IMDP consist of two main steps (sample loading and centrifugation), whereas CPLL needs four (column pretreatment, sample loading, centrifugation, and protein elution). The working times of thermal treatment and IMDP are also similar (35 min and 45 min, respectively), while CPLL requires at least 150 min to complete the process. The unit cost of a prefractionation method can be a burden for large-scale projects. In our setting, thermal treatment has an actual cost of 0.1 USD/sample (as estimated by the cost of electricity for the heating process).
The unit cost of the Pierce Top2 abundant protein depletion spin column (the IMDP representative) is 30.6 USD/sample (184 USD per 6 columns; #85161, www.thermofisher.com) and that of the ProteoMiner Protein Enrichment Small-Capacity kit (the CPLL representative) is 72.7 USD/sample (727 USD per 10 columns; #1633006, www.bio-rad.com). Although this information cannot cover all prefractionation kits available on the market, it is clear that thermal treatment is cost-effective and has a lower workload than the comparators. Accordingly, it is attractive to apply thermal treatment in future plasma proteomic studies involving large populations and multicenter cohorts.

Conclusions

In conclusion, this study demonstrated that the optimized and standardized thermal treatment is a reliable and reproducible plasma prefractionation method for proteomic analysis. Information on its potential applications supports the role of thermal treatment in large-scale proteomic studies involving biomarker validation and cost-effective proteomic biomarker assays.

Data Availability. The proteomic data used to support the findings of this study are included within the Supplementary Information.

Conflicts of Interest. The authors declare no conflicts of interest.

Supplementary Table 1: recovery yield of plasma prefractionation by thermal treatment. Supplementary Table 2: coefficient of variation (CV) of the numbers of detected protein spots on 2-DE gels (n=3 subjects). Supplementary Table 3: total of 963 unique peptides identified at peptide score >9 and peptide FDR <1% by LC-QTOF. Supplementary Table 4: detailed information on the 489 unique peptides corresponding to 58 unique proteins used for comparative purposes. Supplementary Table 5: detailed information regarding the unique peptides present in each compartment of the Venn diagram. Supplementary Table 6: prediction of physical and chemical properties of 44 identified proteins in untreated and thermal conditions by the ProtParam tool (https://web.expasy.org/protparam). Supplementary Figure 1: full-length immunoblot results of HSA (69 kDa) and IgG HC (55 kDa) corresponding to the group of cropped blot images shown in Figure 3.
6,221.6
2019-04-30T00:00:00.000
[ "Medicine", "Chemistry" ]
Effects of Internal and External Imbalances and the Role of Relative Prices on Economic Growth: Evidence From Turkey

The balance of payments constrained growth model holds that if a country has a chronic external deficit, its economic growth can be constrained. The initial model is extended by taking into account the effects of capital flows, external debt sustainability, interest payments, budget deficit or public debt, the simultaneous effect of internal and external imbalances, and the role of relative prices. We further incorporate Turkey's high intensity of imports in the aggregate demand components and estimate the model with a system estimator. The new version of the model significantly improves the explanation of growth in Turkey. Our results reveal that economic growth in Turkey is constrained by internal and external imbalances, and that relative prices play a significant role. Policies aimed at changing the structure of imports and exports are the most effective for achieving higher growth. It is also shown that economic growth in Turkey depends strongly on external demand, while strong depreciation of the domestic currency also acts as a stimulus to growth.

Introduction

Thirlwall (1979) developed a long-run growth model consistent with balance of payments equilibrium. The model implies that no country can grow faster than the rate consistent with balance of payments equilibrium. The balance-of-payments-consistent growth rate is obtained as the ratio of the income elasticity of demand for exports to that for imports, times the exogenous growth rate of world income. When countries do not respect this simple rule, sooner or later they will face serious problems in financing external debt and the current account deficit, and in the absence of capital flows, it will be necessary to contract domestic demand and hence growth. The law implies that growth is constrained by world demand, and if available external resources cannot finance the current account deficit, this can be a serious obstacle to higher economic growth. Furthermore, Thirlwall's model implies that income, not relative prices, plays the significant role in adjusting the economy back to equilibrium. The model rests on two controversial assumptions: that the real exchange rate or relative prices are constant in the long run, and that the balance of payments is initially in equilibrium. Recently, Soukiazis et al. (2013, 2014) extended the model further by considering the role of imbalances in the public sector as an additional constraint on economic growth and by allowing relative prices to play a role in adjusting external imbalances. Imported inputs are important for domestic production, exports, and economic growth in developing countries. Use of imported inputs in manufactured products improves product quality due to the use of more sophisticated inputs in the production process. This can lead to the introduction of higher quality products for the export market (Kugler & Verhoogen, 2009). Also, the use of imported inputs in the production process might increase productivity in the manufacturing sector (Halpern et al., 2015; Khandelwal & Topalova, 2011). Furthermore, low-cost imported inputs may increase export revenue (Bas & Strauss-Kahn, 2014). All of these factors might increase the profitability of firms, which can increase existing exports and allow firms to bear the fixed cost of accessing new product markets (Edwards et al., 2018).
Increasing use of imported inputs might increase exports and contribute to economic growth, but it also increases cross-country import dependence (Nordas, 2007). Imported inputs for export have the potential to be a main channel for growth if they enhance the capital stock and productive capacity. A country which can expand its economic activity in high-technology sectors, while avoiding the expansion of medium- and low-technology sectors, can attain high export growth (Saygılı & Saygılı, 2011). However, Moreno-Brid (2002), Pacheco-Lopez and Thirlwall (2004), and Blecker and Ibarra (2013) argue that reliance on a strategy based on the foreign content of exports may be harmful to growth. Such a strategy may result in an increase in the income elasticity of demand for imports without a compensating effect on the income elasticity of exports. In this case, a country that relies on imported inputs may experience lower growth rates consistent with the balance of payments constrained growth (BOPCG) rate. With the trade reforms of the 1980s and the 1990s, Turkey increased its exports significantly; however, this also increased the import content of production and exports. In line with Muscatelli et al. (1995) and Saygılı and Saygılı (2011), we extend the Soukiazis et al. (2014) model further by incorporating Turkey's high intensity of imports in the aggregate demand components and the production process; we therefore extend the investment and export functions to take into account the effects of imports. The remaining part of the article is organized as follows: In the "Internal and External Imbalances, and Growth in Turkey" section, we briefly explain recent developments in the Turkish economy with special emphasis on the link between the current account deficit, fiscal imbalances, and economic growth. The "Related Literature" section gives a brief survey of Thirlwall's model and presents an extended growth model that takes into account internal and external constraints and relaxes the assumption that relative prices are neutral. In "The Balance of Payments Constrained Growth Model With Internal and External Imbalances, Non-Neutral Relative Prices, and With Foreign Content" section, we test the extended model on the Turkish economy to identify the main determinants of its growth performance within a demand-oriented growth model. The "Scenario Analysis" section provides a scenario analysis focusing on the factors that could foster or harm economic growth in Turkey. The "Conclusion and Policy Implications" section presents the main conclusions and policy recommendations that could help the country improve its growth performance.

Internal and External Imbalances, and Growth in Turkey

Starting from 1980, the momentum of the reform process brought a decade of rapid economic growth. However, Turkish economic growth was very volatile in the 1990s; the average growth rate was 3.9%. During this period, the Turkish economy functioned under a mist of vulnerability, distressed by chronically high inflation and persistent fiscal imbalances. Periods of economic expansion alternated with periods of rapid decline (see Figure 1). After a severe recession in 1994, the economy went through a boom period of above-trend growth between 1995 and 1997. The real depreciation of the Turkish lira in 1994 led to strong export performance and contributed to the strong economic growth. In 1998, the economy was severely hit by the Russian crisis, and output shrank by over 6% in 1999.
Increasing macroeconomic uncertainty limited access to external financing; the government was forced to increase its reliance on domestic financing and monetization. This increased inflation and led to rapid accumulation of the domestic debt to gross domestic product (GDP) ratio (see Figure 1). The Turkish economy was also vulnerable to the emerging market crises: the turmoil in Asia and the Russian default affected the economy. In the second half of 1998, Turkey faced massive capital outflows, rising real interest rates, and declining economic activity. Policies clearly could not continue on this path for long; following elections in April 1999, the new government introduced a new economic program focusing on fiscal adjustment and deep structural reform. The government signed a stand-by agreement with the International Monetary Fund (IMF) in late 1999 to stabilize and reform the economy (IMF, 2001; Onis, 2003). Major institutional and structural reforms introduced after the 2001 crisis helped overcome the earlier "boom-and-bust" cycles of the 1990s, but external deficits expanded significantly. The combination of strong growth and external imbalances has characterized Turkey's growth pattern since the beginning of the 2000s. The development of Turkey's current account deficit since the beginning of the 1990s is plotted in Figure 2. Turkey's current account deficit hovered around 1% of GDP during much of the 1990s. A more sizable external imbalance started to emerge only after the financial crisis of 2001, when Turkey experienced strong growth thanks to important reforms that strengthened its macroeconomic policy framework and financial sector. The current account deficit to GDP ratio steadily increased between 2003 and 2007. The impact of the global crisis on the Turkish economy was severe: the growth rate was significantly negative, and the current account deficit decreased sharply in line with the economic slowdown during 2008 and 2009. In 2010, domestic demand recovered swiftly and the economy returned to rapid growth, reaching 9.2% in 2010 and 8.5% in 2011, and the current account deficit widened again relative to its pre-crisis levels. The deterioration of the current account worsened worryingly in the first quarter of 2011, reaching an unsustainable level. After a vigorous recovery from the global financial crisis, Turkey went into a period of below-potential growth in recent years. Consumer price inflation has remained far above the inflation target, and the current account deficit is much too high for comfort. Economic growth was encouraging given the very adverse circumstances of 2015 and 2016, but the current account deficit to GDP ratio in the same period was still high. Turkish economic growth is highly dependent on domestic demand and capital inflows, which mainly come in the form of short-term debt-creating flows. Spells of current account deterioration have been characterized by strong credit-financed domestic demand-driven growth, sizable capital inflows, real exchange rate appreciation, and increasing import penetration. In summary, Turkish growth was constrained mainly by fiscal imbalances during the 1990s and by external imbalances after 2000. Considering all these developments, we aim to use an alternative approach that takes into account the effects of internal and external imbalances on the economic performance of Turkey. Thirlwall (1979) developed the BOPCG theory by focusing on the relative growth rate adjustments required to balance trade at given relative prices.
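In symbols, the rule underlying the theory is the familiar statement of Thirlwall's law (a standard formulation, given here for reference):

```latex
\dot{y}_{BP} \;=\; \frac{\varepsilon\,\dot{y}^{*}}{\pi} \;=\; \frac{\dot{x}}{\pi},
```

where ε and π are the income elasticities of demand for exports and imports, ẏ* is world income growth, and ẋ is export growth; no country can grow faster than ẏ_BP for long without accumulating external imbalances.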
The model is designed to explain long-run differences in growth performance. Since then, several contributions have been made. Thirlwall (2012) comprehensively reviews recent theoretical and empirical contributions to BOPCG models.

Related Literature

The original formulation of Thirlwall's law assumes that imports are financed only by export revenues. However, if a country can attract capital inflows, this assumption is too restrictive. Thus, Thirlwall and Hussain (1982) revised Thirlwall's law by taking capital flows into account. In the modified model, changes in export prices also affect the economic growth of countries by way of the real value of net capital inflows. This extended model allows for a continuously rising ratio of net borrowing; therefore, a country's level of indebtedness relative to GDP can increase continuously. McCombie and Thirlwall (1997) modified the Thirlwall and Hussain (1982) model to ensure that long-run economic growth is consistent with a sustainable path of foreign borrowing. The theoretical result of the modified model showed that capital inflows cannot permit a country to increase its growth rate above that given by Thirlwall's law for a long period. Elliot and Rhodd (1999) further modified the Thirlwall and Hussain (1982) model by including the effect of debt servicing. Moreno-Brid (1998-1999) also extended the Thirlwall and Hussain (1982) BOPCG model by incorporating the constraint that the current account deficit to GDP ratio is constant in long-run equilibrium. Barbosa-Filho (2001) extends the balance of payments (BP)-constrained growth model to allow for a sustainable accumulation of foreign debt, taking into account both the potential instability of such a constraint and the impact of interest payments on debt accumulation. Moreno-Brid (2003) develops a version of the BOPCG model, adapted from Thirlwall and Hussain (1982), that explicitly considers interest payments and generates a sustainable path of external debt accumulation. In many BOPCG models there are no relative price effects. However, according to some empirical studies, the impact of relative prices or the real exchange rate on economic growth is mixed. Prices may change in response to productivity growth, and that change in relative prices may have a significant impact on export growth. Recently, Soukiazis et al. (2013, 2014) extended the BOPCG model further by considering the role of imbalances in the public sector as an additional constraint on economic growth and by allowing relative prices to be non-neutral in adjusting external imbalances. Similar models have been applied to Italy (Soukiazis et al., 2014), Portugal (Soukiazis, Cerqueira, & Antunes, 2013), Romania (Soukiazis et al., 2015), Slovakia (Soukiazis, Muchova, & Lisy, 2013), Greece (Soukiazis et al., 2018), and Nigeria (Panshak et al., 2019), and the results revealed that the framework is very coherent in identifying the most important determinants of growth. Thirlwall's original model was previously tested for Turkey by Halicioglu (2012), who estimated the export and import functions for the 1980-2008 period. His results show that the average predicted growth rate is close to the average actual growth rate. However, he also finds that during the high inflationary period (1992-2003), the actual and predicted growth rates are significantly different. Gokce and Cankal (2013) also tested the model indirectly for the 1968-2011 period.
They found a cointegrating relationship between output and exports, and claimed that Thirlwall's law is supported by the data. The vast majority of studies support the BOPCG hypothesis. The Import Demand Function Import demand is explained by the components of domestic income, in contrast to the traditional model that relies on real aggregate GDP. Furthermore, it is assumed that relative prices play a significant role in determining import demand and can therefore affect long-run economic growth. The import demand function under these assumptions is specified in growth rates as follows (a hedged reconstruction of Equations 1-4 appears at the end of this passage): the growth rate of import demand ṁ is a function of the growth rates of consumption ċ, spending by the government sector ġ, export-sector inflation ṗ, as well as changes in the exchange rate ė over time. ε_mp < 0 is the relative price sensitivity of the demand for imports; hence, devaluation can curtail import demand by making imports costlier in the home market. The Export Demand Function The Turkish economy relies significantly on imports of critical raw and intermediate goods, equipment, and machinery for export growth. Therefore, we extend the SCA export model, in line with Muscatelli et al. (1995) and Saygılı and Saygılı (2011), by adding import growth. The export growth equation in dynamic form is specified as follows: the growth rate of exports ẋ is a function of foreign income growth ẏ*, the growth of relative prices (ṗ* + ė − ṗ), and import growth ṁ. ε_xy* > 0 captures the income elasticity of exports; this component aggregates the non-price features of export goods linked with diversification, quality, packaging, trustworthiness, and so on. Similarly, ε_xp > 0 represents the relative price sensitivity of exported goods and services; given the positive sign of the coefficient, depreciation of the domestic currency has the potential to stimulate demand for exports, making exports more competitive in foreign markets. Private Consumption and Investment Functions Conventionally, long-run aggregate consumption depends mainly on total disposable income (inclusive of earnings from holding public bonds and assets). We presume that consumption growth is a function of disposable income growth, where ċ is the growth of consumption, ẏ_d is the growth of disposable income, and ε_cy > 0 captures the income elasticity of consumption. The private investment model is derived from the Keynesian accelerator theory, postulating that the growth of gross investment k̇ depends on the growth of domestic income ẏ and the change in the real interest rate ṙ over time; we further extend the model to take into account the import requirements of investment. This extension is in line with Hernandez et al. (2019). In the investment function, ε_ky > 0 captures the accelerator effect, ε_kr < 0 shows the impact of the real cost of funding gross investment, and ε_km > 0 captures the import sensitivity of investment.
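The growth-rate equations referenced above (Equations 1-4) did not survive extraction. The LaTeX block below is a hedged reconstruction built only from the variables, elasticities, and sign restrictions named in the surrounding text; the original article's exact specification (constants, error terms, the precise relative-price term, which demand components enter the import equation) may differ.

```latex
% Hedged reconstruction of the four behavioral equations in growth rates.
% Only the symbols come from the text; the functional forms are assumptions.
\begin{align}
\dot{m} &= \varepsilon_{mc}\dot{c} + \varepsilon_{mg}\dot{g} + \varepsilon_{mk}\dot{k}
         + \varepsilon_{mx}\dot{x}
         + \varepsilon_{mp}\,(\dot{p}^{*} + \dot{e} - \dot{p}),
         & \varepsilon_{mp} &< 0 \tag{1}\\
\dot{x} &= \varepsilon_{xy^{*}}\dot{y}^{*}
         + \varepsilon_{xp}\,(\dot{p}^{*} + \dot{e} - \dot{p})
         + \varepsilon_{xm}\dot{m},
         & \varepsilon_{xy^{*}},\,\varepsilon_{xp} &> 0 \tag{2}\\
\dot{c} &= \varepsilon_{cy}\,\dot{y}_{d},
         & \varepsilon_{cy} &> 0 \tag{3}\\
\dot{k} &= \varepsilon_{ky}\dot{y} + \varepsilon_{kr}\dot{r} + \varepsilon_{km}\dot{m},
         & \varepsilon_{kr} &< 0 \tag{4}
\end{align}
```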
The Government Sector The government budget is expressed in nominal values by the identity below. Here, nominal government spending is represented by G_n, public debt owned by home bondholders is accounted for by B_H, while B_F represents public debt owned by foreign bondholders; real domestic income is captured by Y, P denotes the domestic price level, D is the public deficit, the nominal interest rates paid to domestic and overseas public debt holders are captured by i and i*, respectively, e represents the nominal exchange rate, and t is the tax rate on nominal income. According to this expression, we are in a state of public deficit when total current spending surpasses the tax revenues from domestic nominal income, that is, when G_n + iB_H + i*eB_F exceeds tPY. The long-term behavior of real government expenditure growth, ġ, compatible with constraint (5), is given by Equation (6). In this expression, the budget deficit ratio is represented by w_D; w_BH and w_BF are the shares of public debt owned by home and foreign bondholders (as a percentage of GDP), respectively; ḋ is the growth of the budget deficit; and ḃ_F and ḃ_H are the growth rates of the public debt owned by foreign and home bondholders, respectively. The Balance-of-Payments Condition In the concluding part of the model specification, we model the external equilibrium condition by the identity below: the left-hand side indicates the amount of monetary resources available to finance imports (export revenues, plus the amount of the public deficit financed by foreign capital inflows, minus the interest payments to foreign bondholders). A hedged reconstruction of these identities appears at the end of this passage. Domestic Income Growth Following Soukiazis et al. (2014), and incorporating imports into the investment and export functions, the growth rate of domestic income can be obtained as in Equation 9. The domestic income growth model specified above shows, inter alia, that the growth of real GDP is a function of both internal and external imbalances as well as relative prices. Precisely, the numerator is decomposed into various components: the first term captures the effect of foreign demand on real GDP (i.e., domestic growth), the second term reveals the substitution effect working through adjustments of relative prices, the third term shows how the trade volume affects domestic growth, and the last component of the numerator captures the impact of internal imbalances on domestic growth. The denominator measures the role of the disaggregated import elasticities of the demand components in domestic growth. We employ Equation 9 to explain growth in Turkey. Estimation and Application of the Extended SCA Model for the Turkish Economy We extended the Soukiazis et al. (2014) model further by incorporating Turkey's high intensity of imports in the aggregate demand components. We estimate and test the model for Turkey and provide an assessment of the expected economic performance of Turkey, thereby identifying policy options for higher economic growth. As discussed in the overview of the Turkish economy in the "Internal and External Imbalances, and Growth in Turkey" section, the country faced fiscal imbalances in the 1990s and external imbalances after 2001. We test and verify the extended growth model in line with the SCA approach, which takes into account internal and external constraints and relaxes the assumption that relative prices are neutral for Turkey.
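The budget and balance-of-payments identities are likewise missing from the extracted text. The sketch below is a hedged reconstruction consistent with the variable definitions given above; the symbol λ (share of the public deficit financed by foreign capital inflows) is an illustrative assumption not used in the original, and the equation numbers are guesses anchored only to the text's reference to "constraint (5)".

```latex
% Hedged reconstruction. Only the symbol definitions come from the text;
% \lambda (foreign-financed share of the deficit) is an illustrative assumption.
\begin{align}
G_{n} + i B_{H} + i^{*} e B_{F} - t P Y &= D \tag{5}\\
\underbrace{P X + \lambda D - i^{*} e B_{F}}_{\text{resources financing imports}}
  &= e P^{*} M \tag{7}
\end{align}
```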
The initial stage entails estimating all four system equations simultaneously. Therefore, the import demand equation (1), the export demand equation (2), the private consumption equation (3), and the investment equation (4) are estimated simultaneously to obtain the elasticity coefficients needed to compute the reduced form of domestic income growth specified in Equation 9. As indicated earlier, this article uses growth rates of all the variables spanning 1990Q1-2016Q2 to estimate the four-equation system. The definitions of the variables and the data sources are given in Appendix A. We used dummy variables for the crisis periods 1994, 2001, and 2009. We estimated our four-equation system by three-stage least squares (3SLS), which is more efficient because it controls for the endogeneity of regressors and cross-equation error correlation. Table 1 gives the 3SLS estimation results, in which the endogeneity of the variables in the models and the cross-correlation of the residuals across equations are taken into account. Exports, imports, consumption, and investment in growth terms, as well as the growth rates of domestic income, government final expenditures, domestic disposable income, the real effective exchange rate, and the real domestic interest rate, are assumed to be endogenous variables. The other variables in the system, including the lagged values of some of the variables, are assumed to be exogenous (see Table 1). To check the robustness of our 3SLS estimation results, we estimated the four equations individually by two-stage least squares (2SLS) using the same set of instruments. The Sargan test for over-identification reveals that the instruments used are valid and uncorrelated with the error terms; it also demonstrates that the excluded instruments are correctly excluded from the estimated equation. The Breusch-Godfrey test results show that the null hypothesis of no serial correlation cannot be rejected at the 5% significance level. The autoregressive conditional heteroskedasticity (ARCH) test results confirm homoscedasticity of the residuals. Looking at the 3SLS estimation results in Table 1, we can assert that the parameter estimates largely conform to the underlying theoretical postulates. Generally, the elasticities display their expected signs and significance. The relative price elasticity in both the import and export equations is highly significant with the correct sign. The relative price elasticity of imports being higher than that of exports in absolute value indicates that imports are more sensitive to relative price changes. Relative price elasticities are nevertheless lower than the income elasticities; this finding is in line with the literature, confirming that trade is more sensitive to income than to relative price changes. In the import demand function, all the variables have the correct signs and are statistically significant at the 1% level, except government expenditure, which is significant at the 10% level. The import elasticities of investment and exports are 0.328 and 0.318, respectively, while the consumption elasticity is 1.251, indicating that imports increase more than proportionally with consumption. The estimation results in Table 1 also show that the income elasticities of consumption and investment are high, 1.034 and 2.370, respectively, the latter confirming the accelerator principle in the investment function.
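A minimal sketch of how such a four-equation system can be estimated jointly by 3SLS in Python with the linearmodels package. The file name, column names, and instrument set are illustrative assumptions, not the authors' actual data or specification; for brevity the sketch also treats relative prices and the interest rate as exogenous, whereas the article treats them as endogenous.

```python
# Hedged sketch: joint 3SLS estimation of the four-equation demand system
# (imports, exports, consumption, investment, all in growth rates).
import pandas as pd
from linearmodels.system import IV3SLS

df = pd.read_csv("turkey_growth_rates_1990q1_2016q2.csv")  # hypothetical file
df["const"] = 1.0

# Each equation: dependent variable, exogenous regressors, endogenous
# regressors, and excluded instruments (all names are placeholders).
equations = {
    "imports": {
        "dependent": df["m"],
        "exog": df[["const", "rel_price"]],
        "endog": df[["c", "g", "k", "x"]],
        "instruments": df[["ystar", "m_lag", "c_lag", "x_lag"]],
    },
    "exports": {
        "dependent": df["x"],
        "exog": df[["const", "ystar", "rel_price"]],
        "endog": df[["m"]],
        "instruments": df[["m_lag", "k_lag"]],
    },
    "consumption": {
        "dependent": df["c"],
        "exog": df[["const"]],
        "endog": df[["yd"]],
        "instruments": df[["ystar", "yd_lag"]],
    },
    "investment": {
        "dependent": df["k"],
        "exog": df[["const", "r"]],
        "endog": df[["y", "m"]],
        "instruments": df[["ystar", "y_lag", "m_lag"]],
    },
}

# 3SLS pools the 2SLS moment conditions and exploits cross-equation error
# correlation, which is why it is more efficient than equation-by-equation
# 2SLS when the system is correctly specified.
results = IV3SLS(equations).fit(cov_type="robust")
print(results)
```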
Interest rate elasticity of investment is −0.163, which is significant but low relative to the other variables. In our specification of the investment equation, we added import growth as a new variable because, with the liberalization of the external account in Turkey, the production structure changed to employ a significant amount of imported inputs, machinery, and equipment. The import elasticity of investment is 0.370 and statistically significant. Estimation results for the export equation show that the world income elasticity of exports is 1.721, which implies that exports are highly sensitive to foreign demand. This high sensitivity should be a cause for concern in periods of slowing foreign income and loss of access to foreign markets. A novel contribution to the applicability of the SCA model for Turkey is the inclusion of imports in the export demand function. The result shows that import growth positively stimulates the export growth of Turkey; the import elasticity of exports is 0.312. This owes to the change in the production structure of the economy after external account liberalization and the large capital inflows, which resulted in overvaluation of the domestic currency. After estimating the core parameters of the structural model, we can compute the growth rate of domestic income in Turkey compatible with external and internal constraints as given in Equation 9. Table 2 reports the values necessary to compute the growth rate of domestic income in Turkey. Greek letters ε_ij are elasticities obtained from the 3SLS estimation of the models (Table 1; notes to the table: 3SLS = three-stage least squares; *10% significance level; **5% significance level; ***1% significance level); all the other variables are quarterly based annual averages over the 1990Q1-2016Q2 period (see Appendix A for the definitions of the variables). Three main growth rates are computed. The first is the growth rate defined in Equation 9, in which internal and external imbalances and relative prices are important adjustment variables and the high import intensity of the aggregate demand components is incorporated. The second growth rate is obtained from the SCA model. The third is the growth rate of the original Thirlwall model. The original Thirlwall model requires the aggregate income elasticity of demand for imported goods and services, so an import function of the following form needs to be estimated: ṁ_t = const + ε_mp(ṗ* + ė − ṗ) + ε_my ẏ_t + v_t. Ordinary least squares (OLS) estimation of this model may be biased and inconsistent because of the endogeneity of the growth of domestic income. Therefore, we used a 2SLS estimator to make the results more consistent. We use the growth of private consumption ċ, the growth of investment k̇, and the growth of exports ẋ as instruments for the growth of domestic income ẏ. The aggregate import elasticity with respect to domestic income growth is obtained as 2.004. Comparing these predictions with the actual average growth rate of domestic income for the period 1990Q1-2016Q2 (4.092%), the following observations can be outlined: 1. The growth predicted by the original Thirlwall's law is 3.727%, using the aggregate income elasticity of imports (2.004), which is below the actual growth rate of 4.092%. The Turkish economy grew at a higher rate (by 0.365% per annum) than that allowed by balance-of-payments equilibrium.
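Point 1 applies Thirlwall's simple rule y_B = ẋ/π, the growth rate of exports divided by the income elasticity of imports. A minimal numerical check of the comparison using the figures reported above; the export growth rate is backed out from those figures rather than quoted from the article.

```python
# Hedged sketch of the Thirlwall's-law check in point 1: y_B = x_dot / pi.
pi = 2.004        # aggregate income elasticity of imports (2SLS estimate)
y_actual = 4.092  # actual average GDP growth, 1990Q1-2016Q2 (%)
y_bopc = 3.727    # growth predicted by Thirlwall's law (%)

# Export growth implied by the two reported numbers (not quoted in the text).
x_dot_implied = y_bopc * pi
excess = y_actual - y_bopc  # growth in excess of the BOP-constrained rate

print(f"implied export growth: {x_dot_implied:.3f}% per annum")
print(f"actual growth exceeds BOP-constrained growth by {excess:.3f} pp")
```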
Recall that Thirlwall's law assumes that the balance of payments is in equilibrium, relative prices are neutral in the sense that they do not play any significant role in growth, and capital flows and internal imbalances are not considered in the model. 2. The growth predicted by the Soukiazis et al. (2014) model, where relative prices are not neutral and internal and external imbalances are included, is 5.868% per annum, a higher average growth rate (by 1.776 percentage points) than the actual growth rate. 3. Finally, the growth rate obtained from the extended model, which takes into account external and internal imbalances, non-neutral relative prices, and the import content of the aggregate demand components, is 4.836%. This growth rate is very close to the actual growth rate but slightly overpredicts the realized growth rate in Turkey; the difference between the actual and the predicted growth rate in this case is 0.744% per annum. These results imply that Turkey could grow at a higher rate than it actually did without exacerbating external and internal imbalances. Our results show that if we consider internal and external imbalances, the non-neutrality of relative prices, and the import content of the aggregate demand components, the model predicts a slightly lower growth rate than the SCA model. Turkey's import sensitivity of the demand components is quite high, especially that of consumption (ε_mc = 1.251); this result is not surprising, because a high trade deficit without sustainable financing resources will constrain growth. At the aggregate level, the income elasticity of import demand is high (ε_my = 2.004), showing that imports grow by more than twice the increase in domestic income; this evidence also explains the high import sensitivity of the components of demand. Therefore, the multiplier effects of the demand components on growth are not substantial in the Turkish economy because they are offset by the increase in imports. Many countries import large volumes of raw materials and intermediate goods to produce exportable and domestic goods as well as final consumer goods; what matters is that the resulting final products should contain high domestic value added. In international markets, most produced goods and exports contain a substantial share of imported components, but to gain from them, the value added embodied in exports must be much higher. Turkey produced (and exported) low value-added domestic goods despite the move from low to medium or medium-high technology exports in recent years (Turkstat, 2018). Meanwhile, the share of the general service sector, including construction, in the overall economy was about 45% at the beginning of the 1990s and has risen significantly, representing 61% of gross value added in 2016, against 20% in industry and 5% in agriculture (Turkstat, 2018); there was a significant drop in the shares of the industrial and agricultural sectors. Most of the service sector consists of a high number of micro enterprises with a substantial proportion of nontradables and high informality. Therefore, shifting to more tradable sectors would improve the external balance and growth performance of the country, while decreasing informality in the economy would improve the internal balance and hence contribute to economic growth. Scenario Analysis Here, we design some feasible policy scenarios to identify the most suitable policies for positioning Turkey on a path of rapid and sustainable growth. This analysis is carried out within the framework of the extended SCA model and focuses on attaining external equilibrium. 1.
The model used here assumes that relative prices are not neutral. We check the effect of a depreciation of the domestic currency by assuming that the average growth rate of real relative prices for the whole period changes from (ṗ* + ė − ṗ) = 0.0144 to 0.0216, representing depreciation of the domestic currency. Our predicted growth rate increases to 4.9188%. This finding suggests that a currency depreciation could work as a stimulus to growth by increasing the competitiveness of the country in foreign markets, provided domestic inflation is kept under control. 2. An alternative policy option could be to reduce the import sensitivity of exports from ε_mx = 0.32 to ε_mx = 0.25. In this case, our predicted growth rate increases from 4.8364% to 5.099%. A large import content of exports can be an impediment to growth because the exports' multiplier effects on income are crowded out by higher imports. Reducing the import content of exports and encouraging the use of domestic inputs are appropriate policies to achieve faster growth in Turkey. In the globalized world we live in, most exports have some import content, but gaining from trade requires adding significant domestic value. 3. The growth rate of Turkey is also sensitive to the import content of consumption and investment. Reducing the import sensitivity of consumption to ε_mc = 1.0 increases the predicted growth rate from 4.8364% to 5.5214%. Further reducing the import sensitivity of investment from ε_mk = 0.328 to ε_mk = 0.25 increases the predicted growth rate to 5.5169%. Given these results, to foster economic growth in Turkey, policies should aim at reducing the import dependence of the elements of domestic demand, especially consumption. 4. Reducing the imports to income ratio by 5% (from 0.238 to 0.226) increases the predicted growth rate from 4.8364% to 6.3575%; alternatively, increasing the exports to income ratio by 5% results in a 6.2336% growth rate. Furthermore, if we assume that the imports share equals the exports share (w_X = w_M = 0.2164, implying a reduction in imports), the predicted growth rate will be 7.8232%, while setting the exports share equal to the imports share (w_X = w_M = 0.2382, implying an increase in exports) yields a predicted growth rate of 7.7320%. Therefore, the most appropriate way to attain faster growth for Turkey is related to changes in the structure of exports and imports. 5. An increase in the foreign income elasticity of export demand from ε_xy* = 1.7212 to 2.5 (as in South Korea) will increase predicted growth from 4.8364% to 5.3384%. 6. An increase in the foreign growth rate from 0.02 to 0.025 will increase the growth rate from 4.8364% to 5.1171%. 7. An increase in the foreign holding of debt (a reduction of the domestic holding from 0.65 to 0.4) will increase the predicted growth rate from 4.8364% to 5.5287%. Non-resident flows into the government debt securities market may lower the cost and smooth the cyclicality of domestic funding/savings over the years; therefore, capital inflows are also very important for growth in Turkey. 8. Fiscal policy toward reducing income taxation: the average income tax rate during the 1991-2016 period is about 20%. If this tax rate is reduced from t = 20% to t = 10% (everything else constant), the growth rate predicted by the model is ẏ_a = 3.756%, which is slightly higher than that found in Table 1 (ẏ_a = 3.714%). This indicates that more friendly taxation will not significantly stimulate growth in Turkey. 9.
Government budget deficit policy toward reducing the public deficit and debt ratios: the period averages of the budget deficit and debt to GDP ratios in Turkey are relatively low, w_D = 0.04 and w_B = 0.41. A further cut in the budget deficit from 0.04 to 0.03 would reduce growth to 3.298%. However, as the debt to GDP ratio is low, increasing it from 0.41 to 0.60 to stimulate the economy, which requires raising the budget deficit to GDP ratio from 0.04 to 0.05, generates relatively higher growth of 4.141%. 10. The period average of domestic interest rates is relatively high in Turkey. A more favorable monetary policy by the Central Bank, aimed at reducing the cost of financing the economy by 20% (implied Δi = −0.0094), could help the economy grow at 5%, which is much higher than both the actual and the predicted growth rates. On the contrary, if foreign interest rates rise from the period average of 4.3% to 5% (implied Δi* = 0.00723), the predicted growth rate falls significantly, to 3.205%; if they reach 7% (Δi* = 0.02723), the growth rate will be 2.205%. These exercises show that growth in Turkey is very sensitive to changes in domestic and foreign interest rates, owing to both domestic and foreign debt. Therefore, financing the domestic economy at lower interest rates is a considerable stimulus to growth, implying that, to grow faster, Turkey should reduce the cost of financing. The most effective policies to achieve faster growth in Turkey are related to the external sector; Turkey should implement structural reforms aimed at reducing chronic current account imbalances. Lowering the imports to income ratio or increasing the exports to income ratio will produce higher growth rates. Depreciation of the domestic currency also acts as a stimulus to growth. Conclusion and Policy Implications We extended the Soukiazis et al. (2014) model further by incorporating Turkey's high intensity of imports in the aggregate demand components. We estimated the model for Turkey and provided an assessment of its expected economic performance, thereby identifying policy options for higher economic growth. Our results show that Thirlwall's law closely predicts the average growth rate of the Turkish economy; Thirlwall's law assumes that external trade is balanced, public finances are in equilibrium, and relative prices are neutral. In the SCA model, which allows for external sector and public sector imbalances and non-neutral relative prices, the predicted growth rate is significantly higher than the actual average growth rate over the period under investigation. This is mainly consistent with the external trade disequilibrium that the country has been accumulating over the years, together with the significant public sector imbalances of the 1990s in Turkey. The extended SCA model, which considers internal and external imbalances, the non-neutrality of relative prices, and the import content of the aggregate demand components, predicts a slightly lower growth rate than the SCA model. The model estimated here is a more complete one because it incorporates the foreign content of the aggregate demand components; we can think of this extended SCA model as more realistic in predicting the sustainable growth of the economy. Our scenario analysis clearly shows that the most effective policies to achieve faster growth in Turkey are related to the external sector; Turkey should put every effort into obtaining a positive net trade balance.
Long-awaited structural reforms should be implemented to ease the external constraints on economic growth in Turkey. Lowering the imports to income ratio or increasing the exports to income ratio will produce higher growth rates. It is also shown that economic growth in Turkey depends highly on external demand, while depreciation of the domestic currency also acts as a stimulus to growth. It should be noted that this article estimates the aggregate import function in line with the SCA-BOPCG framework. Therefore, a future direction of research is to estimate disaggregated import functions, specifically for intermediate, capital, and final goods, to determine their precise implications for the Turkish economy. Appendix A: Description of the Variables and Data Sources
ẏ_t: Annual real gross domestic product (GDP) growth rate - GDP at 1998 constant prices (national currency, quarterly based annual percentage change)
ċ_t: Annual growth rate of private final consumption - Private final consumption expenditure at 1998 constant prices (quarterly based annual growth rate)
ẋ_t: Annual growth rate of real exports - Exports of goods and services at 1998 constant prices (national currency, quarterly based annual percentage change)
ṁ_t: Annual growth rate of real imports - Imports of goods and services at 1998 constant prices (national currency, quarterly based annual percentage change)
k̇: Annual growth rate of investment - Gross fixed capital formation at 1998 constant prices (national currency, quarterly based annual percentage change)
ṗ_t: Annual growth rate of the GDP deflator (quarterly based annual percentage change)
w_G: Share of government expenditure in GDP - Total government expenditure minus interest payments, the ratio of nominal government expenditure to nominal GDP (as a percentage of GDP)
w_D: Share of government deficit in GDP - Net lending (-), net borrowing (+), the ratio of the nominal government deficit to nominal GDP (national currency, quarterly based annual percentage change)
w_B: Share of government debt in GDP - General government net borrowing (the ratio of nominal government net borrowing to nominal GDP)
w_M: The ratio of nominal imports to nominal GDP (national currency)
w_X: The ratio of nominal exports to nominal GDP (national currency)
t: Share of total government revenues in GDP
i: Nominal long-term interest rates
ġ_t: Growth rate of government final consumption expenditure (quarterly based annual percentage change)
ẏ_d: Growth rate of real disposable income (quarterly based annual percentage change)
e: Nominal effective exchange rate - Price of domestic currency in terms of US$, computed using the same countries as in the calculation of the real effective exchange rates
(P*e/P): Real effective exchange rate index (2010 = 100). Computed by the authors using data from the OECD (Organisation for Economic Co-operation and Development), Main Economic Indicators (database), http://dx.doi.org/10.1787/data-00052-en (January 5, 2017). We calculated the consumer price index (CPI)-based real effective exchange rate. The following countries are included: Germany, Italy, France, the United Kingdom, Spain, the United States, South Korea, Holland, Belgium, Greece, Japan, Switzerland, Austria, Sweden, Denmark, Finland, Portugal, Canada, Australia, Iceland, and Ireland. Weights are determined using bilateral trade flows.
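The appendix describes a CPI-based real effective exchange rate built from bilateral trade weights over 21 partner countries. A minimal sketch of the standard geometrically weighted formula; the weights, rates, and price indices below are illustrative placeholders, not the authors' data.

```python
# Hedged sketch: CPI-based real effective exchange rate (REER) with
# geometric trade weights, REER = prod_i (e_i * CPI_i / CPI_home) ** w_i.
import numpy as np

w = np.array([0.5, 0.3, 0.2])              # bilateral trade weights, sum to 1
e = np.array([7.2, 6.5, 7.8])              # home currency per partner currency
cpi_partner = np.array([104.0, 108.0, 103.0])
cpi_home = 180.0

# Real bilateral rates: a rise means home goods become relatively cheaper.
real_bilateral = e * cpi_partner / cpi_home
reer = np.prod(real_bilateral ** w)

# Rebase so that the chosen base period equals 100, as in the article (2010 = 100).
reer_base = 1.0  # REER computed the same way for the base period (placeholder)
reer_index = 100.0 * reer / reer_base
print(f"REER index: {reer_index:.1f}")
```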
High-Throughput Sequencing to Reveal Genes Involved in Reproduction and Development in Bactrocera dorsalis (Diptera: Tephritidae) Background Tephritid fruit flies in the genus Bactrocera are of major economic significance in agriculture, causing considerable loss to the fruit and vegetable industry. Currently, there is no ideal control program. Molecular approaches are an effective means of pest control at present, but genomic and transcriptomic data for members of this genus remain limited. To facilitate molecular research into reproduction and development mechanisms, and ultimately effective control of these pests, an extensive transcriptome for the oriental fruit fly Bactrocera dorsalis was produced using the Roche 454-FLX platform. Results We obtained over 350 million bases of cDNA derived from the whole body of B. dorsalis at different developmental stages. In a single run, 747,206 sequencing reads with a mean read length of 382 bp were obtained. These reads were assembled into 28,782 contigs and 169,966 singletons. The mean contig size was 750 bp, and many nearly full-length transcripts were assembled. Additionally, we identified a great number of genes that are involved in reproduction and development as well as genes that represent nearly all major conserved metazoan signal transduction pathways, such as insulin signal transduction. Furthermore, transcriptome changes during development were analyzed. A total of 2,977 differentially expressed genes (DEGs) were detected between the larval and pupal libraries, while there were 1,621 DEGs between adults and larvae, and 2,002 between adults and pupae. These DEGs were functionally annotated with KEGG pathway annotation, and 9 genes were validated by qRT-PCR. Conclusion Our data represent the extensive sequence resources available for B. dorsalis and provide for the first time access to the genetic architecture of reproduction and development as well as major signal transduction pathways in the Tephritid fruit fly pests, allowing us to elucidate the molecular mechanisms underlying courtship, ovipositing, and development, and enabling detailed analyses of the signal transduction pathways. Introduction Tephritid fruit flies are of major economic importance in agriculture, causing damage to fruits and other plant crops. The genus Bactrocera is of worldwide notoriety for its destructive impact on agriculture. Adults lay eggs in fruit, and the hatched larvae feed inside. Species of fruit flies in the genus Bactrocera vary in the number of generations and in the type of host plants used for feeding and ovipositing. Due to the wide host range and high fecundity of the adults, as well as the great adaptability of the larvae, effective control of Bactrocera is of great value. The oriental fruit fly, B. dorsalis, is one of the most important quarantine pests in Asian countries. It can feed on up to 250 different types of fruits and vegetables, causing severe economic loss [1]. Therefore, it is important to identify more molecular targets involved in adult reproduction and larval development in B. dorsalis, in order to formulate simple and effective strategies for agricultural pest control. Compared with the model insect species whose genomes have been sequenced, such as Drosophila melanogaster, Anopheles gambiae, and Bombyx mori, genomic sequence resources for Tephritid fruit flies are limited.
The EST approach to obtaining detailed information on transcriptome signatures related to physiological processes in the medfly Ceratitis capitata has indicated the significance of large-scale gene discovery in insects lacking genomic sequence information [2]. However, gene discovery using the 3730 DNA analyzer in C. capitata cDNA libraries of embryos and adult heads was limited to genes involved in reproduction, sex determination, and chemosensory perception [2]. Although the resources available on NCBI for B. dorsalis have facilitated the study of gene function during yolk protein synthesis [3,4], large-scale molecular studies in B. dorsalis remain limited, and there is a lack of large-scale sequencing in species of Tephritidae. The introduction of novel high-throughput sequencing technologies has greatly facilitated studies of non-model organisms, including insects [5,6]. Next-generation sequencing technologies such as 454 pyrosequencing and Illumina have been widely used to identify genes involved in several physiological processes and behaviors. An EST library has been built for the flesh fly Sarcophaga crassipalpis utilizing massively parallel pyrosequencing with the Roche 454-FLX platform [7]. A transcriptomic study in the lepidopteran model host Galleria mellonella by 454 pyrosequencing has created extensive resources for Galleria, especially for immune-related genes [8]. The 454 sequencing technology has also been used in the milkweed bug Oncopeltus fasciatus to discover genes participating in several early developmental processes [9]. The transcriptome of Bemisia tabaci has been built by Illumina sequencing, and a large number of genes associated with development and insecticide resistance were identified [10]. A transcriptome analysis of B. dorsalis focusing mainly on genes involved in insecticide resistance was recently performed by Illumina sequencing [11]. However, a comprehensive identification of genes involved in reproduction and development, as well as major signal transduction pathways, in B. dorsalis remains unavailable. In the present study, we present the results of the sequencing and assembly of the transcriptome of B. dorsalis (whole body) at different developmental stages (larvae from three instars, pupa and adult) using 454-pyrosequencing technology. Genes involved in reproduction and development, as well as nearly all major conserved metazoan signal transduction pathways, were largely identified. Additionally, a great number of differentially expressed genes were obtained and functionally annotated, and the gene expression patterns of some of these genes were verified by qRT-PCR. This transcriptome is undoubtedly valuable for molecular studies of the underlying mechanisms of reproduction and development and is a useful resource for further exploring signal transduction pathways. Furthermore, our data will provide insights for the development of effective and eco-friendly pest control strategies. Results and Discussion Generation and assembly of oriental fruit fly ESTs To obtain an overview of the B. dorsalis gene expression profile during development, cDNA samples from different developmental stages (Table 1) were prepared and sequenced on a 454 GS-FLX machine. A total of 350,865,036 bases from 747,206 sequence reads with a mean read length of 382 bp were obtained (Table 2, Figure S1).
The number of reads from the larval, pupal and adult samples was 343,907, 355,839, and 217,426, respectively, with mean read lengths of 398 bp, 380 bp, and 359 bp, respectively (Figure S1). These raw data were assembled into 28,782 contigs and 169,966 singletons, and we finally generated 48,876 unigenes. The mean contig size was 750 bp, with lengths ranging from 43 bp to 11,208 bp (Table 2, Figure 1). The contig size distribution revealed the following: more than half of the contigs (14,912; 51.81%) were between 500 and 1000 bp in length; 30.14% (8,676) were less than 500 bp; 17.48% (5,030) were between 1000 and 3000 bp; and 0.57% (165) were more than 3000 bp (Figure 1). The previously reported B. dorsalis transcriptome by Illumina sequencing generated 49,804 unigenes with a mean size of 456 bp, and less than 10% (4,404) of the unigenes were longer than 1000 bp [11]. Therefore, compared with the previous report, our new transcriptome by 454-pyrosequencing makes more detailed and general genetic data available that will facilitate large-scale discovery and utilization of genetic resources for B. dorsalis. To demonstrate the quality of our sequencing data and sequence assembly, 5 contigs (>750 bp) and 5 singletons were randomly selected for RT-PCR analysis. Agarose gel electrophoresis showed that 9 of the 10 primer pairs produced a product of the expected size, and the PCR products were further confirmed by Sanger sequencing (data not shown). Annotation of predicted proteins BLASTX alignments (E-value cut-off of 10^-4) between the predicted protein sequences and several protein databases, including GenBank non-redundant (nr) and Swiss-Prot, showed that a total of 11,859 (35.8%) predicted proteins could be annotated with known biological functions, whereas the remainder will require more genetic data, which are currently lacking for the fruit fly. To determine which of the assembled sequences had counterparts in certain insect species, orthologous genes shared between B. dorsalis and three insect model species were compared: the Dipterans D. melanogaster and A. gambiae, and the Lepidopteran B. mori. We used our B. dorsalis ESTs as queries in a BLASTn search (E-value cut-off of 10^-10) against the FlyBase, VectorBase and SilkBase databases. Given that p values are influenced by the size of the database, a score value of 150 was chosen as the exclusion criterion to make BLAST hits between databases comparable. The results showed 9,295 hits, more than 4,000 of which are common to all the insect species and therefore belong to the core invertebrate genes (Figure 2). There were 7,463 identifiable genes shared between Drosophila and Bactrocera, indicating good coverage of the Bactrocera transcriptome, given that many other genes are species-specific. The number of homologous genes shared between Anopheles and Bactrocera was 6,262, and between Bombyx and Bactrocera, 6,087. Both B. dorsalis and D. melanogaster belong to Brachycera within Diptera, while A. gambiae belongs to Nematocera within Diptera, so B. dorsalis and D. melanogaster are closely related to each other, whereas A. gambiae is more distantly related. It is unsurprising that the number of homologous genes shared between B. mori and B. dorsalis is the smallest, because the Lepidopteran Bombyx is the most distantly related to the Dipteran Bactrocera. The identity distribution and species distribution were then analyzed (Figure 3).
For the identity distribution of the predicted proteins, most of the hits (39%) had 60% to 80% identity with other insects in the nr database, whereas 30% of the sequences had identity greater than 80% (Figure 3A). The species distribution of the top BLAST hits against the nr database for the Bactrocera transcriptome showed that Bactrocera genes had the greatest number of matches with Glossina and Drosophila genes. Among them, 24% of the distinct sequences had first hits with sequences from the Dipteran species Glossina morsitans morsitans and 17% with sequences from another Dipteran species, Drosophila, followed by other species within Diptera, including Ceratitis (8%), Bactrocera (8%), Lucilia cuprina (4%) and Stomoxys calcitrans (3%) (Figure 3B). The other sequences had hits with other species, such as the Coleopteran Tribolium castaneum (3%) and the higher mammalian species Rattus norvegicus and Homo sapiens (3%). Gene ontology and clusters of orthologous groups classification Gene ontology (GO) assignment programs were utilized for the functional categorization of annotated genes. In many cases, multiple terms were assigned to the same transcript. These sequences were categorized into 43 main functional groups belonging to 3 categories: biological process, molecular function, and cellular component (Figure 4). Among the biological processes, the dominant GO terms were grouped into either metabolic (35%) or cellular (16%) processes (Figure 4A). Within the molecular function category, there was a high percentage of genes with binding (23%), catalytic activity (21%), protein binding (13%), and hydrolase activity (13%) (Figure 4B). For cellular components, assignments were mostly given to cell (27%), intracellular (23%), and cytoplasm (18%) (Figure 4C). This GO assignment result is similar to that of the previously sequenced B. dorsalis transcriptome, in which binding, cell and metabolic process were the three largest groups [11]. Genes involved in reproduction In insects, sexual reproduction is a very important physiological process and is critical to the maintenance of a population. Therefore, the identification of genes related to reproduction is significant and would be helpful for pest control and for evaluating the molecular mechanisms of reproduction in higher-order species. From the B. dorsalis transcriptome, 14 genes were identified by BLASTX analyses with significant hits to 14 different Dipteran genes known to be involved in reproduction (Table 3). These genes can be divided into 3 groups. In the first group, a total of 7 genes are predicted to be involved in male courtship behavior. In Drosophila, mutations in the takeout gene lead to little or no courtship behavior, and mutations in the technical knockout gene cause unsuccessful male courtship behavior [12,13]. The quick-to-court gene, which encodes a predicted coiled-coil protein mainly expressed in the olfactory organs, central nervous system, and male reproductive tract, plays important roles in male courtship behavior; mutations in this gene cause abnormal male-male or accelerated male-virgin female courtship [14]. We also found a gene homologous to the Drosophila timeless gene in this transcriptome; copulation times of males are extended in timeless mutants [15]. Males carrying mutations in lingerer, which is involved in the initiation and termination of copulation, disengage their genitalia abnormally [16]. Mutations in calcium calmodulin
kinase II disrupt the ability of the male to learn to avoid courting males and mated females [17]. (Table 3 lists the oriental fruit fly assembled sequences with best-hit matches to Dipteran genes involved in reproductive behaviors.) The ken and barbie gene encodes a putative transcription factor, and mutants feature malformation of the terminalia in adult Drosophila [18]. Sequences in the second group share a high degree of homology with genes involved in female reproduction in Drosophila and mosquitoes, including the Drosophila logjam, sphingosine kinase 2, forkhead box, and sex peptide receptor (SPR) genes. The logjam gene encodes a predicted protein homologous to the EMP24/GP25 transmembrane components of cytoplasmic vesicles and may participate in the intracellular trafficking of positive signals for oviposition; mutants for this gene are unable to oviposit mature eggs but display normal courtship behavior and fertility [19]. Mutations in sphingosine kinase 2 have no effect on viability at any developmental stage or on adult longevity, but they do reduce flight ability and fecundity, which might be caused by the retention of mature eggs in the ovaries [20]. The forkhead-box (Fox) proteins constitute a large and diverse group of transcription factors characterized by a conserved 110-amino acid 'Fox' DNA-binding domain. Fox genes belonging to various subgroups (A-Q) have been found to be involved in many biological processes, including development, metabolism, and immunoregulation [21][22][23][24][25]. Three Fox gene homologs were found in the B. dorsalis transcriptome: FoxL, FoxO and FoxP. Fox transcription factors reportedly play important roles in regulating reproduction in Aedes aegypti; silencing of mosquito FoxL and FoxO reduces amino acid-induced vitellogenin gene expression, which leads to fewer eggs being laid [26]. Another female reproductive gene is SPR, which mediates the post-mating switch in the reproductive behaviour of female Drosophila; knockdown of SPR results in very few eggs being laid after mating, and RNAi-treated females remate frequently [27]. Genes involved in signal transduction Genes involved in signal transduction pathways are listed in Table S1. "Signal transduction mechanisms" constitutes the majority of the clusters within the metabolism pathway classification of the B. dorsalis transcriptome (Figure 5), including hormone signaling, insulin signaling, MAPK, Wnt, Notch, and Hedgehog (Table S1). Among these signaling pathways, the insulin signaling pathway is one of the most important. Insect growth and developmental processes, among which molting and metamorphosis are the most important physiological events, are regulated by two major hormones: the steroid 20-hydroxyecdysone (20E) and the sesquiterpenoid juvenile hormone (JH). Both the 20E and JH signal transduction pathways have been well studied [33][34][35][36][37]. Recently, another conserved signaling pathway, the insulin signaling pathway, has been studied in Drosophila and demonstrated to play an essential role in controlling insect body, organ, and cell size. Several proteins are involved in this pathway, including PI(3)K, PTEN, and Akt/PKB [38][39][40]. Sixteen genes were identified by BLASTX analyses that share high homology with insect genes known to be involved in this pathway (Table S1).
Overexpression of PI(3)K proteins in the wing or eye imaginal discs of Drosophila results in enlarged wings or eyes, respectively, while mutation of this gene results in flies with smaller wings and eyes, caused by changes in both cell size and cell number [38]. Additionally, PI(3)K regulates cell division and cell survival in the imaginal discs by controlling cell number [41]. The gene DPTEN, a homolog of the human tumor suppressor gene PTEN, encodes a putative cytoskeleton-associated molecule with both protein phosphatase and phosphatidylinositol 3,4,5-trisphosphate (PIP3) 3-phosphatase activities. It is known to control cell number and growth by antagonizing PI(3)K, through PI3K-dependent and -independent pathways [38,42,43]. The phosphoinositide-3-OH-kinase-dependent serine/threonine protein kinase Akt (protein kinase B) has been reported to affect cell and imaginal disc size during Drosophila development in an autonomous manner [40]. In the past decade, lipid rafts have been considered critical for the proper compartmentalization of insulin signaling in adipocytes. Flotillin-1, originally identified as an integral membrane protein [44], was reported to recruit a complex of tyrosine-phosphorylated Cbl and Cbl-associated protein (CAP) to the lipid rafts, and this recruitment is required for GLUT-4 translocation in response to insulin [45]. Gene expression profiles among the different developmental stages To identify genes showing differential expression during development, the differentially expressed sequences between each pair of samples were identified (Table S2). There were 2,977 significantly differentially expressed genes detected between the larval and pupal samples, including 838 up-regulated genes (FDR < 0.001) and 1,159 down-regulated genes (FDR < 0.001; Figure 6 and Figure S2A). The large number of differentially expressed genes between these two samples may be attributed to the important molting and metamorphosis processes during the transition from larva to pupa. A cascade of physiological processes occurs during molting, and even more complicated physiological processes take place during metamorphosis, including histolysis of larval tissues, remodeling and formation of adult tissues, and a molting cascade similar to the larval molt [46,47]. Additionally, a total of 1,621 differentially expressed genes were detected between the adult and larval libraries, with 490 up-regulated genes and 605 down-regulated genes (Figure S2B). Between the adult and pupal libraries, 544 genes were up-regulated, while 794 genes were down-regulated, with a total of 2,002 differentially expressed genes (Figure S2C). The number of differentially expressed genes between samples in our transcriptome differs from that of the previously sequenced B. dorsalis transcriptome by Illumina [11]. This may be caused by the different samples in the two transcriptomes: the larval stage was limited to third instar larvae of B. dorsalis in the previous transcriptome, while the larval library in our new 454 pyrosequencing transcriptome consists of all larval stages, including the first, second and third instars. Therefore, our new transcriptome makes more detailed and general genetic data available that will facilitate further study of larval development in B. dorsalis, especially molting and metamorphosis. The top 10 most differentially expressed genes between each pair of samples (larvae vs. pupae, adults vs. larvae, and adults vs. pupae) were analyzed (Table S3).
The results showed that most of the up-regulated genes in larvae and pupae had orthologs in Drosophila and other insects, including those encoding the cuticle protein, the fat body protein, trypsin, and the Ras-related protein. However, nearly 50% of the genes up-regulated in adults had no known orthologs in the NCBI database. These findings might be attributed to the fact that developmentally related genes have been more widely identified in Drosophila and other Dipteran species than adult genes. Functional annotation of differentially expressed genes To understand the functions of DEGs and identify DEGs involved in signal transduction and important physiological processes, such as development and reproduction, all of the DEGs were mapped to terms in the KEGG database and compared with the whole transcriptome background (Table S4). Among all of the genes with KEGG pathway annotation, a total of 547 differentially expressed genes were found between the larval and pupal stages. Most of them were identified as being involved in metabolic pathways, including oxidative phosphorylation, the citric acid (TCA) cycle, and glycolysis/gluconeogenesis. This observation is consistent with that of the previously sequenced B. dorsalis transcriptome and suggests that the metabolic rate of B. dorsalis larvae differs from that of pupae. Between the adult and pupal stages, 416 differentially expressed genes were annotated with KEGG pathway annotation. In addition to oxidative phosphorylation and the TCA cycle, starch and sucrose metabolism-associated genes were also enriched. Adult B. dorsalis primarily feeds on sucrose; thus, genes involved in sucrose metabolism are up-regulated to adapt to food sources. As for the pathways enriched between the adult and larval stages, 337 differentially expressed genes were identified. Among these, the insulin signaling pathway was notably enriched, which might be attributed to the vigorous growth in the larval stage. Verification of differentially expressed genes To further evaluate our DEG library, the expression levels of 9 genes mostly involved in development and reproduction were analyzed by qRT-PCR. Real-time PCR revealed the same expression tendencies as the DEG data, despite some quantitative differences in expression level (Table 4, Figure 7). The effector genes trypsin and Jonah 44E were highly expressed at the larval stage. Two other genes belonging to the nuclear receptor superfamily, Broad and hormone receptor 3 (HR3), were expressed at a high level at the pupal stage. Broad is a well-known molecular marker of pupal commitment and has been reported to mediate the 'status quo' action of juvenile hormone on the pupal-adult transformation in Drosophila and Manduca [48]. Broad is abundantly expressed during the formation of the pupa but not during adult differentiation [34,48], which is consistent with our result. A member of the small GTPase subfamily, rab7, was also expressed abundantly at the pupal stage. Notably, both the serine/threonine-protein kinase AKT and the mediator complex (MED) were highly expressed in both larvae and pupae; this finding is consistent with their roles in controlling cell and imaginal disc size during development in Drosophila [40]. The effector gene adult cuticle protein 1 (ACP1) was expressed significantly at the adult stage, but its specialized cuticle function needs further investigation. Conclusion We have generated a comprehensive transcriptome of B. dorsalis during development using the 454 FLX platform.
The single run produced 28,782 contigs with a mean size of 750 bp. A great number of genes involved in reproduction, development and most major signaling pathways were found in this transcriptome. Additionally, genes differentially expressed at different developmental stages were largely identified and functionally annotated with the KEGG database. To our knowledge, this is the first report using 454 sequencing technology for a Tephritid fruit fly species lacking a reference genome. These data make a substantial contribution to the existing sequence resources for the oriental fruit fly, provide many more potential molecular targets for B. dorsalis control, and may aid in studies of the mechanisms of development and reproduction in fruit flies. Figure 7. qRT-PCR confirmation of the differentially expressed genes between each pair of stages (larvae vs. pupae, adults vs. larvae, and adults vs. pupae). The transcript abundance from the DEG data is shown above each gene. Relative transcript levels were calculated by real-time PCR using 16S rRNA as the standard. 2F, feeding 2nd instar larvae; 2M, molting 2nd instar larvae; 3M, metamorphic molting larvae; P1, prepupa; P2, pupa. Three biological replicates were performed, and the data shown are typical results. doi:10.1371/journal.pone.0036463.g007 Methods Insect culture and sample collection B. dorsalis were cultured in our laboratory at 28°C under a 12 h light: 12 h dark photoperiod. Adult flies were reared on artificial diets (25% yeast extract and 75% sugar) and oviposited into bananas; the hatched larvae fed inside, according to the methods described by Li et al. [49]. Larvae of three instars were collected as one sample. Pupae from different stages (prepupa, new pupa, mid pupa, and old pupa) were collected as another sample. Newly emerged adults and sexually mature adults before and after copulation were collected as the third sample (the sex ratio was 1:1; Table 1). At least ten insects were collected for each stage. RNA extraction, mRNA purification, and cDNA synthesis Each frozen sample was ground in a mortar with liquid nitrogen, and total RNA was isolated using TRIzol reagent (Invitrogen) following the manufacturer's instructions. The concentration of total RNA was determined using a NanoDrop (Thermo Scientific, USA), and the RNA integrity number (RIN) was checked using the RNA 6000 Pico LabChip on an Agilent 2100 Bioanalyzer (Agilent, USA). For mRNA purification, total RNA was incubated with 10 U DNase I (Ambion) at 37°C for 1 h, followed by a purification step using the MicroPoly(A) Purist Kit (Ambion) according to the manufacturer's instructions. The purified mRNA was then dissolved in RNA storage solution, and the final concentration was determined using the NanoDrop. Double-stranded cDNA was synthesized from mRNA according to Ng's full-length cDNA synthesis protocol with some modifications [50]. A GsuI-oligo dT primer was used for first-strand cDNA synthesis with 10 µg of mRNA and Superscript II reverse transcriptase (Invitrogen). After incubation at 42°C for 1 h, the 5′-CAP structure of the mRNA was oxidized with NaIO4 (Sigma) and ligated to biotin hydrazide, which was used to select complete mRNA/cDNA heterodimers by binding to Dynal M280 beads (Invitrogen). After second-strand cDNA synthesis, the polyA tail and 5′ adaptor were removed by GsuI digestion. cDNA sequencing The cDNA was fractionated ultrasonically using cDNA size fractionation columns (Agencourt, USA).
Each cDNA fraction found to be larger than 800 bp was sonicated into the range of 300-800 bp and then pooled with the other cDNA samples ranging from 300 bp to 800 bp. The prepared cDNAs were transformed into single-stranded template DNA (sstDNA) libraries with the GS DNA Library Preparation kit (Roche Applied Science). The sstDNA libraries were clonally amplified in a bead-immobilized form with the GS emPCR kit (Roche Applied Science) and sequenced on a 454 Genome Sequencer FLX instrument. Sequence assembly Raw reads were first cleaned by removing adaptor sequences and low-quality sequences (reads with unknown bases, 'N'), and then assembled into EST clusters (contigs) using CAP3 with the default assembly parameters. The unassembled reads were considered singletons. The raw 454 reads have been deposited in the NCBI Short Read Archive under accession no. SRA047953. Sequence annotation All contigs and singletons were annotated with GetORF from the EMBOSS package [51]. The ORF of each predicted protein was used for BLASTp searches against the Swiss-Prot and NCBI nr databases with the e-value threshold set to 10^-4. GO annotations were also derived based on sequence similarity with GoPipe [52]. Predicted proteins were first used for BLASTp against the Swiss-Prot and TrEMBL databases using an E-value cut-off of 10^-4, and GO annotations were analyzed by GoPipe according to the gene2go data. The COG and KEGG pathway annotations were performed using Blastall against the Cluster of Orthologous Groups database and the Kyoto Encyclopedia of Genes and Genomes database [53]. In this study, we used the default parameters in each approach, and no other custom approach was used. Analysis of differentially expressed genes (DEGs) To analyze genes differentially expressed at the different developmental stages, the number of reads for each of the contigs from the three samples was converted to Reads Per Kilobase per Million reads (RPKM) [54]. Then, the MARS model (MA-plot-based method with Random Sampling) in the DEGseq package was used to calculate the expression abundance of each contig between the three samples. We used the FDR (false discovery rate) to determine the p-value threshold for this analysis; an FDR < 0.001 was considered to indicate significant differential expression. To identify the pathways in which the DEGs were predicted to participate, we mapped all DEGs to terms in the KEGG database and looked for significantly enriched KEGG terms compared with the genomic background. Quantitative real-time PCR (qRT-PCR) verification Nine genes were chosen for confirmation of the DEG data by qRT-PCR using the SYBR Premix Ex Taq kit (Takara, Japan) according to the manufacturer's instructions with a real-time thermal cycler (Bio-Rad, Hercules, CA). TRIzol reagent (Invitrogen, USA) was used to extract total RNA from B. dorsalis at 6 typical developmental stages (Table 1): second instar larvae (feeding and molting larvae), third instar larvae (metamorphosing larvae), prepupae (48 h before pupation), pupae, and adults. At least 10 insects were collected for each sample. First-strand cDNA was obtained from 2 µg of total RNA using M-MLV Reverse Transcriptase (Takara, Japan) with the oligo-anchor R primer (5′-GACCACGCGTATCGATGTCGACT16(A/C/G)-3′). The primers used for qRT-PCR detection of the selected DEGs are listed in Table S5. Relative gene expression data were analyzed using the 2^-ΔΔCt method as described by Ren et al. [55].
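A minimal sketch of the RPKM normalization described above; the contig length and read counts are illustrative, with only the larval library size (343,907 reads) taken from the text.

```python
# Hedged sketch of RPKM normalization:
# RPKM = (reads mapped to contig * 1e9) / (total mapped reads * contig length in bp)
def rpkm(contig_reads: int, contig_length_bp: int, total_reads: int) -> float:
    """Reads Per Kilobase of transcript per Million mapped reads."""
    return contig_reads * 1e9 / (total_reads * contig_length_bp)

# e.g., a 750 bp contig with 1,200 reads in a library of 343,907 reads
# (the larval library size reported above); counts are hypothetical.
print(f"{rpkm(1200, 750, 343_907):.1f} RPKM")
```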
The results were analyzed using one-way analysis of variance (ANOVA). All quantitative PCR assays were performed with three biological and three technical replicates.
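To make the two normalization steps above concrete, the sketch below implements the RPKM conversion and the 2^-ΔΔCt relative-expression calculation in Python. It is an illustrative sketch only: the counts, contig lengths and Ct values are hypothetical stand-ins, not values from the B. dorsalis data.

# Illustrative sketch of RPKM normalization and the 2^-ddCt method.
# All numeric values below are hypothetical, not taken from the study.

def rpkm(read_count, contig_length_bp, total_mapped_reads):
    """Reads Per Kilobase per Million mapped reads for one contig."""
    return read_count * 1e9 / (contig_length_bp * total_mapped_reads)

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-ddCt: target gene normalized to a reference (e.g. 16S rRNA)
    and to a calibrator sample (e.g. the feeding-larva stage)."""
    d_ct_sample = ct_target - ct_ref
    d_ct_calibrator = ct_target_cal - ct_ref_cal
    return 2 ** -(d_ct_sample - d_ct_calibrator)

# A 1.2 kb contig with 300 reads in a library of 5 million mapped reads:
print(rpkm(300, 1200, 5_000_000))                    # -> 50.0
# Hypothetical Ct values for one gene in one stage vs. the calibrator:
print(relative_expression(24.1, 15.3, 26.0, 15.1))   # fold change ~ 4.3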
6,556.2
2012-05-03T00:00:00.000
[ "Biology", "Environmental Science" ]
Tracing the roots of phonetic variation in East Asian Englishes through loan phonology One key aspect of Englishes in the Kachruvian Expanding Circle concerns phonetic features, as they commonly bear traits of speakers' native languages. This article explores language contact phenomena that are likely to cause L1>L2 phonological transfer, which underlies the phonetic specificity of English in East Asia. Drawing on the general theory of loan phonology, the author treats the phonographic adaptation of English loanwords in East Asian languages, compared to Russian, as a reliable source of data that supports research on the nature of phonetic variation in Chinese, Korean, Japanese, and Russian Englishes. The data were obtained through comparative analysis of English loanwords (200 for each language) selected from dictionary sources and of speech samples from the Russian-Asian Corpus of English, which was collected in earlier research. The findings confirm a typological correlation between phonological transfer in loanword phonographic adaptation and in foreign language phonology. In both linguistic contexts, a crucial role is played by syllabic constraints, because, being the fundamental unit of any phonological system, the syllable serves as a domain for its segmental and suprasegmental features. Consequently, various resyllabification phenomena occur in English borrowings in the languages of East Asia, whose phonological typology is distant from that of English; as a demonstration of this same conflict, the syllabic and, hence, rhythmic organization of East Asian Englishes tends to exhibit similar code-copying variation. The greater typological proximity of English and Russian syllable rules leads to fewer manifestations of syllabic and rhythmic restructuring in both loanword adaptations and English spoken by native speakers of Russian. Introduction An integral attribute of the Expanding Circle varieties of English, easily spotted in global English-mediated communication contexts, is their phonetic variation, which helps the listener rather effortlessly identify the primary language (L1) background of a speaker. This is due to the natural immersion of L1 phonology into a bilingual's secondary (L2) phonological system. Phonological transfer is one of numerous language-transfer manifestations accompanying secondary language acquisition. According to Lado (1957: 2), "individuals tend to transfer the forms and meanings, and the distribution of forms and meanings of their native language and culture to the foreign language and culture -- both productively when attempting to speak the language and to act in the culture, and receptively when attempting to grasp and understand the language and the culture as practiced by natives." Nowadays the term transfer, interpreted as both positive and negative L1-upon-L2 influence, covers various linguistic contexts in which speakers shift elements from their mother tongue to L2. These include pidgin and creole development, language convergence, language attrition, code switching and mixing, etc. Loanword phonology, known as "a study of how languages adapt foreign words within their phonological systems" (Crystal 2008: 287), has also typically been attributed to transfer (Broselow 2000). The Dictionary of Linguistics and Phonetics defines a loan as "a linguistic unit (usually a lexical item) which has come to be used in a language or dialect other than the one where it originated" (Crystal 2008: 286).
According to the dictionary, several types of loan process have been identified, with loan words being recognized as the type "where both form and meaning are borrowed, or 'assimilated,' with some adaptation to the phonological system of the new language" (Crystal 2008: 286). The general theory of loan phonology distinguishes two transfer types depending on the direction of cross-linguistic influence, namely, borrowing ("recipient language agentivity," i.e. the assimilation of foreign elements by the speakers' native language) and imposition ("source language agentivity," i.e. the influence of a speaker's native language structures on the second language) (Van Coetsem 1988: 3). Many researchers have argued that via these bidirectional transfer manifestations, loanword phonology can provide data on phonological constraints in the recipient language that are not necessarily evident in native phonology (Hyman 1970, Kawahara 2008, Kang 2010, Hyman & Plank 2018). More recently, linguists have started to reflect on loanword phonology as a source of evidence that is comparable to L2 phonological evidence (De Jong & Cho 2012, Gut, Fuchs & Wunder 2015). However, the comparative methodology of loan phonology vs. L2 phonology has not been widely adopted in linguistics, nor has it been employed to describe the phonologies of world Englishes. This is most likely because the internal phonology and phonotactics of the borrowing language alone cannot account for all cases of transfer manifestation, since some languages develop, as Smith (2009: 155) puts it, "a loanword-specific adaptation strategy." This article aims to show that much of the account of loanword phonographic adaptation (imposition phenomena) runs in direct parallel with the phonetic and/or phonological evidence from L2 speech production and perception, as the processes of adjusting a loanword to a new phonological system and of developing L2 phonological categories in L2 acquisition are both confronted with the primary necessity of satisfying the constraints of the native language. This, in turn, shapes the idiomaticity of loanword sound forms and of L2 phonology, respectively, since in any case of phonological contact, when a language runs into a phonological structure that has no representation in its phonology, phenomena of phonological transfer occur. Another ground on which the analogy between loan phonology and L2 phonology can be drawn is the likelihood of loan adaptation being partially performed by "advanced L2 speakers" (Calabrese & Wetzels 2009: 51). Linguists argue that if this occurs, the loan phonology might be "filtered" by L2 English perception (ibid.), which implies even more similarities, though accrued by a different route. Honna (2006) stresses the great influx of English loanwords into the languages of East Asia. In the process of borrowing from the English-dominated global culture, the recipient languages adjust the sound form (along with the meaning) of English loanwords according to their own phonological rules. In most cases, speakers attempt to approximate English sounds by choosing the acoustic equivalent that most closely correlates with phonemes or phonemic sequences (or other units) available in the recipient language (Calabrese & Wetzels 2009: 11). The recipient sound system quite often comes into natural conflict with that of the source language, at the same time seeking a compromise, which results in a certain phonetic "fine-tuning" of the loanword in its new linguistic domain.
Transferring L1 phonological patterns into loan words may involve not only segmental changes but also L1-specific syllable restructuring, stress assignment, etc. In this article, we assert that comparable adjustments take place when late bilinguals from East Asia acquire the idiomatic phonetic system of the English language. As Berent (2013: 10) states, "we instinctively extend the phonological pattern of our language to all inputs, and when violators are detected, we automatically recode them as licit forms." Hence, there is the likelihood of interlanguage formation in L2 learning, which may eventually become fossilized (Selinker & Lamendella 1980). According to Major (2001: 81), interlanguage usually contains three groups of components: those transferred from L1, those from L2, and universals. Our assumption is that similar elements can be found in loan phonology as well. Problem statement Using the relevant data from our study of loanword phonology compared to second language phonology, this article aims to show that there exist observable traits of likeness and an overall correlation between the two transfer types. The specific methods of phonographic adaptation of English loanwords in Chinese, Japanese, Korean, and Russian are matched with the corresponding phonological features identified in East Asian Englishes and Russian English. The focus in both aspects of our research is laid upon the fundamental linguistic unit, the syllable, given that the structure of the syllable (or mora in Japanese) to a great extent determines the entire organization of speech in any language. For example, Randolph (1989) provides reliable evidence on the influence of syllable-based constraints on properties of English sounds, while Selkirk (1982) shows that principles of syllabification interact with rules of stress assignment, etc. At the same time, linguists have demonstrated the key role the syllable plays in the perception of speech. Knowledge of the phonological rules by which syllables and syllable sequences are organized is crucial for listeners' ability to decode the speech continuum (Massaro 1972, Nusbaum & DeGroot 1991). We proceed from the basic assumption that the syllabic and rhythmic adaptation of English loanwords is indicative of the borrowing language's phonology, which, in another language-contact context, demands syllable code-copying alteration in a given English variety. The choice of East Asian languages, Russian, and English, as well as the corresponding varieties of English, as a research focus for comparative investigation was guided by the fact that English and Russian, on the one hand, and Chinese, Japanese and Korean, on the other, are typologically and genetically distant from one another. Hence, the languages under study have different types of syllable matrix formation and function, and they also differ in their rhythmic organization, the major distinction being stress-timing versus syllable (or mora) timing (see, e.g., Bondarenko et al. 2007). Russian, a language that allows complex consonant clustering in syllable onset and coda positions and exhibits a stress-timed rhythmic prototype (Auer 1993, Zavyalova 2018), is taken as an example of a language genetically and typologically close to English, to show the contrast in both English loanword adaptation and L2 (Russian English) production.
METHODS OF DATA ANALYSIS The present study of phonetic variation in East Asian Englishes through loan phonology employs a complex methodology including general descriptive and comparative methods, as well as experimental (auditory and instrumental) phonetic methods. Since we consistently compare the newly obtained data with the findings of our earlier group research project on phonetic variation in East Asian Englishes (Bondarenko et al. 2007), it is important to outline the methodological basis of those findings and the key results of that investigation. Initially, major dissimilarities in the syllable and rhythmic structures of the languages under study (English and a group of East Asian languages, compared to Russian), which were thought likely to cause the phenomenon of phonological transfer in different situations of language contact, were revealed through the review and comparison of descriptive studies of the corresponding phonological systems. Table 1 summarizes the most salient syllable-related features of the languages compared, including rhythm type (e.g. stress-timed rhythm and relative isochrony in English, cf. stress-timed rhythm in Russian). To explore the syllabic and prosodic organization of East Asian Englishes (compared to Russian English), we designed a multi-stage methodology to allow for the analysis of non-native speech production and perception, as well as for the experimental study of these processes. The Speech Production research required collecting English speech corpora (see Korpus… 2011): (1) A subset of English speech samples read by native speakers of Chinese, Korean, Japanese, and Russian (35-40 subjects in each language; aged 17-25; senior University students majoring in English at Dalian University of Foreign Languages, China; Hokuriku University, Japan; Kyungnam University, Republic of Korea; Far Eastern Federal University, Russia; L2 skills: intermediate), collected in order to identify phonetic features in the corresponding non-native English varieties. (2) A subset of English speech samples read by American speakers (8 subjects; aged 30-55; visiting professors and fellow scholars at Far Eastern Federal University, Russia). The samples included: a) the same English texts as read by the East Asian and Russian participants, to be further employed as patterns for comparison with the non-native samples, and b) specially designed texts, partially borrowed from Chwat (1994), containing potential phonetic difficulties, to be further used as stimuli in the experiment on native English speech perception by East Asian bilinguals [7]. The Speech Perception research included the following steps: (3) Auditory analysis of the elicited non-native English speech samples by two categories of subjects: American speakers (who had previously participated in our Speech Production research) and Russian teachers of English Phonetics (5 subjects; aged 25-55; Far Eastern Federal University, Russia), for assessing the degree of English language proficiency of the subjects and determining the scope of phonetic variation. As a result, three groups of English speakers were identified by the listeners, namely, basilectal [8], mesolectal, and acrolectal. Phonetic variation features (compared to native American speech samples) were further examined only for the mesolectal non-native speech samples, on the assumption that phonetic representations of foreign accent would be most salient in speakers with intermediate L2 command, since at this level of L2 proficiency bilinguals tend to use appropriate grammar and vocabulary while still displaying rather strong L1 phonological transfer.
(4) Auditory analysis of the American English speech samples by East Asian and Russian participants (35-40 speakers in each language; aged 17-25). The experiment elicited numerous cases of perceptual resegmentation of the stimuli on the part of East Asian bilinguals. This part of the experiment was critically important for our research, since we treat speech perception and production as more or less isomorphic processes that together can unveil a host of phonetic difficulties in L2 acquisition attributed to transfer. Moreover, the data obtained have clear implications for understanding loan phonology, as it appears to be largely dependent on the L2 perception of the bilinguals who carry out language borrowing. Finally, to support our theoretical findings on the likelihood of syllable-related phonological transfer in different situations of contact between English and East Asian languages (compared to Russian), and to prove the validity of the Speech Production and Speech Perception research, instrumental-phonetic methods with elements of electro-acoustic analysis [9] of speech samples were applied (Bondarenko et al. 2007, Zavyalova 2018). Based on the research described above, which provides the ground for our hypothesis on the underlying syllable code conflict as a trigger of diverse phonological transfer occurrences in East Asian varieties of English, the present study focuses on revealing correlative phonological transfer manifestations in phonographic adaptations of English loanwords in East Asian languages, compared to Russian. Firstly, we analyze the main syllabic rules in English as a donor language and outline the restrictions on them in the recipient languages. To provide examples and comparisons for the second part of our study, in which the different types of phonographic adjustments in loan phonology are grouped, a corpus of English loanwords (200 in each recipient language) was formed. The sources include: the Tuttle New Dictionary of Loanwords in Japanese (Taeko 1994) and the Online Japanese Dictionary of Foreign Words (2016) for Japanese; the electronic dictionary ABBYY Lingvo 12 Software (2006); and the Etymological Dictionary of the Russian Language (Vasmer 1956) together with the Dictionary of Anglicisms of the Russian Language (Dyakov 2010) for Russian. English borrowings in East Asian languages are analyzed in Romanized spellings: Romaji, Mandarin Pinyin, and the Revised Romanization of Korean (RR), respectively. Their counterparts in Russian, used for comparison, are spelled in Cyrillic. Parallels with regular pronunciation patterns of English words found in the corresponding varieties of English in our Russian-Asian speech corpus (or with phonetic forms constructed on the basis of previously revealed regularities) are drawn throughout the description. [7] As speech production and speech perception are viewed as two inseparable modes of speech interaction (Casserly & Pisoni 2010), both processes are relevant for the study of phonological transfer in various situations of language contact, including non-native speech and the phonographic adaptation of loanwords. [8] Basilect, mesolect, and acrolect are sociolinguistic terms which, in this context, correspond to elementary, intermediate and advanced second-language proficiency levels. [9] The computer program used in the research, Praat 5.0.5, is software developed at the Institute of Phonetic Sciences, University of Amsterdam; it is specially designed for phoneticians to assist in analyzing acoustic features of speech (http://www.fon.hum.uva.nl/praat/).
As our research was initially syllable-oriented, the choice of the English loanwords used in the comparative analyses of phonographic adaptation methods was determined by the complexity of syllable structure in the donor language, both in terms of consonant clustering in the onset and/or coda of the syllable (CCV-, CCCV-, CVC-, CCVC-, CCCVC-, CVCC-, CCVCC-, CCCVCC-, VC-, VCC-, VCCC-types) and in terms of the phonotactic restrictions within a syllable, with English syllable boundary characteristics also taken into consideration. According to Faircloth and Faircloth (1973: 78), the percentage distribution of closed syllables in English is estimated as follows: 30.22 (CVC), 16.34 (VC), 5.55 (CVCC), 2.84 (CCVC), 0.72 (VCC), 0.60 (CCVCC), 0.24 (CCCVC), 0.19 (CCCVCC), 0.12 (CVCCC), and 0.02 (CCVCCC). In contrast, the East Asian languages under study display strong and principled limitations on consonant clustering, both in syllable onsets and in codas. In Russian syllables, consonant clustering is allowed, with the phoneme sequences within clusters following the rules of syllable phonotactics. One more difference between the borrowing languages under study is that, being syllabic by nature, East Asian languages tend to have a syllable restructuring (resyllabification) constraint in polysyllabic words (Derwing et al. 1993), while the phonemic character of the Russian language, along with its synthetic typology (Arakin 2005, Zavyalova et al. 2016), allows for the syllable restructuring phenomenon in polysyllabic derivatives: e.g., стол ("(a) table") /stol/ (CCVC), but два стола ("two tables") /sto.la/ (CCV.CV), столовый (Adj., "relating to table") /sto.lo.vyj/ (CCV.CV.CVS). In English, contrastively, syllable organization displays a feature demonstrating the strong dependence of coda consonants on the type of vowel nucleus in a syllable under stress: when the vowel is checked, it attracts the following consonant, forming a closed syllable. This phenomenon is also known as nucleus-vowel-length dependence: short vowels can occur only in closed syllables. When the vowel nucleus under stress is long, the following consonant forms the onset of the following unstressed syllable. See examples in Table 2. Since the syllable code in the East Asian languages and Russian prescribes mostly CV or CV(S) models, the boundaries of English closed syllables with checked vowel nuclei in polysyllabic words are not expected to be observed in English speech production (or perception) by East Asian and Russian bilinguals, or in the phonographic adaptation of English loanwords by the recipient East Asian languages and Russian. At the same time, no consonant cluster restructuring is predictable on the part of the Russian language, as compared to the East Asian languages, in the language-contact contexts under study. Another feature of English-specific syllabification which is lacking in East Asian languages and Russian is related to a particular type of syllable formed by the sonorants /l/, /n/ and /m/, which may be preceded by a consonant (e.g. little /'lɪt.l/, table /'teɪ.bl/, garden /'gɑ:.dn/, rhythm /'rɪð.(ə)m/, etc.). In view of the linguistic differences described above, our prediction is that no such syllables are likely to be formed in loan words borrowed from English or in English L2 production (or perception).
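The C/V-template bookkeeping used above can be mechanized for exploratory work; the toy Python function below maps a broad phonemic transcription onto a C/V skeleton and tallies template frequencies. The segment inventory and the example items are simplified assumptions for illustration, not the coding scheme actually used in the study.

# Toy C/V-template extraction over broad transcriptions (illustrative only).
from collections import Counter

VOWELS = set("aeiouɪʊæɑɔəɜ")  # simplified vowel inventory (an assumption)

def cv_template(segments):
    """Map segment symbols to a C/V skeleton, e.g. ['s','t','o','l'] -> 'CCVC'."""
    return "".join("V" if s in VOWELS else "C" for s in segments)

def template_counts(words):
    """Tally C/V templates over monosyllabic transcriptions."""
    return Counter(cv_template(w) for w in words)

items = [list("stol"), list("dva"), list("strɪk"), list("ɪt")]  # hypothetical items
print(template_counts(items))
# -> Counter({'CCVC': 1, 'CCV': 1, 'CCCVC': 1, 'VC': 1})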
Consistent with previous studies (Calabrese & Wetzels 2009, Paradis & LaCharité 2011), our comparative findings on the peculiarities of English speech production and perception by East Asian and Russian speakers, as well as on the phonographic adaptation of English loanwords by the recipient East Asian languages and Russian, demonstrate clear evidence of a tendency to transform the syllabic and, hence, the rhythmic patterns of an English word (or a rhythmic group), approximating them to the corresponding recipient-language schemes. Our findings are also in line with Campbell (2004: 66), who asserts that non-native phonological "patterns are subject to accommodation, where loanwords which do not conform to native phonological patterns are modified to fit the phonological combinations which are permitted in the borrowing language." Our study revealed the most frequent transformations associated with approximation to native phonological patterns, both in the methods of phonographic adaptation of English loanwords by the recipient East Asian languages and in the L2 phonetic organization of mesolect Asian-English bilinguals (compared to Russian-English ones). Typical correspondence patterns of phonetic modifications in both linguistic contexts are attested by the descriptions and examples below (a toy simulation of the vowel-insertion repairs is sketched after this passage).
- Consonant clusters occurring in the syllable onset (2), (3) and/or coda (1) of an English word cause regular vowel insertion (i.e. onset/coda branching), which results in resyllabification and a change in the rhythmic structure of the word, as exhibited in Table 3.
- Comparable (2) and different (1), (3) transformations, involving vowel insertion and syllable restructuring, are notable in the English-specific type of syllable formed by a sonorant acting as a nucleus (sometimes preceded by a consonant), as shown in Table 4.
- Closed syllable structures (CVC-type) that are prohibited or restricted by the final-consonant distribution rules of the speakers' mother tongues are modified either by omitting the coda (1) or by adding a vowel after it (2), (3), as shown in Table 5.
- In a disyllabic (or polysyllabic) English word, closed syllables are regularly transformed into open syllabic units (CVC. > CV.CV.), with the coda of the preceding syllable becoming the onset of the following one (1), (2), (3), as shown in Table 6.
- Rhythmic restructuring of English words by East Asian speakers is manifested in two ways: a) the stress is assigned (or extra prominence is given) to a non-stressed syllable of a polysyllabic English word, as seen in Table 7; b) the overall rhythmic structure of the word is modified through the syllable restructuring known as plus-segmentation, i.e. the addition of extra vowel sounds in initial consonant clusters and after single-consonant codas (Table 8).
As a result of the described syllabic and consequent rhythmic modifications, or of autonomous rhythmic restructuring due to linguistic differences in rhythm, along with commonplace phonemic substitution, East Asian varieties of English tend to demonstrate the neutralization of distinctions between different lexical units, which leads to the formation of homophonic pairs of lexemes both in English speech production and in its perception by East Asian speakers; see Table 9. Note that the syllabic code of the Russian language does not noticeably conflict with that of English; therefore, homophonic lexical pairs are formed in Russian English mostly due to segmental modifications.
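The vowel-insertion repairs summarized in Tables 3-8 can be mimicked with a toy resyllabifier in the spirit of Japanese-style adaptation. The rules below (a default epenthetic /u/, /o/ after /t, d/, and a permitted nasal coda) are deliberate simplifications for illustration, not a full adaptation grammar; segmental substitutions such as l > r and consonant gemination are left out.

# Toy Japanese-style vowel epenthesis (illustrative assumptions only):
# every consonant must be followed by a vowel except a nasal coda /n/;
# /t, d/ take epenthetic /o/, other consonants take /u/.
VOWELS = set("aeiou")

def epenthetic(consonant):
    return "o" if consonant in "td" else "u"

def adapt(segments):
    out = []
    for i, ch in enumerate(segments):
        out.append(ch)
        nxt = segments[i + 1] if i + 1 < len(segments) else None
        needs_repair = ch not in VOWELS and ch != "n" and (nxt is None or nxt not in VOWELS)
        if needs_repair:
            out.append(epenthetic(ch))  # break the cluster / open the coda
    return "".join(out)

for w in ["straik", "bed", "milk"]:
    print(w, "->", adapt(w))
# straik -> sutoraiku, bed -> bedo, milk -> miluku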
Table 9. Modification-induced homophones in East Asian and Russian Englishes (corpus source) compared to the phonetic adjustment of English loanwords (dictionary source). It is noteworthy that the most salient cases in our list of loanwords under study are those adjustments where the vowel-insertion simplification of donor consonant-clustered syllable structures, prohibited by the phonotactics of the recipient language, takes place at the beginning of the word, consequently inducing its overall rhythmic restructuring. These adjustments lead to the formation of such homophonic pairs in East Asian Englishes as blood = ballad [ˈbæ.lə(d)]. Table 10 below provides our projection of the likelihood of overall syllabic and rhythmic restructuring induced by L1 syllable-related transfer, which commonly occurs in East Asian, Russian and other Expanding-Circle Englishes, and which consistently manifests itself in the phonological adjustments of English loanwords in the corresponding recipient languages. As can be seen in Table 10, overall rhythmic restructuring in both linguistic contexts under study directly depends on the type of resyllabification and on stress relocation within a word. Transformation schemes involving plus-segmentation induce mandatory changes in rhythm, while those associated with the minus-segmentation tendency (e.g. the omission of final consonants) or with the regrouping of phonemic sequences in a polysyllabic word do not lead to noticeable variations in rhythm. Discussion We maintain that the syllabic and prosodic (rhythmic) resegmentation of English loanwords in East Asian languages and the correlative phenomena manifested in East Asian English speech production (as well as in the perception of English speech by East Asian speakers) are both caused by the syllable-coding differences between the East Asian languages under study (particularly Chinese, Japanese, and Korean) and English. It is accepted that phonological typology distinguishes two main groups of languages with respect to the minimal unit of phonetic coding, namely phonemic languages (like Russian, English, and German) and syllabic ones (like Chinese, Vietnamese, and Burmese), in which the syllable demonstrates phonemic features (see, for example, the definition of a syllabeme in Ivanov & Polivanov 1930). However, the important role of the syllable as a speech unit (Bondarko 1969) or as a unit of the higher "mental activity of a speaker" (Ladefoged 1975: 221) is not disputed for phonemic languages. Syllabic structure determines the phonological system of Japanese, which is a mora-syllabic language, and of Korean, which is considered post-syllabic, with a unique alpha-syllabic writing system, Hangeul, relying on both alphabetic and syllabic principles. Although syllable coding in a language cannot be guided by anything but the physiology of speech, there still exist idiomatic rules that make one language sound different from another. Regardless of the phonological or morphological status of the syllable in a given language, there exist particular regulations determining its phonemic organization.
The World Atlas of Language Structures (Maddieson 2013) defines Chinese (Mandarin), Japanese, and Korean as languages with moderately complex syllable structures that "permit a single consonant after the vowel and/or allow two consonants to occur before the vowel," forming CVC and CCV syllable types, where the second of two consonants is commonly limited to being either a "liquid" (/r/, /l/) or a "glide" (/w/, /j/). English and Russian are classified in the Atlas as languages with complex syllable structures, i.e. having "freer combinations of two consonants in the position before a vowel, or which allow three or more consonants in this onset position, and/or two or more consonants in the position after the vowel," producing (C)(C)(C)V(C)(C)(C)(C) syllable types (Maddieson 2013). So it seems natural that "inconvenient" complex English syllables are regularly transformed into more "convenient" moderately complex ones in East Asian borrowings, which is notable both in loanword phonology and in the English speech of East Asian bilinguals. No such customary adjustment is marked on the part of Russian English or Russian as a recipient language, which is closer to English in terms of the allowable phonemic complexity of the syllable. There are two main reasons to consider the correlation between the phonological processes of adjustment in borrowing and of L1 transfer in L2 acquisition. First, one has to accept that both are induced by language contact and, more importantly, that both happen in, or via, a bilingual individual who acts as a speaker and as a listener of two languages. It must be pointed out that the phase which precedes both loan adaptation and L2 production is the perception of L2 words (and of speech in general), which is regulated by the bilingual's auditory system, "pre-tuned" by the acoustic properties of the native language. Speakers whose mother tongues do not allow consonant-clustered syllables, or whose mother tongues exhibit different phonotactic rules within a syllable, will unavoidably perform perceptive restructurings of "improper" sound sequences when listening to a foreign language. The same is true of language borrowing, which is performed by bilinguals who, having access to the phonology of the donor language, try to find the closest match among the phonemes and phoneme sequences within a syllable available in the inventory of the borrowing language. Another relationship of English loan phonology to L2 English phonology is that the latter is naturally acquired via the former, i.e. in many cases learners are recommended to increase their L2 English vocabulary through borrowings, considered by some researchers and educators to be "a built-in lexicon of English" (Daulton 2008, 2015, Hara 2011). For the above reasons, we assume that the two processes - loanword adaptation (affected by the borrowing language's phonological constraints) and L2 acquisition (affected by L1 phonological transfer) - are definitely distinct phenomena. Although they occur in different domains (in L1 and L2), these two phenomena are related by a common causality, which is the embodied phonological structure and the "calcified" phonotactic (and prosodic) rules of a bilingual's native language. Both phenomena form three groups of elements in L1 (in the case of loanword adaptation) and in L2 (in the case of L2 acquisition): those specific to L1, those specific to L2, and universals.
With regard to syllable types, East Asian Englishes and the English loan words adapted in Chinese, Japanese, and Korean are characterized by the presence of the universal CV type of syllable and of the types specific to L1. Hence, the analysis of one of the two processes can be efficiently used for the interpretation of the other. Conclusion The paper contains observations on the typological correlations between the phonology of world Englishes in the Expanding Circle and the phonographic adaptations of English loanwords in their speakers' native language systems. It provides explicit accounts of L1 phonological transfer phenomena, regular in both linguistic contexts, which confirm that mechanisms similar to those identified in linguistic borrowing manifest themselves in the syllabic and rhythmic structuring of words and word sequences in the corresponding L2 varieties of English. Our data suggest that loanword phonology can be viewed as a valuable source of evidence for phonological constraints in the recipient language, which, further, might shed more light on language-specific and universal phonological features. The correlation of the two phenomena - loanword phonology and L2 phonology - can be accounted for by the similar phonological restrictions in L1 and the phonological transfer effects in language contact. Most influential seem to be L1 syllable constraints, which breed various related resyllabification phenomena in loan words and in L2 phonology. It should be pointed out that while the structural changes taking place in loan adaptations do not interfere with the loanwords' meanings for speakers of the borrowing languages, comparable transformations in non-native English varieties can undeniably be expected to affect word recognition by listeners. From our results it is clear that, for instance, the regular simplification of syllable-onset consonant clusters via vowel insertion, noticeable in English loans, tends to lead to a total restructuring of the word's syllabic and rhythmic patterns; when this occurs in East Asian Englishes, in many cases forming homophonic English lexemes, it might seriously hinder understanding. Furthermore, in view of the volume of English borrowings in the East Asian languages under study, we also subscribe to the view that loan phonology serves as a potential source of a vast range of modifications in English varieties. The main conclusion that can be drawn from our study is that English borrowings in L1 can help us understand the roots of phonetic variation in East Asian Englishes and, in general, in the phonology of other Expanding Circle varieties of English, which has clear implications for enhancing communication in English-mediated contexts.
6,934.2
2020-12-15T00:00:00.000
[ "Linguistics" ]
Thermodynamical properties of strongly interacting matter in a model with explicit chiral symmetry breaking interactions We analyse the effects of the light and strange current quark masses on the phase diagram of QCD at finite temperature and vanishing baryonic chemical potential, computing the speed of sound, the trace anomaly of the energy momentum tensor, and the fluctuations and correlations of the conserved charges associated with the baryonic, electric and strangeness numbers. The framework is a known extension of the three-flavor Nambu-Jona-Lasinio model, which includes the full set of explicit chiral symmetry breaking (ESB) interactions up to the same order in the large $N_c$ counting as the 't Hooft flavor-mixing terms and the eight-quark interactions. It is shown that the ESB terms are relevant for the description of a soft region in the system's speed of sound and for the overall slope behavior of the observables computed. At the same time the role of the 8q interactions is highlighted. The model extension with the Polyakov loop is considered and the results are compared to lattice QCD data. I. INTRODUCTION The study of the thermodynamical properties of strongly interacting matter is one of the open present-day theoretical and experimental challenges. On the theoretical side, the well known problems of the ab initio approach of lattice QCD (lQCD) when dealing with finite chemical potential, as well as the desire to grasp a deeper understanding of the interplay of the various underlying mechanisms in strongly interacting matter, can be seen as an incentive to use moderately complex effective Lagrangians. On the other hand, at vanishing chemical potential the growing confidence in lQCD results means that agreement with them is increasingly used as a way to establish the success of other theoretical predictions. The values of the current quark masses undoubtedly constitute one of the most relevant inputs in the study of the QCD phase diagram, as the quark condensates are then not exact order parameters for the chiral phase transition. Model estimates of the size of the condensates at the critical transition points show that they may be significantly larger than the bare ones [1][2][3][4][5][6][7], indicating that non-perturbative effects are still at work in spite of the transition.
After the transition, a more or less slow convergence of the condensates to their bare values depends naturally on the size of the current quark masses and on how fast the perturbative regime of QCD is reached, where bulk thermodynamic observables are conditioned by the Stefan-Boltzmann limit pertinent to an ideal quark-gluon gas. The chiral critical end point (CEP), which according to numerous model calculations is expected to occur, separating a region of first order transitions at higher baryon chemical potential µB and lower temperatures T from the crossover behavior at lower µB and higher T, is not yet established. A second order transition is likely to occur at this point in the chiral limit of the light quarks and with an infinitely heavy strange quark [8]. Model results, using quark masses close to physical values, differ drastically regarding its possible location. The hope that its eventual location may be narrowed down in lQCD, using the reweighting technique to extend lattice calculations from µB = 0 to finite values [9], should the nature of the transition be second order, still requires a detailed analysis of the quark mass and volume dependence [10], on which the accuracy of the Lee-Yang zeros of the lQCD partition function hinges [11]. Meanwhile, extensive lQCD studies by different groups report that the algorithmic difficulties that prevented the use of the physical light quark masses have been mostly overcome, as have the spurious taste-breaking effects of staggered discretization schemes, making it possible to achieve a realistic hadron spectrum [12,13]. Their findings converge on the by now commonly accepted understanding that along the µB = 0 line no genuine phase transition occurs. A crossover takes place around T ∼ 155 MeV in a T interval of roughly 20 MeV [14,15]; for recent reports see [16,17]. This value of T has decreased substantially compared to the value quoted one decade ago, T = 192 MeV [18], obtained from calculations with improved staggered fermions for various light-to-strange quark mass ratios in the range [0.05, 0.5] and with a strange quark mass fixed close to its physical value (although with an estimate for the string tension 10% larger than the usually quoted one), while in [19] the crossover temperature was reported to be close to the present day value for the renormalized chiral susceptibility, and about 25 MeV higher for other observables [20,21]. The necessity to devise powerful measures of signatures for this crossover, which could also be useful in the experimental searches, has become a main objective. Besides the chiral condensate and its derivative with respect to the quark mass (the chiral susceptibility), used to probe the restoration of chiral symmetry, fluctuations and correlations of conserved charges have become tools to identify the transition from hadronic to quark-gluon degrees of freedom in the crossover region. In relativistic heavy-ion collider experiments the ratios of such fluctuations are obtained in precision studies at several collision energies as part of the RHIC beam energy scan program [22,23], which focuses on the search for the CEP. Chemical freeze-out parameters are then extracted within a canonical ensemble description of the data. These parameters lie below the freeze-out parameters formerly extracted from particle yields in the Hadron Resonance Gas (HRG) model [24][25][26] and in the single freeze-out model [27][28][29].
A recent analysis using the HRG model to fit the net-kaon fluctuations at RHIC [30] provides experimental evidence that the freeze-out temperatures for strange hadrons could be 10-15 MeV higher than for the light ones. Other measures resort to the equation of state (EoS) for the determination of the trace of the energy momentum tensor and the related specific heat and speed of sound. It was pointed out long ago that the EoS near the QCD phase transition might be very soft compared to that of an ideal pion gas [31][32][33], such that a "longest lived fireball" could be produced in relativistic heavy ion collisions [34], with ideal conditions to study signatures of the quark gluon plasma (QGP) as it goes through the stage of deconfinement. This softest point in the EoS is seen as a minimum in the velocity of sound, which measures the rate of change of the pressure with respect to the energy density. lQCD results show that at the minimum the energy density is only slightly above that of normal nuclear matter [15]. In the present study we address the effects of the light and strange current quark masses in the light of an effective theory of QCD. As is well known from Chiral Perturbation Theory, the canonical mass term represents only the leading order of an expansion in the masses themselves [35][36][37]. The explicit symmetry breaking pattern involves current-quark-mass dependent interactions at higher orders. Our extension first included two kinds of eight-quark chiral symmetry preserving interactions [53,54], which were needed to complete the number of vertices important for dynamical chiral symmetry breaking in four dimensions [55,56] and resolved instability issues related to the model's effective potential reported in [57]. The next extension added the set of explicit symmetry breaking (ESB) multiquark interactions at the specific Nc order considered [58,59]. The phenomenological impact of the ESB terms on the quality of the low-lying spectra of pseudoscalars and scalars, as well as on other related observables, is remarkable in comparison with the models without their inclusion. In particular, the possibility to accurately describe the spectra of the scalar mesons together with a good fit for their strong decays gave us confidence that the model parameters obtained represent an adequate set for the analysis of the model-related QCD phase diagram. In a subsequent work [60] we showed that the extended model leads to the emergence of two CEPs, associated with the light and strange quark condensates, in contrast to the common picture in which only the light condensate relates to a first order transition (except for a small effect on the strange condensate due to the coupling of the two sectors), while the strange condensate displays a crossover behavior. The two CEPs act upon the onset for the formation of strange quark matter, which is shifted to significantly smaller values of µB. Recently, a similar extension of the model interactions within the quark-meson model, taking into account finite size effects, led to the interesting result that quark matter with only u and d quarks may be the stable configuration in a region close to the end of the table of elements [61]. In the face of these new developments one recognizes that the current quark mass effects are far from being fully explored and understood.
The crossover regime which follows after the two CEPs reported in [60], towards lower µB until reaching the µB = 0 line at higher temperatures, is expected to weaken the prominent ESB features mentioned for the critical zone. Nevertheless, we show that some observables are sensitive to the current quark mass values and the ESB interactions. We stress that the numerical values of the current quark masses are intertwined with the dynamics of the ESB interaction terms and must always be considered together in the extended version. The numerical value of the strange current quark mass is reduced by roughly a factor of 2 (making it possible to reach its empirical value and realistic light-to-strange quark mass ratios) when ESB interactions are considered in conjunction with the best fits for the hadronic spectra, as compared to the case without ESB interactions. The obvious drawback of the model is the lack of confinement. However, it should be noted that the NJL model shares its global symmetries with QCD. Therefore it is a reasonable tool to study the critical phenomena even if the location of the CEP may differ. For instance, both the NJL and Polyakov-loop NJL models predict that the critical point is located inside the pion condensed phase. This result is consistent with the QCD no-go theorem [62], which is a rigorous statement in the large-Nc limit. Thus, we suppose that symmetries combined with the 1/Nc approach lead to a reasonable picture of the phase properties of hadron matter even if the confinement mechanism is not included. The comparison with lQCD data shows that there is room to improve the model calculations, mainly with respect to the transition to the Stefan-Boltzmann limit, which occurs in a much shorter temperature interval, as well as a systematic shift of the observables considered to lower temperatures. The main observation is that the model is able to capture important features, such as the presence of a dip in the sound velocity, and the slopes of most observables considered. Here we show that by coupling the model to the gluonic degrees of freedom through the Polyakov loop, the temperature gap between our model predictions and the lQCD data is practically removed, and an improvement is obtained regarding the height of the peak of one of the correlators. An overall good agreement with lQCD data is obtained. The text is organized as follows: after revisiting the model Lagrangian and the thermodynamic potential in section II, we present the thermodynamic observables that we compute and the model fits in sections III, III A and III B, and discuss the results in section IV. Our conclusions are summarized in section V. A. Model thermodynamic potential Although the model Lagrangian has been introduced and applied in previous works, we indicate it here for completeness and refer to [58][59][60][75] for further details. The effective multiquark Lagrangian is expressed in terms of the U(3) Lie-algebra valued field $\Sigma = (s_a - i p_a)\tfrac{1}{2}\lambda_a$, involving the quark bilinears $s_a = \bar q \lambda_a q$ and $p_a = \bar q\, i\gamma_5 \lambda_a q$, with $a = 0, 1, \ldots, 8$, $\lambda_0 = \sqrt{2/3}\times\mathbb{1}$, and $\lambda_a$ the standard SU(3) Gell-Mann matrices for $1 \le a \le 8$. Here q designates the color quark fields, which enjoy the chiral flavor $U(3)_L \times U(3)_R$ global symmetry of QCD in the massless case. In addition, the Lagrangian depends on external sources χ, which generate the explicit symmetry breaking effects.
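For orientation, the chiral-symmetric part of such a multiquark Lagrangian can be written schematically as below; this is a sketch following the general structure described in [58,59], with signs and normalizations left loose, and with the remaining ESB terms (which carry additional insertions of the source χ) omitted:

\begin{align*}
\mathcal{L}_{\rm int} &\sim \frac{\bar G}{\Lambda^{2}}\,\mathrm{tr}\big(\Sigma^{\dagger}\Sigma\big)
 + \frac{\bar\kappa}{\Lambda^{5}}\big(\det\Sigma + \det\Sigma^{\dagger}\big)
 + \frac{\bar g_{1}}{\Lambda^{8}}\big[\mathrm{tr}\big(\Sigma^{\dagger}\Sigma\big)\big]^{2}
 + \frac{\bar g_{2}}{\Lambda^{8}}\,\mathrm{tr}\big(\Sigma^{\dagger}\Sigma\,\Sigma^{\dagger}\Sigma\big),\\
\mathcal{L}_{0} &\sim -\,\mathrm{tr}\big(\Sigma^{\dagger}\chi + \chi^{\dagger}\Sigma\big),
\end{align*}

with the first term giving the LO four-quark interaction, the determinant term the 't Hooft flavor mixing, and the $\bar g_1$, $\bar g_2$ terms the two chiral-symmetric eight-quark interactions.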
In terms of these fields and sources the Lagrangian density reads, to next-to-leading order (NLO) in the large $N_c$ counting, as a sum of the chiral-symmetric multiquark terms and the source-dependent pieces. Under chiral transformations the left- and right-handed quark fields transform with the unitary matrices L and R, respectively; the sources transform as the field Σ. At this stage the sources can be fixed as the current quark masses, $\chi = \tfrac{1}{2}\,{\rm diag}(m_u, m_d, m_s)$, after using the freedom related to the Kaplan-Manohar ambiguity associated with $L_1$, $L_9$, $L_{10}$ [76], which are henceforth set to zero. The couplings $g_i = \bar g_i/\Lambda^n$ (where $g_i$ stands generically for any coupling) carry negative dimensions, given by the powers of Λ, and thus the Lagrangian is non-renormalizable. Here $\Lambda \sim 4\pi f_\pi \sim 1$ GeV [77] is associated with the scale of spontaneous chiral symmetry breaking. As mentioned in the introduction, this Lagrangian contains all non-derivative spin-0 multiquark interactions up to the same counting in large $N_c$ as the 't Hooft interaction, given by the term $\sim \kappa$ in Eq. 2. It has been shown that the large $N_c$ counting scheme selects the same interactions that are also relevant in the effective potential in the limit $\Lambda \to \infty$, i.e. those scaling at most as $\Lambda^0$. The LO contributions are only the 4q interaction $\sim G$ of the original NJL Lagrangian and the canonical quark mass term $\mathcal{L}_0$; all the other terms are $N_c^{-1}$ suppressed with respect to LO. These contain terms which violate the OZI rule ($\kappa$, $\kappa_2$, $g_1$, $g_4$, $g_7$, $g_8$), of which ($\kappa$, $\kappa_2$) break the $U(1)_A$ symmetry and are thus anomalous, as well as interactions which describe four-quark component $\bar qq\bar qq$ admixtures to the $\bar qq$ ones ($g_2$, $g_3$, $g_5$, $g_6$). The bosonization of the Lagrangian is carried out with functional integral techniques in the stationary phase approximation, resulting in the effective mesonic Lagrangian density $\mathcal{L}_{\rm bos}$ at $T = \mu = 0$, written in terms of the scalar, $\sigma = \lambda_a \sigma_a$, and pseudoscalar, $\phi = \lambda_a \phi_a$, nonet-valued fields. The result of the stationary phase integration at leading order, $\mathcal{L}_{\rm st}$, is a series in growing powers of σ and φ. The coefficients $h_a$, $h_{ab}$, ... depend on the current quark masses and encode all the dependence on the coupling constants; see Eq. 12 below for $h_a$. As in the case of the mass parameters, only the $h_a$ with $a = 0, 3, 8$ (or $h_i$, $i = u, d, s$, in the flavor basis) do not vanish [54]. The result of the remaining Gaussian integration over the quark fields is given by $W_{\rm ql}$. The corresponding second order operator in euclidean space involves the constituent quark mass matrix M resulting from the process of spontaneous symmetry breaking, which requires a redefinition of the field $\sigma \to \sigma + M$ such that the new vacuum expectation value vanishes, $\langle\sigma\rangle = 0$. The one-quark-loop action $W_{\rm ql}$ has been obtained by using a modified inverse mass expansion of the heat kernel associated with the given second order operator [78,79]. The procedure takes into account the differences $\Delta_{ij} = M_i^2 - M_j^2$ in a chiral invariant way at each order of the expansion, with $b_i$ being the generalized Seeley-DeWitt coefficients. The $J_i$ are the regularized one-loop euclidean momentum integrals with $i+1$ vertices ($i = 0, 1, \ldots$). We use the Pauli-Villars regularization [80] with two subtractions in the integrand [81]. We take only the dominant contributions to the heat kernel series, up to $b_1$, $b_2$, for the meson spectra and decays, which involve the logarithmically ($I_1$) and quadratically ($I_0$) divergent integrals in Λ.
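A minimal numerical sketch of such regularized one-loop integrals is given below, assuming one common two-subtraction Pauli-Villars convention, f(M²) -> f(M²) - 2 f(M²+Λ²) + f(M²+2Λ²); the exact prescription used in the model follows [80,81], and overall color/flavor factors are omitted here.

# Sketch of the quadratically (I0) and logarithmically (I1) divergent one-loop
# euclidean integrals with a two-subtraction Pauli-Villars regulator.
# Assumed prescription: f(M^2) -> f(M^2) - 2 f(M^2 + L^2) + f(M^2 + 2 L^2).
import numpy as np
from scipy.integrate import quad

PV = [(1.0, 0.0), (-2.0, 1.0), (1.0, 2.0)]  # (coefficient, multiple of Lambda^2)

def I0(M2, L2):
    """~ int d^4k/(2pi)^4 1/(k^2 + M^2), PV-regularized; color factors omitted."""
    f = lambda k: k**3 / (8 * np.pi**2) * sum(c / (k * k + M2 + n * L2) for c, n in PV)
    return quad(f, 0.0, np.inf)[0]

def I1(M2, L2):
    """~ int d^4k/(2pi)^4 1/(k^2 + M^2)^2, PV-regularized."""
    f = lambda k: k**3 / (8 * np.pi**2) * sum(c / (k * k + M2 + n * L2)**2 for c, n in PV)
    return quad(f, 0.0, np.inf)[0]

# Hypothetical scales (GeV^2): constituent mass ~0.37 GeV, cutoff ~0.85 GeV
print(I0(0.37**2, 0.85**2), I1(0.37**2, 0.85**2))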
The model thermodynamical potential Ω in the mean field approximation is written as the sum of a contribution stemming from the stationary phase approximation, $V_{\rm st}$, which contains all the dependence on the model couplings, and of a piece related to the heat kernel quark one-loop integrals $J_{-1}$, which now carry the explicit T, µ dependence (for details please see [83]). The $h_i$, $i = u, d, s$, are solutions of a system of cubic equations; they are equal to one half of the (unsubtracted) quark condensates, i.e. without the second term in Eq. 13. III. THERMODYNAMICAL PROPERTIES OF STRONGLY INTERACTING MATTER Strongly interacting matter is expected to undergo two transitions when subjected to high enough temperature (T) and/or chemical potential (µ): deconfinement and (partial) chiral symmetry restoration. Although a direct connection between the two is still unclear, they are for the most part expected to occur more or less simultaneously [84,85]. The temperature and chemical potential dependence of fluctuations and correlations of conserved charges can serve as a useful indicator of the transition behavior. The fluctuations and correlations of the charges are given respectively by $\chi_2^{x} = \partial^2 (P/T^4)/\partial(\mu_x/T)^2$ and $\chi_{11}^{xy} = \partial^2 (P/T^4)/\partial(\mu_x/T)\,\partial(\mu_y/T)$. Here we will consider the results pertaining to the baryonic number $N_B$, the electric charge number $N_Q$, and the strangeness number $N_S$, where $N_S = -N_s$ (by convention the strangeness number is the negative of the number of strange quarks). The corresponding chemical potentials are related to the quark chemical potentials through $\mu_u = \mu_B/3 + 2\mu_Q/3$, $\mu_d = \mu_B/3 - \mu_Q/3$ and $\mu_s = \mu_B/3 - \mu_Q/3 - \mu_S$. The traced energy-momentum tensor $\Theta^\mu_{\ \mu}$ and the speed of sound $C_s$ are also thermodynamical quantities of interest. Both of these have been evaluated in lQCD, thus we can use them as benchmarks to evaluate the adequacy of our models. They can be obtained respectively as $\Theta^\mu_{\ \mu} = \epsilon - 3P$ and $C_s^2 = \partial P/\partial\epsilon = s/C_V$, with P denoting the pressure, $\epsilon = Ts - P$ the energy density, $s = \partial P/\partial T$ the entropy density, and $C_V = (\partial\epsilon/\partial T)_V$ the specific heat at constant volume. A. Polyakov loop extension Although no gluonic degrees of freedom are present in the NJL model, its extension to the so-called Polyakov-Nambu-Jona-Lasinio model is an attempt to mimic part of their dynamics by considering a static homogeneous background gluonic field in the temporal gauge. The Polyakov loop L, winding around the imaginary time direction with periodic boundary conditions, and its trace in color space, φ (and the charge conjugate $\bar\phi$), are given as $L(\vec x) = \mathcal{P}\exp\big[i\int_0^\beta d\tau\, A_4(\vec x,\tau)\big]$, $\phi = {\rm Tr}_c\, L/N_c$ and $\bar\phi = {\rm Tr}_c\, L^\dagger/N_c$, where $\mathcal{P}$ stands for path-ordering and β = 1/T. In the quenched limit φ is an order parameter for the transition between the confined phase, where the center symmetry $Z_{N_c}$ of $SU(N_c)$ is realized (vanishing traced Polyakov loop), and the deconfined phase, where this symmetry is spontaneously broken [86]. An additional term, the Polyakov potential, must be added to drive this temperature-induced spontaneous breaking. Its form can be determined by fitting lattice QCD observables. We take two choices, a logarithmic form $\mathcal{U}_I$ [66] and an exponential K-Log form $\mathcal{U}_{II}$ [67,74], with parameters shown in Table II. In the latter, the term proportional to K is the Vandermonde determinant, and the exponential term going with $a_2$ is a modification introduced in [74]. We considered a slight modification of this potential, using $\mathcal{U}_{II}[\phi,\bar\phi,T] - \mathcal{U}_{II}[0,0,T]$, which enables the reproduction of the expected vanishing value of $\Omega/T^4$ as we approach the vacuum ({T, µ} = {0, 0}).
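Given a tabulated pressure P(T) at µB = 0, whether from the model or from lattice tables, the derived quantities above follow by numerical differentiation. The sketch below uses a hypothetical toy crossover pressure purely to exercise the formulas; it is not the model EoS.

# Sketch: entropy density, energy density, trace anomaly and squared speed of
# sound from a tabulated pressure P(T) at mu_B = 0, via numerical derivatives.
import numpy as np

def eos_observables(T, P):
    """T and P on a common grid (hypothetical units); derivatives are numerical."""
    s = np.gradient(P, T)        # entropy density, s = dP/dT
    eps = T * s - P              # energy density, eps = T s - P
    theta = eps - 3.0 * P        # trace anomaly, Theta^mu_mu = eps - 3P
    C_V = np.gradient(eps, T)    # specific heat at constant volume
    cs2 = s / C_V                # C_s^2 = dP/deps = s / C_V
    return s, eps, theta, cs2

T = np.linspace(80.0, 400.0, 161)                          # MeV, toy grid
P = (T / 200.0) ** 4 / (1.0 + np.exp((160.0 - T) / 15.0))  # toy crossover EoS
s, eps, theta, cs2 = eos_observables(T, P)
print(cs2.min(), T[np.argmin(cs2)])   # locate the soft point (dip in C_s^2)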
From a practical point of view the extension from NJL to PNJL amounts to the introduction of two new classical fields in the model, φ and $\bar\phi$, the introduction of the Polyakov potential, and a modification of the occupation numbers (see for instance [72] for details on the implementation of the model). At vanishing baryonic chemical potential, the case considered in this work, $\phi = \bar\phi$. B. Parameter fitting The parameters of the model are the current quark masses ($m_u$, $m_d$ and $m_s$), the cutoff (Λ) and the couplings (G, κ, $\kappa_2$, $g_1$, $g_2$, $g_3$, $g_4$, $g_5$, $g_6$, $g_7$ and $g_8$). In the formulation of the model without non-canonical explicit chiral symmetry breaking, NJLH8q, these 15 parameters are reduced to 8 ($\kappa_2$ and $g_i$ with $i = 3, \ldots, 8$ are set to zero). If we neglect isospin symmetry breaking ($m_l \equiv m_u = m_d$) these are further reduced to 7. As was shown in [54], the NJLH8q model can be fitted for several fixed values of the OZI-violating eight-quark interaction coupling $g_1$ (the four-quark interaction strength G is smaller for increasing $g_1$, but the remaining parameters are unchanged) while keeping the mesonic spectra unchanged, apart from a decrease in the σ meson mass for increasing $g_1$ (the scalar mixing angle also changes). Two sets, denoted NJLH8qA and NJLH8qB, are shown in Table I, with the latter corresponding to the highest value of $g_1$ (and conversely the lowest G). In these isospin-symmetric sets ($m_l \equiv m_u = m_d$) we fit the model parameters by imposing a value of $g_1$ and fitting the remaining 6 parameters using the pion and kaon weak decay couplings ($f_\pi$ and $f_K$) and the meson masses ($M_\pi$, $M_K$, $M_\eta$ and $M_{a_0}$). This freedom allowed us to isolate and study the impact of the eight-quark interaction term on the model phase diagram in the chemical potential-temperature, {µ, T}, plane [83]. One of the main highlights of that study was the realization that the CEP is shifted to lower chemical potential and higher temperature with increasing $g_1$. This in turn leads to a substantial reduction in the related crossover temperature at µ = 0 compared to the case with weak $g_1$ coupling, as reported before in [6,89], with the lower values of T complying with lQCD results [19]. The extension to include the non-canonical explicit chiral symmetry breaking interactions introduces 7 new parameters. For the parameter set NJLH8qmA from Table I we chose to impose the values of the current masses ($m_l$ and $m_s$). The remaining 12 parameters can then be fitted by fixing $f_\pi$, $f_K$, the pseudoscalar and scalar mixing angles ($\theta_{ps}$ and $\theta_s$) and the 8 meson masses ($M_\pi$, $M_K$, $M_\eta$, $M_{\eta'}$, $M_{a_0}$, $M_{K^*}$, $M_\sigma$ and $M_{f_0}$), in the isospin limit. Note that the inclusion of the ESB interactions allows one to fit the pseudoscalar as well as the scalar spectra to the empirical data with a high degree of accuracy, together with the weak decay constants and the current quark mass values. This, on the other hand, reduces the former freedom in the interplay of the G, $g_1$ parameters, which is now considerably narrowed down, favoring the strong $g_1$ coupling strength (we are considering here the range 470 MeV ≲ $M_\sigma$ ≲ 500 MeV). One also sees that the increase in $g_1$ is in this case accompanied not only by a decrease in G, but also by a decrease in the ESB couplings $g_4$, $g_7$. IV. RESULTS AND DISCUSSION A. Speed of sound and energy-momentum trace anomaly In Fig.
IV. RESULTS AND DISCUSSION

A. Speed of sound and energy-momentum trace anomaly

In Fig. 1(a) we see a comparison of the temperature dependence of the squared speed of sound at vanishing chemical potential as obtained using lQCD and several parametrizations of our model.

(Table I caption: Model parameters obtained using a regularization kernel with two Pauli-Villars subtractions in the integrand (see [90]). Parameters marked with an asterisk (*) were kept fixed. Several quantities which are either outputs or kept fixed (and used in the fit of the remaining parameters) are listed in the bottom rows: the weak decay couplings ([f_π] = [f_K] = MeV), the meson masses of the low-lying scalars/pseudoscalars, the dynamical quark masses (given in MeV) and the corresponding chiral condensates ⟨q̄q⟩_i (given in MeV³), taking into account the subtraction of the contribution coming from the current mass, see Eq. (13). The pseudoscalar and scalar mixing angles θ_ps/θ_s are given in degrees. Sets NJLH8qA and NJLH8qB include up to eight-quark interactions but no non-canonical explicit chiral symmetry breaking terms, whereas set NJLH8qmA does include these terms. The columns are m_l, m_s, G, κ, κ₂, g₁, ..., g₈ and Λ; the numerical entries are not reproduced here.)

Although a complete quantitative agreement seems impossible with the considered parametrizations, an approximation of the general qualitative behavior is obtained: the squared speed of sound increases with temperature, starting at zero, until a critical point at which it dips into a local minimum (for 3 of the sets), and then rises again, going asymptotically to the Stefan-Boltzmann limit. For temperatures above this local minimum the obtained speed of sound overshoots the lQCD result, whereas for lower temperatures we obtain lower values. Note that, as one expects the speed of sound to go to zero in the limit of vanishing temperature and chemical potential, the lQCD results should be followed by a dip towards zero at lower temperatures. The model velocity of sound displays a soft point close to the one of lQCD, at around T ∼ 150 MeV. For this feature to be present in the model, the parameter g₁ of the OZI-violating 8-quark interactions is required to have a certain strength, g₁ ∼ 3000-4000 GeV⁻⁸ (see set NJLH8qB without ESB interactions, and sets NJLH8qmA and NJLH8qmB with ESB, in Table I); at the weak coupling g₁ = 500 GeV⁻⁸ of set NJLH8qA the relative minimum is absent and the velocity of sound shows a monotonic decrease.

It had already been noticed some time ago that the strength of the parameter g₁ has an impact on the number of degrees of freedom [83]; a rather strong (more than 50%) suppression of the artificial quark excitations (due to the lack of confinement in the model) at T/T_c > 0 was observed, in comparison to a PNJL model calculation [70]. Thus it is understandable that this property manifests itself in the occurrence of the relative minimum in the velocity of sound, as the model partially emulates the missing degrees of freedom attributed to the onset of deconfinement. The inclusion of the ESB parameters does not change this important property, in spite of the fact that the accurate fit of the low-lying spectra and related properties strongly constrains the parameters of the model in the vacuum. On the contrary: as mentioned before, the ESB interactions, together with the requirement of having good fits of the spectra, rule out the smaller values of the g₁ strength.

(Figure 1 caption: results obtained using the parameter sets from Table I; dashed lines correspond to the NJLH8q sets whereas solid lines correspond to the NJLH8qm sets (respectively without and with non-canonical explicit chiral symmetry breaking). The markers, labeled WBEoS and HotQCDEoS, correspond to continuum-extrapolated lQCD results taken respectively from [14] and [15].)
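All quantities compared in Fig. 1 follow from the pressure. A minimal numerical sketch, assuming only a tabulated P(T) at μ = 0 (here a made-up interpolation toward the Stefan-Boltzmann limit, not model output), recovers s, ε, C_V, C_s² and the trace anomaly by finite differences:

```python
import numpy as np

# Toy pressure table at mu = 0; in the model this would come from
# minimizing the thermodynamic potential Omega at each temperature.
T = np.linspace(0.05, 0.50, 200)  # GeV
P = 31.5 * (np.pi**2 / 90.0) * T**4 / (1.0 + np.exp((0.17 - T) / 0.02))

s   = np.gradient(P, T)                 # entropy density   s = dP/dT
eps = T * s - P                         # energy density    eps = T s - P
C_V = np.gradient(eps, T)               # specific heat     C_V = d(eps)/dT
cs2 = s / C_V                           # speed of sound squared, dP/d(eps)
trace_anomaly = (eps - 3.0 * P) / T**4  # (eps - 3P)/T^4, as in Fig. 1(b)
```

At μ = 0 the chain rule gives C_s² = (dP/dT)/(dε/dT) = s/C_V, which is what the last two lines implement.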
The fact that the former freedom in the model parameter g₁ is narrowed down, and specifically to the values that describe the soft region in the speed of sound, can be seen as a major result regarding the phenomenological importance of including the ESB interactions in the model. Apart from the relative minimum in the speed of sound, notice that the sets with ESB display a slight dip after the steep rise and before flattening out. This is a remnant of the two CEPs encountered in the model, as described previously, and is still visible at μ_B = 0. This behavior may be guessed to be very subtly present in the lQCD points as well, although further investigation would be required to clarify this point.

Moving to Fig. 1(b), one sees that, compared with the lQCD data, the height of the peak of the trace of the energy-momentum tensor is improved, as well as the slope before the transition, by the inclusion of the ESB terms (or by the selection of the strong g₁ coupling without ESB). To understand why this is natural to expect within our model, we recall that the trace of the energy-momentum tensor is controlled by the effective number of degrees of freedom, ν(T) = (90/π²) [P(T) - P(0)]/T⁴. As mentioned above, the number of degrees of freedom for weak and strong g₁ coupling was obtained in [72, 83] as a function of T/T_c, where T_c is the crossover temperature. Converting by this factor, one obtains the slope behavior displayed in Fig. 1(b). The slope after the peak is steeper in the model than in lattice calculations, but this is also expected, since the model approaches the Stefan-Boltzmann limit faster. Furthermore, since in the non-Polyakov-loop-extended case there are no gluonic degrees of freedom present, this limit corresponds to a lower value (lim_{T→∞} -Ω_NJL/T⁴ = 31.5 π²/90, whereas lim_{T→∞} -Ω_PNJL/T⁴ = 47.5 π²/90; see for instance [72, 83]).

The energy density ε and pressure P, as well as their derivatives with respect to the temperature T, the specific heat C_V and the entropy density s, are depicted in Figs. 2. Despite the reasonable agreement, apart from a shift towards lower temperatures, observed for the energy-momentum trace anomaly Θ^μ_μ (see Fig. 1(b)), in the individual thermodynamical quantities involved, P and ε, as well as s (see Figs. 2(a), 2(b) and 2(c)), the effect of the missing degrees of freedom is clearly present in their asymptotic behavior. In the cases with stronger eight-quark interactions (NJLH8qB, NJLH8qmA/B) the specific heat (see Fig. 2(d)) deviates from the lQCD result, with a marked peak around the transition region reflecting the faster transitional behavior. The slope of the curve for temperatures lower than the transition is, however, better reproduced in the cases with stronger g₁.

B. Fluctuations and correlations of conserved charges

In Fig. 3 the fluctuations of several conserved charges are shown. The main gross feature for the baryonic susceptibility χ^B_2 in Fig. 3(a) is that the slope improves significantly for the sets with strong g₁ coupling, in comparison with the lQCD data. One sees further a noticeable change of slope between the steep rise and the flattening of the curves for the sets including ESB interactions, NJLH8qmA and NJLH8qmB; a hint of such behavior seems to be present in the HotQCDEoS data as well.

(Figure caption: results obtained using the parameter sets NJLH8q and NJLH8qm. The markers, labeled WBEoS and HotQCDEoS, correspond to continuum-extrapolated lQCD results taken respectively from [14] and [15].)
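The two Stefan-Boltzmann values quoted above can be checked by standard degree-of-freedom counting (a quick consistency check using textbook statistical weights, not a result specific to this paper):

```latex
-\frac{\Omega}{T^4}\;\xrightarrow{\;T\to\infty\;}\;\nu\,\frac{\pi^2}{90},
\qquad
\nu_{\rm quarks}
  = \underbrace{2}_{\text{spin}}\times\underbrace{2}_{q,\bar q}
    \times\underbrace{3}_{\text{color}}\times\underbrace{3}_{\text{flavor}}
    \times\frac{7}{8} = 31.5,
\qquad
\nu_{\rm gluons} = \underbrace{2}_{\text{pol.}}\times\underbrace{8}_{\text{color}} = 16 .
```

The NJL limit is thus 31.5 π²/90, while the Polyakov-loop extension adds the gluonic 16, giving (31.5 + 16) π²/90 = 47.5 π²/90, in agreement with the numbers quoted above.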
In a similar fashion, the slopes improve for the fluctuations of strangeness, χ^S_2, shown in Fig. 3(c), for the sets NJLH8qmA and NJLH8qmB. In contrast, the slope for these sets is much steeper than the lattice points in the calculation of the electric charge fluctuations χ^Q_2 displayed in Fig. 3(b). By expressing these fluctuations in terms of the quark number susceptibilities, one sees that the weight of χ^u_2 in χ^Q_2 is larger than in χ^B_2 by a factor of 4. We have looked at the individual contributions within the model and found that the transition for χ^u_2 occurs faster than for χ^s_2, as expected, and that the slope increases when the ESB interactions are taken into account, while the crossed contributions vanish, reflecting the fact that the model has no gluonic degrees of freedom [93, 95] (the Polyakov loop introduces such a correlation; see Section IV C below). So it seems that the slope of χ^u_2 dominates the scene in χ^Q_2, due to the weighting factor, while the distribution of weights in χ^B_2 leads to the correct slope in comparison to lattice results. The χ^S_2 provides a clean probe of the strange quark number susceptibility, as this is the only contribution; for this case one sees that the lQCD slope is well reproduced with the ESB model sets.

Regarding the correlations displayed in Fig. 4, the same effect seems to be at work for the correlation of baryonic and electric charges, χ^{BQ}_11, shown in Fig. 4(a); it displays a too fast increase compared to lQCD for the sets with ESB breaking. As opposed to this, the correlations of baryonic and strangeness charges, χ^{BS}_11, in Fig. 4(b) show a slope in conformity with lQCD. The correlation of strangeness and electric charges also gets improved for the ESB sets, Fig. 4(c).

(Figure caption: the markers, labeled WBEoS and HotQCDEoS, correspond to continuum-extrapolated lQCD results taken respectively from [93] and [94].)

This can also be understood by looking at the dependence of these correlations on the quark number susceptibilities: with B = (N_u + N_d + N_s)/3, Q = (2N_u - N_d - N_s)/3 and S = -N_s one has χ^{BS}_11 = -(χ^{us}_11 + χ^{ds}_11 + χ^s_2)/3, χ^{QS}_11 = -(2χ^{us}_11 - χ^{ds}_11 - χ^s_2)/3 and χ^{BQ}_11 = (2χ^u_2 - χ^d_2 - χ^s_2 + χ^{ud}_11 + χ^{us}_11 - 2χ^{ds}_11)/9. Since the correlations χ^{us}_11 are smaller in magnitude in lQCD than the χ^i_2 (i = u, s) [93] (χ^{ud}_11 was shown to be small in the flavor SU(2) case [96]), and vanish identically in the model, the only dependence left is on χ^s_2 for χ^{BS}_11 and χ^{QS}_11. We have seen that the slope for χ^s_2 reproduces well the corresponding lQCD slope, which explains the satisfactory behavior of the slopes of χ^{BS}_11 and χ^{QS}_11 as well. The situation is different for χ^{BQ}_11, which depends on χ^u_2, for which a too steep slope occurred compared to lQCD.

C. PNJL extension

The impact of coupling the quark degrees of freedom to the gluonic sector, using the PNJL model extension with the two types of potential, is discussed in this section. The gross feature is a systematic shift of all the curves describing the observables of the last subsection to higher temperatures, which is an important effect in bringing most of the observables related to fluctuations and correlations of the charges closer to the lQCD curves. However, the effect on the velocity of sound and the trace of the energy-momentum tensor depends strongly on the type of Polyakov loop potential used. Let us discuss these observables first.
In Fig. 5(a) the velocity of sound is displayed, calculated with the Polyakov loop potential U_I of Eq. 17. It shows that, independently of which NJL parameter set of Table I is considered, a too deep relative minimum for the velocity of sound occurs, about a factor of 2.5 smaller in magnitude in comparison with lQCD. This result supersedes all the nuances discussed previously in relation to the ESB terms. In Fig. 5(b) one sees that the peak of the trace of the energy-momentum tensor is roughly twice the lQCD value. These dominating characteristics are also present in the polylogarithmic variant of the Polyakov loop potential of [73], whose results are therefore not shown.

Contrary to this, the potential U_II of Eq. 18 discriminates between the different NJL sets. Minima occur for the sets with ESB interactions (and for the large g₁ coupling without ESB), see Fig. 6(a): two shallow minima, and a more pronounced one for the ESB set with the stronger g₁ coupling. The set PNJLH8qA without ESB terms, corresponding to weak g₁ coupling, does not display a minimum in the velocity of sound, as also verified in [97] using U_II, and as was the case without the Polyakov loop extension, see Fig. 1(a). Regarding the trace of the energy-momentum tensor, it turns out to be fairly well represented in comparison with the lQCD results using U_II, see Fig. 6(b).

The individual thermodynamical quantities contributing to Fig. 6(b), ε and P, as well as their derivatives with respect to the temperature T, C_V and s, are depicted for the case of the potential U_II in Figs. 7. Overall, the correspondence of the presented quantities with lQCD is quite satisfactory (note that the inclusion of the extra degrees of freedom enables the correct asymptotic behavior for P, ε and s). The slight change of slope in ε around T = 0.18 GeV, compared to lQCD, results in a visible peak in C_V. For C_V and Θ^μ_μ the quark-interaction parameter set which presents the best fit is NJLH8qA. For the other quantities the difference between quark-interaction parameter sets (mainly in the transition region) is too small for the purpose of comparison with lQCD results. We omit the corresponding figures for the choice of U_I, since the calculations resulted in large deviations from the respective lQCD data, as one would expect from Fig. 5(b), and turn out not to be very instructive.

Turning to the fluctuations and correlations of the different charge numbers N_B, N_Q, N_S, one observes that all observables which had a good slope in the NJL model, i.e. χ^B_2, χ^S_2, χ^{BS}_11 and χ^{QS}_11 for the sets with ESB interactions, get shifted to higher temperatures and agree fairly well with the lQCD data for both Polyakov loop potential implementations, see Figs. 8-11. Unfortunately the slope of χ^Q_2 is not improved with either potential, see Figs. 8(b) and 9(b). Finally, we show in Fig. 12 that the coupling of the quark and gluonic degrees of freedom leads to a non-vanishing correlation between the light and strange quark numbers, albeit smaller than in lQCD, with the U_II potential yielding a larger fraction. We also remark that this quantity is not sensitive to the details of the parametrizations in the quark sector.

V. CONCLUSIONS

We have used the three-flavor NJL Lagrangian, enlarged in recent years to accommodate systematically current quark mass effects at NLO in the large-N_c counting scheme, to address several thermodynamic observables. These explicit symmetry breaking (ESB) interaction terms are of the same order as the 't Hooft U_A(1)-breaking anomalous contribution and the previously introduced symmetry-preserving eight-quark interactions.
It has been shown that the ESB terms play a very important role in the accurate description of the characteristics of the pseudoscalar and scalar mesons. This opened for the first time the possibility to study the model phase diagram of QCD with a set of parameters which reproduces the empirical spectra, together with current quark masses that fit the actual PDG values, allowing one to narrow down the uncertainties related to the model parameters. While the model systematically reaches the Stefan-Boltzmann limit too fast as compared to the lattice results, and is systematically shifted to lower temperatures as compared to lQCD, there are some relevant features which are reproduced. We highlight the main results:

(i) The ESB terms together with the realistic spectra select a region of parameters with strong OZI-violating 8q coupling g₁. We recall that without the ESB terms there was an interval of values for this coupling which, in an interplay with the 4q coupling G, left the spectra unchanged except for the σ(500) mass, which got reduced for increasing g₁. The freedom in g₁ was accompanied by a sliding CEP position in the model QCD phase diagram.

(ii) In the strong-g₁ coupling regime enforced by the ESB terms, the velocity of sound displays a soft point, as predicted by relativistic heavy-ion models and lQCD. This relative minimum is absent in the NJL model that includes only the 4q and 't Hooft interactions, or weak g₁ couplings.

(Figure 5 caption: In 5(a) the model Polyakov loop extension with potential U_I of Eq. 17 is shown for the squared speed of sound as a function of temperature ([T] = GeV) at vanishing chemical potential, as obtained using the parameter sets from Table I. In the legend a "P" has been attached at the beginning of each parameter set name, meaning that the Polyakov loop extension has been applied; PNJLH8qA and PNJLH8qB correspond to the sets without the ESB terms, while PNJLH8qmA and PNJLH8qmB include the ESB interactions. The markers, labeled WBEoS and HotQCDEoS, correspond to continuum-extrapolated lQCD results taken respectively from [14] and [15]. In 5(b) the energy-momentum trace anomaly is shown for the same Polyakov loop potential.)

(Figure 6 caption: In 6(a) and 6(b) the same observables are shown as in 5(a) and 5(b), but obtained with the Polyakov loop potential U_II of Eq. 18. The markers, labeled WBEoS and HotQCDEoS, correspond to continuum-extrapolated lQCD results taken respectively from [14] and [15].)

(iii) The trace of the energy-momentum tensor displays a peak with a height close to the lattice results; the slope of this quantity gets improved as compared to the model without the ESB interactions. However, although the strong-g₁ coupling regime describes overall better slopes, it leads to a visible peak in the transition region for the related quantity C_V, which is not favored by the lQCD data.

(iv) The slopes of the susceptibilities χ^B_2, χ^Q_2, χ^S_2 of the conserved baryonic, electric and strange charges are sensitive to the weighting factors of the quark number susceptibilities χ^u_2, χ^d_2, χ^s_2 that enter their definitions. We find that the slopes for χ^B_2 and χ^S_2, as well as for the correlation involving these two charges, χ^{BS}_11, get substantially improved, while the slope is too steep for χ^Q_2. The observable χ^S_2 is a clean probe for the slope of the strange quark susceptibility χ^s_2, which agrees well with the corresponding lQCD slope.
(v) Finally, by coupling the quarks to the gluonic degrees of freedom via the Polyakov loop, we observe that the temperature gap between the NJL and the lQCD curves practically disappears and the overall characteristics of the lQCD data are rather well reproduced. For the trace of the energy-momentum tensor, the Polyakov loop potential U_II is better suited to describe the lQCD data than the potential U_I, within our model calculations.

(Figure caption, fragment: the markers, labeled WBEoS and HotQCDEoS, correspond to continuum-extrapolated lQCD results taken respectively from [14] and [15].)

(Figure 8 caption, fragment: ... in 8(b) electric charge (χ^Q_2) and in 8(c) strangeness (χ^S_2). The same notation as in 5(a) is used for the lines. The markers, labeled WBEoS and HotQCDEoS, correspond to continuum-extrapolated lQCD results taken respectively from [93] and [94].)

(Figure 10 caption, fragment: ... and in 10(c) electric charge and strangeness (χ^{QS}_11). Same notation for the lines as before. The markers correspond to continuum-extrapolated lQCD results taken from [94].)

(Figure 12 caption: Correlation of the up and strange charges (χ^{us}_11) compared to continuum-extrapolated lQCD results taken from [93] (labeled WBEoS). As this quantity is almost insensitive to the parametrization of the quark interactions, we chose to display the effect of the choice of Polyakov potential (for the fermionic part we chose set NJLH8qB). Without the inclusion of the Polyakov loop this quantity vanishes.)
10,460.8
2018-06-01T00:00:00.000
[ "Physics" ]
Cascade Superfluorescence in Er:YLF We report the analysis of paired photon pulses arising from two cascading transitions in continuously pumped Erbium-doped YLiF$_4$ 1% and 0.01% crystals at 1.6 K. The dependence of the pulse peak intensity on the squared number of involved Erbium ions, between 10$^{11}$ and 10$^{13}$, unambiguously identifies the cooperative nature of the two pulsed emissions, which are generated by the subsequent, spontaneous formation of coherent states. The observed fluctuations of the time interval between the paired pulses and, most importantly, its correlation with the duration of the second pulse demonstrate that the Erbium ion coherence is indeed seeded by vacuum fluctuations.

The coherent coupling of N identical excited atoms results in a cooperative spontaneous emission in which the atomic transition rate is accelerated by a factor N [1]. Rather than being prepared through external coherent sources [2,3], the coupling can be seeded in uncorrelated identical emitters by their own independent spontaneous emission. Atomic coherence then spreads exponentially through the coherent emission that finally determines the de-excitation of the atomic ensemble. The resultant burst of coherent radiation is called superfluorescence (SF) [4]. From the first demonstration of SF in HF gas [5] up to the recent achievements in high-pressure gases [6], nanostructured materials [7] and colour centres in diamond [8], pulsed excitation has been employed to accomplish the population inversion. A delayed directional emission has been sought as a signature of SF, as the maximal coherence is achieved after a random delay time τ_d [9,10]. However, it is rarely considered that the pulsed excitation could determine the simultaneous formation of several independent coherent subensembles. Since their related delay times are likely to be similar, the observed SF radiation burst could be a temporal superposition of a few pulses. The result is a deviation from the expected sech-squared temporal profile, which introduces a systematic error in the estimate of the effective cooperativity factor among the radiators. Besides an excessive size of the active medium [11], the formation of independent subensembles is due to the different transition energies of the emitters. For instance, the superposition of two or three sech-squared pulses, ascribable to three well-defined vibrational mode frequencies, has been detected in CH$_3$F gas [12]. The effect has also been invoked to explain the observed amplitude modulations in SF pulses, arising from the interference of emissions by different hyperfine levels in Cs [13], Rb [14] and Na [15] vapours. Indeed, in inhomogeneously broadened systems, as is the case for solid-state materials, the degree of cooperativity can hardly be inferred from the complex SF temporal profile [16]. The superposition of the coherent emissions can be avoided by cw pumping of the population inversion, since there is then no temporal correlation in the formation of different coherent subensembles. Pure sech-squared pulses have been reported under this pumping condition in Er:Y$_2$SiO$_5$ (Er:YSO), and the radiated intensity is in agreement with that expected for a single macrocoherent state [17].

In this work we study paired SF pulses emitted by Er:YLiF$_4$ (Er:YLF) crystals doped at 1% and 0.01% in a cascading-transition scheme. This phenomenon is referred to as cascade superfluorescence (CSF), as the atoms, driven to an intermediate level by a first SF process 2 → 3 (see Fig. 1), sequentially develop a second SF transition, 3 → 4.
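For orientation, in the idealized mean-field description of SF (textbook results along the lines of the references cited above, not expressions taken from this paper) the burst has a hyperbolic-secant-squared profile,

```latex
I(t) \;\propto\; \frac{N^2}{4\,\tau_R}\,
\operatorname{sech}^2\!\left(\frac{t-\tau_d}{2\,\tau_R}\right),
\qquad \tau_R \propto N^{-1},
```

so that the peak intensity scales as N² while the pulse shortens as 1/N; the SF condition recalled below, τ_R ≪ T₂, ensures that the cooperative emission develops before dephasing destroys the atomic coherence.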
CSF was attained only in a scant number of gaseous systems [14, 18-21], due to the demanding conditions on the preparation of the excited state. If a degree of coherence is initially induced between levels 0 and 2, the onset of the superfluorescent transition 2 → 3 is accompanied by the coherent emission 3 → 0, called yoked superfluorescence, thereby suppressing the 2 → 3 → 4 CSF [21,22]. An advantage of solid-state systems is the possibility to exploit multiphonon relaxation to accomplish a population inversion without initial coherence. However, the same process could also be responsible for a decrease of the steady-state population, which is instead essential to initiate SF under cw excitation. Actually, the SF regime requires that the characteristic superfluorescent time τ_R, which scales as N⁻¹, be much shorter than the atomic coherence time T₂. We therefore select an active material with a low-phonon-energy host matrix [23] in order to reduce the multiphonon relaxation rate [24]. In addition, the sample is maintained at 1.6 K inside a superfluid-helium cryostat, described in Ref. [25], to ensure a long coherence time [26,27]. Its three optical windows allow for laser pumping and for investigating the pulses emitted in the forward direction, along with monitoring the isotropic incoherent emission [17].

These cascading transitions give rise to paired pulses, as shown in Fig. 3. As the first pulse triggers the acquisition in the time domain, a stochastic distribution of the delays of the following pulse at 1534 nm is recorded. In our measurements, the upper level superfluoresces to a level that does not coincide with the starting level of the second SF. The time interval τ̄_d between the cascade pulses therefore includes a time t₀, proportional to the non-radiative $^4$I$_{13/2}$(1) → $^4$I$_{13/2}$(0) transition time, in addition to the usual delay time τ_d of the second SF.

A competing process that shares some features with SF is amplified spontaneous emission (ASE). In particular, the observation of directional, sech-squared pulses, along with a superlinear dependence of the time-averaged intensity on the level population, is not sufficient to discriminate SF from ASE. The latter dominates over SF at increasing temperatures [29], and SF could hardly be claimed in Er:YLF at temperatures up to 60 K [30]. The definitive proof of cooperative emission is given by the N² dependence of the pulse peak intensity R_p. As detailed in Ref. [17], when the population inversion density is high, the single-pass gain that the SF pulse experiences in its propagation through the medium must also be considered. The observed N̄ = N + N₀ pulse photons can therefore be traced back to N coherent atoms and N₀ uncorrelated atoms driven via stimulated emission. The scaling laws identifying the SF process can thus be rewritten as functions of the observables N̄, τ_R and R_p, in forms involving the spontaneous emission rate A of the SF transition and a factor μ that depends on the atomic sample geometry [17]. We have collected hundreds of paired pulses for both Er:YLF crystals using an InAs detector. In this case the photon bunches are not fully collected, and the SF equations are verified only qualitatively, for both emission wavelengths, by adding a scaling factor.
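The explicit forms of those scaling laws are not reproduced above; in the simplest mean-field estimates (quoted here only as a guide, since the fits in the paper use the complete expressions of Ref. [17]) they read

```latex
\tau_R \simeq \frac{1}{\mu A N},
\qquad
R_p \simeq \frac{N}{4\,\tau_R} \simeq \frac{\mu A N^{2}}{4},
\qquad
\bar N = N + N_0,
```

which make explicit both the N² dependence of the peak rate used as the SF signature and the roles of A and μ.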
When the SF beam at 1.5 μm is properly coupled to the sensitive area of an InGaAs photodiode, we are able to infer the coherent atom number N by measuring N̄ and by estimating N₀ with the fitting procedure. In fact, μ and N₀ are the only free parameters, as A = 14.5 s⁻¹ is estimated independently by measuring both the $^4$I$_{13/2}$(0) lifetime and the related spectral intensity of the 8 radiative transitions towards the ground-state manifold. For a pencil-shaped, homogeneous atomic sample, the geometry factor is given by μ = 3λ²/(8π²ω₀²), where ω₀ is the incident laser beam waist and λ is the emission wavelength [1,31]. For the data acquired with ω₀ = 66 μm and shown in Fig. 4, the expected value μ = (2.0 ± 0.2)·10⁻⁵ agrees with the obtained best-fit parameter μ = (1.9 ± 0.3)·10⁻⁵. We also recorded SF pulses with ω₀ = 130 μm; in this case the calculated value exceeds the experimental one by a factor of 4, for both crystal concentrations. Such a discrepancy may be ascribed to the fact that the pencil-shaped geometry approximation is in this case not suited to describe the excited atomic distribution.

Our findings show that coherence spontaneously develops over more than 10$^{11}$ Er$^{3+}$ ions. The largest value, N = 8·10$^{12}$ ions, was observed in the Er:YLF 1% crystal with ω₀ = 130 μm. As previously reported in Er:YSO [17], this atom number is a fraction (∼10⁻³) of the inversion population, suggesting a self-selection process of the excited ions based on the similarity of their transition frequencies.

An interesting aspect of CSF is the correlation between the paired SF pulses, which allows for investigating the τ_d statistics even in the cw pumping regime. For an ensemble of N identical, uncorrelated atoms, the delay time is determined by the strength of the quantum fluctuations initiating the SF process. Once the coherence is seeded, the evolution of the physical system is semi-classical, and macroscopic fluctuations of τ_d are a direct consequence of quantum noise [32]. Rehler and Eberly [31] derived the average delay time τ_d = (μAN)⁻¹ ln(μN) = τ_R ln(μN) and its standard deviation σ(τ_d) = 1.3 τ_R, whereas other authors obtained somewhat different expressions [33]. It is worth noticing that both models neglect transverse and light-propagation effects, dephasing processes, and inhomogeneous line broadening. For these reasons, experimental findings generally differ from the expected values, especially in solid-state systems [7,16]. Clearly, a better agreement is found with optically trapped ions [34,35].

As previously demonstrated for a similar pumping scheme [35], the relaxation process within the $^4$I$_{13/2}$ manifold, which takes place before the onset of the 1534-nm SF, shifts the distribution peak position to τ̄_d = τ_d + t₀. In Fig. 5(a) we report τ̄_d versus τ_R of the second pulse for four different experimental conditions. Each data set is obtained from approximately one thousand recorded paired pulses. The points represent the average τ̄_d for the binned τ_R values, with 5 ns (orange, red and blue data) or 10 ns (green data) bin widths. The error bars indicate the corresponding standard deviation and hence the amplitude of the delay-time fluctuations. An example of the τ̄_d distribution is shown in Fig. 5. The τ̄_d values depend linearly on τ_R for all four data sets, as expected for SF. The same trend applies to the σ(τ̄_d) values, indicating larger fluctuations for smaller ensembles.
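The binned analysis described for Fig. 5(a) is straightforward to reproduce; the sketch below uses synthetic numbers (slope 3, 50 ns offset, noise proportional to τ_R) purely as stand-ins for the recorded pulse pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for ~1000 recorded pulse pairs: tau_R (ns) and
# the measured delay tau_d_bar = m * tau_R + t0 + scatter.
tau_R = rng.uniform(20.0, 120.0, 1000)
tau_d_bar = 3.0 * tau_R + 50.0 + rng.normal(0.0, 1.3 * tau_R)

# Average tau_d_bar in 5 ns wide tau_R bins, as done for Fig. 5(a).
bins = np.arange(20.0, 125.0, 5.0)
idx = np.digitize(tau_R, bins)
centers = 0.5 * (bins[:-1] + bins[1:])
means = np.array([tau_d_bar[idx == i + 1].mean() for i in range(len(centers))])
stds  = np.array([tau_d_bar[idx == i + 1].std()  for i in range(len(centers))])

# Linear fit: the slope m estimates the tau_d_bar / tau_R coefficient,
# the intercept estimates the non-radiative offset t0.
m, t0 = np.polyfit(centers, means, 1)
```

The bin-wise standard deviations play the role of the error bars, i.e., the amplitude of the delay-time fluctuations.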
However, the estimated τ̄_d-τ_R linear coefficients m, constrained between 2 and 5, are smaller than those calculated for N = 10$^{11}$-10$^{12}$ using the aforementioned simplified models. At resonance, the pump laser absorption is strong in the 1%-concentration sample, and the excited ions are mainly found within the first hundreds of micrometers, as is the case for the orange data of Fig. 5. These data differ significantly from the other sets in terms of linear coefficient, line intercept and fluctuation amplitude, as the distribution is less uniform along the whole length of the crystal (∼5 mm). This might be explained by considering radiating transverse modes, whose number equals the square of the Fresnel factor F = πω₀²/(λL), with L the length of the atomic ensemble [10]. Authors have in fact reported a reduction of both τ_d and σ(τ_d) for increasing F values [9]. The smaller value of the orange-data line intercept, which is an estimator for t₀, can also be ascribed to the stronger laser absorption, which induces a higher phonon density [36].

In the CSF of our physical system, a significant correlation is obtained between the areas of the paired pulses (see Fig. 5(c)), despite the influence of dephasing processes and of fluctuations of the $^4$I$_{13/2}$(0) steady-state population.

In this work we demonstrated that it is possible to accomplish CSF in the solid state. The use of cw pumping enables the detection of light pulses stemming from well-identified macrocoherent states, whose temporal dynamics is in good agreement with models that consider the number of radiating ions and their spatial distribution. Most importantly, cascade superfluorescence allows for investigating the delay time and its fluctuations also in physical systems where SF is obtained with cw pumping.

AUTHOR CONTRIBUTIONS
F.C. and C.B. designed and performed the experiments with the help of A.K. F.C. and C.B. wrote the manuscript. F.C. analyzed the data. A.D.L. and M.T. grew and characterized the samples. All authors discussed the results and revised the manuscript. The authors declare that they have no competing interests.
2,895.2
2020-09-13T00:00:00.000
[ "Physics" ]
Enhancing Named Entity Recognition in Twitter Messages Using Entity Linking In this paper, we describe our approach for Named Entity Recognition in Twitter, a shared task of the ACL 2015 Workshop on Noisy User-generated Text (Baldwin et al., 2015). Because of the noisy, short, and colloquial nature of Twitter, the performance of Named Entity Recognition (NER) degrades significantly. To address this problem, we propose a novel method to enhance the performance of the Twitter NER task by using Entity Linking, a method for detecting entity mentions in text and resolving them to corresponding entries in knowledge bases such as Wikipedia. Our method is based on supervised machine learning and uses high-quality knowledge obtained from several open knowledge bases. In comparison with the other systems proposed for this shared task, our method achieved the best performance.

Introduction

Named Entity Recognition (NER) refers to the task of identifying mentions of entities (e.g., persons, locations, organizations) within text. Because of the noisy, short, and colloquial nature of Twitter messages (or tweets), the performance of standard NER software suffers significantly. For example, Derczynski et al. (2015) recently demonstrated that the performance of various state-of-the-art NER software (e.g., Stanford NER and ANNIE) is typically lower than 50% F1 (the harmonic mean of precision and recall) for tweets.

Entity Linking (EL) refers to the task of detecting textual entity mentions and linking them to corresponding entries within knowledge bases (e.g., Wikipedia, DBpedia (Auer et al., 2007), Freebase (Bollacker et al., 2008)). Because of the recent emergence of large online knowledge bases (KB), EL has gained significant attention. It is evident that the performance of EL also degrades when analyzing tweets (Derczynski et al., 2015; Meij et al., 2012). However, Guo et al. (2013) recently revealed that the main failures of Twitter EL occur while detecting entity mentions in text, because existing EL methods usually address the mention detection task by using external NER software whose performance is unreliable when processing tweets. Consequently, several approaches (Guo et al., 2013; Yamada et al., 2015) have been proposed with enhanced abilities that address the task in an end-to-end manner, without completely depending on NER software.

The main objective of this study is to investigate the possibility of enhancing the performance of Twitter NER by using end-to-end EL. Although EL is typically performed after NER in most existing methods, our approach performs EL before NER and uses the EL results to enhance the NER performance. Resolving the entity mentions to KB entries enables us to use the high-quality knowledge in the KB for enhancing the NER performance, including the popularity of the entity, the classes of the entity, and the likelihood that the entity appears in the given context.

We begin by briefly introducing our end-to-end EL method, which specifically focuses on tweets. Our EL method is based on supervised machine learning and addresses the task in an end-to-end manner. It considers every possible n-gram as a candidate entity mention and detects the mention, with a corresponding link to a KB entry, if the mention exists in the KB. Furthermore, it can handle mentions that appear in irregular forms (e.g., misspellings, abbreviations, acronyms) using several approximate string matching algorithms.
The NER task is split into two separate subtasks: segmentation and classification. During segmentation, entity mentions are detected in tweets. Then, the entity mentions are classified into predefined entity types. Both tasks involve supervised machine learning with various features. For the segmentation task, we use the knowledge obtained from the KB for the corresponding entity mention detected by the EL, and the output of NER software, as the main machine-learning features. Furthermore, we include several common features used in traditional NER methods. For the classification task, the following three types of features are used as primary features: 1) the KB types of the entity detected by the EL, 2) the entity types detected by the NER software, and 3) the vector representation of the entity mention derived from word embeddings. The entity's KB types are extracted from the corresponding entries in DBpedia and Freebase, and the vector representation of the entity mention is derived using GloVe word embeddings (Pennington et al., 2014).

To train and evaluate our system, we used the dataset provided by the Named Entity Recognition in Twitter shared task. Our proposed method significantly outperformed the second-ranked system by a wide margin: 10.3% F1 at the segmentation task, and 5.0% F1 at the end-to-end (both segmentation and classification) task.

Preprocessing

The system first assigns part-of-speech tags to the resulting tokens using the ARK Twitter Part-of-Speech Tagger (Gimpel et al., 2011). It also tokenizes Twitter hashtags using our enhanced implementation of hashtag tokenization.

Entity Linking

We formalize our EL task as follows: given a tweet, our goal is to recognize a set of entity mentions (e.g., Obama, President Obama, Barack Obama) that appear in the tweet, and then to resolve the mentions into entities (e.g., Barack Obama) in Wikipedia, if they exist. Our EL system addresses the task in two steps: mention candidate generation, and mention detection and disambiguation.

Mention Candidate Generation

Our system first generates a set of candidate entity mentions together with the corresponding possible referent entities. The system takes all n-grams with n ≤ 10, looks up each n-gram in a dictionary, treats an n-gram as a candidate mention if it exists in the dictionary, and finally outputs pairs of mentions and their associated possible referent entities.

Mention-Entity Dictionary: The system uses a mention-entity dictionary that maps a mention surface (e.g., apple) to the possible referent entities (e.g., Apple Inc., Apple (food)). The possible mention surfaces of an entity are extracted from the corresponding Wikipedia page title, the page titles of Wikipedia pages that redirect to the page of the entity, and anchor texts in Wikipedia articles that point to the page of the entity. We constructed this dictionary using the January 2015 dump of Wikipedia.

Approximate Candidate Generation: One major problem with the mention candidate generation task is that many entity mentions in tweets cannot be detected because they appear in irregular forms (e.g., misspellings, abbreviations). In order to address this problem, we introduce the following three approximate string-matching methods to improve this step:

• Fuzzy match searches for mention candidates whose text surfaces lie within a certain distance of the surface of the n-gram, measured by edit distance.
• Approximate token search obtains mention candidates whose text surfaces have a significant ratio of words in common with the surface of the n-gram.

• Acronym search retrieves mention candidates with possible acronyms that include the surface of the n-gram.

When using the above methods, we observed that the number of mention candidates becomes very large. To deal with this, we use a simple filtering method based on soft tf-idf (Cohen et al., 2003): we retain only the mention candidates whose soft tf-idf similarity is greater than a threshold. We use 0.9 as the threshold, because this achieves the best performance in our EL experiments.

Mention Detection and Disambiguation

Given a pair of a mention and one of its possible referent entities, it needs to be determined whether the possible referent entity is indeed the correct one for its associated mention. In this system, we use a supervised machine-learning algorithm to assign a relevance score to each of the pairs and select the entity mention with the highest score. We use random forest as the machine-learning algorithm. The machine-learning features are mostly identical to those of the method proposed previously (Yamada et al., 2015). Basically, we use various features that are commonly observed in EL studies, and enhance the performance further by introducing two new features: 1) entity popularity knowledge extracted from Wikipedia page views, and 2) the contextual similarity between the entity and the tweet, measured using word embeddings.

Named Entity Recognition

We address the NER task by performing two subtasks: segmentation and classification.

Segmentation of Named Entities

In this step, entity mentions are detected in tweets. We formalize this task as follows: given an n-gram in a tweet, the goal is to assign a binary label that represents whether the n-gram should be detected as an entity mention. Note that, in order to enable a straightforward integration of EL with this task, we formalize it as a simple classification of n-grams instead of the commonly used IOB labeling approach (Ramshaw and Marcus, 1995).

The basic strategy we adopt here is to combine the output of NER software and the KB knowledge of the corresponding entity mention detected by the EL, using supervised machine learning. We again use random forest as the machine-learning algorithm. We use Stanford NER as the NER software, since it achieves relatively better performance on the Twitter NER task in a recent study (Derczynski et al., 2015). Here, we adopt two models of Stanford NER to enhance performance: 1) the standard three-class model included in the software, and 2) a model that does not use capitalization as a feature, in order to deal with the unreliability of capitalization in tweets.

The results of the NER and the KB knowledge of the corresponding entity mention detected by the EL are used as the primary machine-learning features. We also include features traditionally used in NER, such as part-of-speech tags and capitalization features. Furthermore, the ratio of capitalized words in the tweet is used as an indicator of the reliability of the capitalization. The machine-learning features for this step include:

• EL relevance score*: The relevance score of the entity mention assigned by the previous EL step.

• Link probability*: The probability of the entity mention appearing as an anchor text in Wikipedia.
• Capitalization probability*: The probability of the entity mention being capitalized in Wikipedia.

• Number of inbound links*: The number of inbound links of the corresponding entity in Wikipedia.

• Average page view*: The average page view count of the corresponding entity in Wikipedia.

• NER span match: Binary values that represent whether the n-gram is detected by the NER models.

• Part-of-speech tags: The part-of-speech tags of the previous, first, last, and next words of the n-gram.

• Context capitalization: Binary values that represent whether the previous, first, last, and next words of the n-gram are capitalized.

• Character length: The number of characters in the surface of the n-gram.

• Token length: The number of tokens in the n-gram.

Note that some features (marked with *) are based on an entity mention detected by EL; these features can therefore be missing if there is no corresponding entity mention detected by the EL. We also resolve overlaps of mentions by iteratively selecting the longest entity mention from the beginning of a tweet.

Classification of Named Entities

In this step, the detected entity mentions are classified into the predefined types (i.e., person, geo-loc, facility, product, company, movie, sportsteam, musicartist, tvshow, and other) using supervised machine learning. Here, a linear support vector machine is used as the machine-learning model. One main machine-learning feature of this step is the set of corresponding entity types retrieved from KBs. We obtain KB entity types from the corresponding entries in DBpedia and Freebase.

One problem in this step is that several entity mentions cannot be detected by EL, for various reasons (e.g., an entity that does not exist in the KB, or an error in performing EL). In addition, some minor entities might not have entity types in the KBs. In order to deal with this problem, we first include the entity types predicted by Stanford NER as features. However, because the target entity types of our task do not directly correspond to the ones given by Stanford NER (i.e., location, person, and organization), the effectiveness of these features is obviously limited. Therefore, we introduce another type of feature based on word embeddings. For this, we use GloVe word embeddings to calculate the average of the vectors of the words in the n-gram text. We also include in the model the relevance score assigned by the previous EL step, which indicates the reliability of the KB entity types. The number of words and the number of characters in the n-gram text are also included as features, to enhance the expressiveness of our model even further. The machine-learning features for this step include:

• KB entity types: The entity types in KBs; the KBs used are DBpedia and Freebase.

• NER detected type: The entity types detected by the NER model, as mentioned above.

• N-gram vector: The vector representation of the n-gram derived using the method explained above; each dimension of the vector is included as a separate feature.

• EL relevance score: The relevance score assigned by the previous EL step.

• Character length: The number of characters in the n-gram text.

• Token length: The number of tokens in the n-gram.

Experimental Setup

To train our proposed EL method, we used the #Microposts 2015 EL dataset, which contains 3,998 tweets and 3,993 entity annotations. The performance of our EL method on this dataset is reported in (Yamada et al., 2015).
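To make the two-step pipeline concrete, the following sketch shows its skeleton (n-gram candidate generation, random-forest segmentation, linear-SVM classification, and the overlap resolution described above). The feature-extractor bodies are hypothetical placeholders for the feature lists given above, not the paper's actual implementation:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC

def ngrams(tokens, max_n=10):
    """All (start, end) spans with length <= 10, as used for mention
    candidate generation."""
    return [(i, j) for i in range(len(tokens))
            for j in range(i + 1, min(i + max_n, len(tokens)) + 1)]

# Placeholder feature extractors standing in for the lists above
# (EL relevance score, link probability, NER span match, POS tags, ...).
def segmentation_features(tokens, span): ...
def classification_features(tokens, span): ...

# Segmentation: binary classification of n-grams with a random forest.
seg_model = RandomForestClassifier(n_estimators=100)
# seg_model.fit(X_seg, y_seg)   # X_seg: features for every n-gram

# Classification: detected mentions -> one of the ten types, linear SVM.
cls_model = LinearSVC()
# cls_model.fit(X_cls, y_cls)

def detect_mentions(tokens):
    """Predict mention spans, then resolve overlaps by iteratively
    keeping the longest mention from the beginning of the tweet."""
    spans = [s for s in ngrams(tokens)
             if seg_model.predict([segmentation_features(tokens, s)])[0] == 1]
    spans.sort(key=lambda s: (s[0], -(s[1] - s[0])))  # left-to-right, longest first
    chosen, end = [], -1
    for a, b in spans:
        if a >= end:
            chosen.append((a, b))
            end = b
    return chosen
```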
For this shared task, we trained and evaluated our proposed Twitter NER method using the dataset provided by the workshop.

Results

Table 1 shows the results of the segmentation task for the five top-ranking systems. Our proposed method significantly outperforms the second-ranked method, by 10.3% F1. The end-to-end results (both the segmentation and the classification tasks) of the five top-ranking systems are shown in Table 2. Here, our method significantly outperforms the second-ranked method by 5.0% F1.

(Table 3 caption: Performance of our system at both the segmentation and classification tasks, broken down by entity type.)
3,249.4
2015-07-01T00:00:00.000
[ "Computer Science" ]
Using Nonlinear Diffusion Model to Identify Music Signals

Introduction

Music signal restoration and enhancement are an important part of music signal processing and an early-stage problem in auditory music signal processing. Preprocessing is the necessary work in the early stage of music signal analysis and processing, for example filtering to reduce the noise of the music signal and to enhance its edges [1]. Music signal recognition technology plays an important role in improving the quality of music signals and benefits the postprocessing of music signals, such as music signal segmentation and tracking [2]. Because the time-dependent solution of the probability density function of a noise interference signal is difficult to obtain, most current research is based on statistical models, and only methods for the steady-state solution have been given; the main reason is that the probability density function satisfies a partial differential equation that is difficult to solve [3]. Traditional linear filtering methods such as median filtering and Wiener filtering blur the edges of the music signal while removing noise. When a partial differential equation model is used to restore the music signal, it can remove the noise while keeping the edges of the music signal. On this basis, this article uses a shock (impact) filter model as an improvement, which sharpens the edges of the processed music signal while removing noise, making the signal better suited for recognition [4].

Among statistical filters for music signal restoration, for example, the median and mean filters replace the central value with, respectively, the median and the arithmetic mean of the surrounding values in a window. The mean filter is commonly used to filter out Gaussian noise, and the median filter is commonly used to filter out impulse noise [5]. Filters based on partial differential equations model the music signal as a broadly continuous two-dimensional function, so that operations such as differentiation and integration of the music signal can be performed; this standardizes music signal processing problems, simplifies their description, and allows the nonlinear filters of music signal processing to be re-understood and analyzed [6]. Using traditional linear filters to restore and enhance a music signal contaminated by noise blurs or even destroys the discontinuous information at the signal boundaries, whereas an anisotropic diffusion filter based on partial differential equations can denoise while preserving the music signal edges [7]. At the same time, diffusion methods based on partial differential equations have some shortcomings, most importantly the computational cost of the diffusion equation, which is related to the evaluation of continuous integrals. These computational inconveniences make anisotropic diffusion methods difficult to apply to most real-time video [8]. The main research work of this paper centers on the application of partial differential equations to music signal recovery and music signal recognition. It analyzes traditional music signal analysis methods and studies the application of partial differential equation methods and anisotropic diffusion equations to music signal restoration, as well as their improved algorithms for music signal recognition.
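As an illustration of the edge-preserving nonlinear diffusion idea discussed above, a minimal 1-D Perona-Malik-type scheme for a sampled signal could look as follows (an illustrative sketch, not the specific model developed in this paper):

```python
import numpy as np

def perona_malik_1d(u, n_iter=50, dt=0.2, K=0.1):
    """Nonlinear (Perona-Malik type) diffusion of a 1-D signal u.
    The conductivity g decreases with the local gradient, so smooth
    regions are denoised strongly while sharp transitions (edges)
    diffuse slowly and are preserved."""
    u = u.astype(float).copy()
    for _ in range(n_iter):
        grad = np.diff(u)                   # forward differences
        g = 1.0 / (1.0 + (grad / K) ** 2)   # edge-stopping function
        flux = g * grad
        u[1:-1] += dt * np.diff(flux)       # discrete div(g * grad u)
    return u

# Example: a noisy step; the step survives, the noise does not.
x = np.linspace(0.0, 1.0, 400)
noisy = (x > 0.5).astype(float) + 0.05 * np.random.randn(400)
denoised = perona_malik_1d(noisy)
```

The explicit time step dt must stay small (here well below the 1-D stability bound of 0.5) or the iteration diverges.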
Partial differential equations constitute a relatively sophisticated approach to music signal analysis and processing, one with important research and application value that calls for in-depth study. Using partial differential equations for music signal processing means processing the music signal as a whole; for example, the edge shapes and positions can be kept unchanged while denoising. On the basis of summarizing the mathematical models of partial differential equation methods in music signal processing, and combining partial differential equations with wavelet methods, this paper proposes a new denoising algorithm applicable to second-order and higher-order partial differential equations. We focus on the simple and practical wavelet threshold denoising method and, to address the shortcomings of its thresholding, propose a denoising method based on the wavelet transform and Wiener filtering, and introduce the modeling and denoising methods of partial differential equation image processing. Experiments show their effectiveness; the organic combination of the two overcomes their respective shortcomings and gives better results. The variational model and the anisotropic diffusion model have a complete theoretical framework, a variety of concrete models, and mature numerical solution methods. Introducing them into the fields of music signal processing and computer hearing undoubtedly provides a powerful tool for solving problems in these fields. Therefore, the research topic of this article has a solid theoretical background and strong practical value.
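The wavelet threshold denoising just mentioned can be sketched with PyWavelets (soft thresholding with the common universal threshold; this is only the standard baseline step, as the paper's own method additionally combines it with Wiener filtering):

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising with the universal threshold
    sigma * sqrt(2 ln N); sigma is estimated from the finest-scale
    detail coefficients via the median absolute deviation."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

# Example: a noisy tone.
t = np.linspace(0.0, 1.0, 2048)
noisy = np.sin(2 * np.pi * 440 * t) + 0.3 * np.random.randn(t.size)
clean = wavelet_denoise(noisy)
```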
Related Work

The idea of using partial differential equations for music signal processing was really established starting with Zhang et al. [9]. They introduced the concept of scale space, which expresses a family of music signals on multiple scales at the same time; this contribution to a large extent constitutes the basis of partial differential equations for music signal processing. The multiple scales of the music signal are obtained by Gaussian smoothing, and the scale space of the music signal can also be obtained by evolving the music signal with the classical heat conduction equation. Chen et al. [10] showed that the heat conduction equation is not the only equation that can form a scale space and put forward criteria for forming a scale space. The anisotropic diffusion model proposed by Farrokhmanesh and Hamzeh [11] is the most influential in this field: they proposed replacing Gaussian diffusion with a selective diffusion that can preserve edges, which raised many theoretical and practical problems. Within the same framework, Todisco et al. [12] proposed the total variation (TV) reduction method for partial differential equations, building on the shock filter, which highlights the importance of partial differential equations in music signal processing. In the fields of music signal processing and computer hearing, there are also partial differential equations based on the curvature motion of curves and surfaces. Shahabi and Moghimi [13] developed a level set numerical algorithm; their idea is to use the level set of a higher-dimensional hypersurface to represent deformed curves, surfaces, or music signals. This technique not only makes the numerical results more accurate but also solves topological problems that were difficult to deal with before.

Partial differential equations can also be used for music signal segmentation. The model proposed by Ando [14] integrates a variety of music signal segmentation algorithms, raising many new problems in theory and practice. Some scholars have also greatly influenced music signal segmentation algorithms based on active boundaries; later, many scholars extended this work with geometric partial differential equations [15-17]. Scholars have used stochastic differential equations to study the noise spectral density of switched-capacitor circuits and of linear and nonlinear time-varying circuits, and to study phase noise models of oscillators [18-20]. Some researchers have used stochastic differential equations to study noise models in mixers, noise problems in radio-frequency circuits, and noise problems in nonlinear time-varying circuits. This type of research is so extensive because the Brownian motion model corresponds naturally to noise, so that the related stochastic differential equations are easy to establish.

Partial differential equations can also be used for music signal inpainting, which is synonymous with music signal interpolation. The term originally spread from artists who repaired damaged artworks by hand in museums. At present, inpainting technology has many applications in music signal processing, auditory analysis, and digital technology, such as music signal restoration, music signal amplification, music signal superresolution analysis, and error concealment in wireless music signal transmission [21]. It can be seen that the basic equations used in music signal processing include shock filters, partial differential equations, anisotropic diffusion, and active boundaries, which can realize the recovery, enhancement, and segmentation of music signals. By detecting the strength and direction of the music signal features during the smoothing process, the smoothing result combines noise elimination with feature preservation well, yielding a better music signal smoothing technique. Compared with the thermal diffusion model, the anisotropic diffusion model is in fact a nonlinear parabolic partial differential equation whose diffusion speed is determined by the gradient of the music signal, so that it can take both noise elimination and feature preservation into account. The algorithms based on shock filters, partial differential equations, and anisotropic diffusion studied in this paper are important components of many PDE-based music signal processing methods, and they have positive significance for the improvement of these models and of their numerical calculation.

Construction of a Model for Identifying Music Signals Based on Partial Differential Equations

The Solution Set Space of Partial Differential Equations. According to the actual situation of music signal processing, if the problem is transformed into a system of linear equations, the spectral values of the coefficient matrix are generally small, the linear system is often weakly conditioned, and the solution obtained is often ill-posed. The condition in the theorem is a sufficient condition for solvability of the definite-solution problem, but not necessarily a necessary one: regardless of whether A(t) satisfies the condition, it suffices that the corresponding series converges.
For this reason, a variety of regularization methods for solving ill-posed problems have been proposed by the academic community. These methods focus on maintaining stability while preserving the solution information, and use statistical methods, iterative methods, and variational methods to obtain regularized solutions of inverse problems. Figure 1 shows the spatial distribution of the solution set of partial differential equations.

The linear isotropic diffusion filter is the simplest of the music signal smoothing algorithms based on partial differential equations. It has a clear physical meaning: it can be seen as the process by which internal concentration differences are balanced when the substance is neither created nor destroyed. Its mathematical formula can be expressed as the heat equation, ∂u/∂t = c Δu, with the noisy signal as initial condition u(x, 0) = u₀(x). In music signal processing, the diffusion rate of the linear isotropic diffusion filter is constant; the diffusion rate of the nonlinear isotropic diffusion filter adapts to the local structure of the music signal; and the diffusion rate of the nonlinear anisotropic diffusion filter likewise corresponds to the local structure of the music signal, except that in the anisotropic diffusion equation the conductivity at each point is a matrix. Considering that the degradation of the music signal is mainly caused by noise, and regarding the noise as small targets whose iso-density contour lines have large curvature, the iso-density contour lines of the noisy parts with large curvature shrink to a point or even disappear over time, while the parts of the music signal with small curvature evolve into the contours of the music signal and remain.

Let y be a music signal space, defined as a set of piecewise smooth functionals whose domain is f. In this set, only the edges of the steps are discontinuous. The difficulty in extending the energy functional defined on this music signal space is that the gradient amplitude is discontinuous at the edges of the steps. Because the diffusion coefficient is nonzero, it represents a forward degenerate diffusion orthogonal to the gradient direction. Since the direction of an edge is perpendicular to the gradient, well-defined smoothing operators that protect the edge direction have received extensive attention. Usually, by setting each element of the thermal conductivity matrix, the diffusion effect is made to depend on direction and strength: along the gradient direction the diffusion should be set weaker, especially where the gradient value is large, while along other directions the diffusion should be set stronger, so as to achieve smoothing and denoising. The design of the thermal conductivity matrix depends on the gradient of the music signal at the current point; therefore, the diffusion tensor changes with position in space.
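In compact form, the model classes referred to in this section are commonly written as follows (standard formulations, quoted as a guide rather than taken from this paper):

```latex
\text{linear isotropic:}\quad u_t = c\,\Delta u,
\qquad
\text{nonlinear diffusion (Perona--Malik):}\quad
u_t = \operatorname{div}\!\big(g(|\nabla u|)\,\nabla u\big),
\quad g(s)=\frac{1}{1+(s/K)^2},
\qquad
\text{shock filter:}\quad u_t = -\operatorname{sign}(\Delta u)\,|\nabla u| ,
```

with g the edge-stopping conductivity and K a contrast parameter; in the fully anisotropic case the scalar g is replaced by a diffusion tensor D(∇u), which is exactly the "thermal conductivity matrix" discussed above.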
Music Signal Recognition Algorithm. Research on evaluating the quality of music signal processing is one of the basic research topics in the discipline of music signal information. For music signal processing and music signal communication systems, the main carrier of information is the music signal, and an important indicator for such a system is the quality of the music signal. Music signal recognition aims to improve the subjective auditory quality of music signals. The principle of music signal restoration is to compensate for the degradation of the music signal so that the restored music signal is as close as possible to the quality of the original music signal. All of these require a reasonable method for evaluating the quality of music signals. The quality of music signal processing has two aspects: one is the fidelity of the music signal, that is, the degree of deviation between the evaluated music signal and the original standard music signal; the other is the intelligibility of the music signal, which refers to the ability of the music signal to provide information to people or machines. The level set method regards the music signal as a set of iso-density contour lines, considers the degradation of the music signal to be mainly caused by noise, and regards the noise as mostly small targets whose iso-density contours have large curvature. The contour lines of the high-curvature noise parts therefore shrink to a point or disappear over time, while the low-curvature music signal parts develop into the contours of the music signal and remain. Figure 2 shows a schematic diagram of the music signal recognition algorithm. Ideally, one would find a quantitative description of the fidelity and intelligibility of the music signal to serve as a basis for evaluating music signals and designing music signal systems; in practice this is difficult. In the real world, music signals captured by acquisition equipment are often noisy. For music signals contaminated by noise, it is necessary to denoise them before applying traditional music signal recognition algorithms. From the practical situation of music signal processing, when the problem is transformed into a linear system of equations, the spectral values of the coefficient matrix are generally small, the system is often weakly conditioned, and the solution is generally ill-posed. The difference between PDE-based and traditional music signal processing is that traditional methods, such as filters, only need to operate on the music signal once to obtain the result, whereas partial differential equations must be iterated repeatedly on the music signal until a stable solution is obtained after several iterations; in the iteration, the music signals of each layer constitute a scale space. To this end, various regularization methods for solving ill-posed problems have been proposed. These methods focus on maintaining stability while preserving solution information and use statistical methods, iterative methods, and variational methods to obtain regularized solutions of inverse problems. Some models that use partial differential equations can enhance the edges of the signal while removing its noise, with fast processing speed and high processing accuracy. Model Weight Factor Replacement. We use a function to represent the music signal, where the music signal domain Q is often taken as a rectangle (two-dimensional), and the function value u(x) represents the grayscale (or brightness) of the music signal at the point x. In the computer, the music signal function u(x) being processed is a discretized music signal: the points (samples) are discretely sampled in the music signal domain Q.
It can be seen that the discovery probability information entropy H first decreases as the noise variance D increases, and after reaching a minimum it increases with further increase of D. The gray value at each pixel is also discrete, often divided into 128 or 256 levels. For the convenience of theoretical discussion, the music signal function u(x) may be assumed to be continuous; that is, x takes values continuously in the music signal domain, and the gray value at each point is also continuous. In order to process music signals effectively, we first need to know how to understand and represent them from a mathematical point of view; the music signal model and its representation largely determine the music signal processing model. The forward difference format is used in the time dimension. In order to preserve the peaks of the music signal, forward (or backward) differences can be used to compute I_x and I_y; the central difference cannot be used, because it would smooth the signal peaks. Finally, four diagonal gradient estimates are added to make the algorithm more sensitive to edge information. Figure 3 shows the two-dimensional scatter distribution of the sensitivity of the extreme points of the music signal. Because the number of iterations is small, the overall increase in computation is modest. We can also see that the number of derived extreme points of the music signal in scale space does not exceed the number obtained in the original music signal, which satisfies the causality requirement of continuous compression and simplification in the scale-space representation as the scale parameter increases. In this sense, the Gaussian convolution kernel is the unique kernel for the continuous scale space of the music signal; that is, the repaired music signal corresponding to the inner scale of the coarse resolution level can be obtained by a convolution operation on the music signal corresponding to the inner scale of the fine resolution level. Table 1 shows the noise smoothing parameter distribution of music signals. The improved fourth-order partial differential equation has an obvious noise smoothing ability. However, relatively small edges in the music signal are smoothed out together with the impulse noise, especially details where the edge length is small but the gray-level change across the edge is large. In large smooth areas, noise points spread into blocks with similar gray levels, so the smoothing ability needs to be strengthened. Then, a group shift Fourier transform is used to transform this partial differential equation into a system of homogeneous linear differential equations. Finally, the time-dependent solution of the probability density function of the noise frequency-modulation interference signal after the radar intermediate-frequency filter is obtained. On the one hand, the method performs an anisotropic antidiffusion on the music signal contaminated by noise; on the other hand, it performs adaptive statistical filtering on the music signal between antidiffusion steps, which removes well the obvious impulse points left behind after antidiffusion.
We improve the accuracy of the next antidiffusion gradient estimation, greatly reduce the number of iterations of the overall algorithm, and reduce the model's misjudgment of noise points and edge points, so that the new algorithm outputs music signals whose regions are properly smoothed while the edges remain comparatively sharp and clear. Application and Analysis of the Music Signal Recognition Model Based on the Partial Differential Equation Method 4.1. Music Signal Data Extraction. The input music signals used in the experiment carry Gaussian noise with a = 0 and b = 0.1 and 10% salt-and-pepper noise, and their sizes are 404 × 271 and 199 × 199, respectively. We analyze the output results of the various algorithms to compare the performance of the P-M model, the forward/backward anisotropic diffusion model, and the proposed partial differential equation algorithm. The objective indicators used for the analysis are the peak signal-to-noise ratio (PSNR) and the mean square error (MSE); a sketch of both metrics is given below. The music signals shown in the following are the outputs with the best PSNR for each algorithm, and they essentially match the outputs with the minimum MSE. When the weights of the pixels are the same in every direction, the scheme reduces to the 4-point average smoothing method; thus partial differential equations can express this isotropic Gaussian smoothing. In order to preserve the characteristics of the music signal in all directions, the partial differential equation must be improved into an anisotropic form. Although the stable range of the parameter is known, in the experiment an exhaustive search is used to select it. Figure 4 shows the matchstick graph of the music signal peak signal-to-noise ratio. The experimental results show that as long as the parameter is selected within the stable range, the best PSNR output does not vary greatly, and the best value is about 0.15. The numerical results show that, compared with the soft-threshold method, the method proposed in this paper suppresses Gaussian noise in music signals better; we obtain a higher signal-to-noise ratio, effectively reduce noise interference, and better retain the edge information, which improves the quality of the music signal. A small parameter value makes the algorithm converge more slowly, but it always converges within 10 iterations. It can be seen that layering the signal (music signal) with wavelets and applying a simple hard threshold causes obvious edge artifacts, which distort the results. It is worth noting that this method does not increase the amount of computation: because the decomposition and reconstruction of wavelets can be completed in a short time with a fast algorithm, and the data set in the detail signal is sparse, its scale diffusion proceeds quickly, so the amount of computation in the whole process does not increase. The partial differential equation is better than the wavelet threshold in this respect, and the processed music signal has a better auditory effect. However, the partial differential equation processes the signal as a whole; we can expect that layering the signal (music signal) first and then diffusing it will achieve better results.
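For reference, the two objective indicators can be computed as follows. This is a minimal sketch assuming signals stored as NumPy arrays with a known peak value; the function names are illustrative.

```python
import numpy as np

def mse(reference, test):
    """Mean square error between a reference and a processed signal."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    return np.mean((reference - test) ** 2)

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB.

    `peak` is the maximum representable value: 255 for 8-bit data,
    1.0 for signals normalized to [0, 1].
    """
    err = mse(reference, test)
    return np.inf if err == 0 else 10.0 * np.log10(peak ** 2 / err)
```

A higher PSNR (equivalently, a lower MSE) corresponds to a restored signal closer to the reference, which is how the model comparisons below are ranked.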
Model Simulation Realization. For two-dimensional grayscale music signals, median filtering usually takes a rectangular window with an odd side length and slides the window point by point along the rows and columns of the music signal; at any pixel, the filtered output value equals the median of the gray values of all pixels in the window when the window center moves to that point. It is especially important that this method can strengthen edges while removing noise, and the scale space in which the edges live is more stable than the linear scale space. Obviously, the most important parameter for blurring boundaries in scale space is K. If K is too large, the music signal becomes very blurred, similar to linear diffusion; if K is small, it protects the edges well. Therefore, we assume K = 0, which yields the stabilized inverse diffusion equation (SIDE). If each frame of a sequence or video music signal is treated as a separate static music signal, the correlation between frames is wasted; this correlation can instead be used, constructing information from the several frames before and after the music signal to be processed in a continuous video sequence. Building on the anisotropic diffusion model for sequence music signals to improve the quality of the restored signal, this section only considers the situation of two frames. The proposed partial differential equation model outputs the best music signal in only 4 iterations: a certain degree of smoothness is obtained in each region, while the edges remain clear and sharp and essentially no noise is left. Figure 5 shows the histogram of the music signal output recognition rate. The output PSNR is greatly improved, to 21.51 dB, and the MSE is reduced by at least 40%, to 0.356. It should be noted that, comparing the small gray area above each music signal, the other models almost smooth it out in their outputs, while the output of the partial differential equation model preserves it well. Moreover, the edges are more obvious and accurate, so the reproduction of the output music signal of the partial differential equation model is more faithful. For inputs of about this size, the partial differential equation model generally needs about 5 iterations to output the best-PSNR music signal, and the best PSNR can be improved by about 3 dB compared with the other algorithms mentioned above. Figure 6 shows a line graph of the music signal denoising error rate comparison. If the music signal contains a lot of noise, it can first be processed by thresholding, then by an appropriate scale diffusion, and finally superimposed to generate the coarse-scale music signal of the original music signal (when the noise is small, the thresholding step may be skipped). This preserves the boundaries while removing noise. It is worth noting that this method does not increase the amount of computation: the decomposition and reconstruction of the wavelet can be completed in a short time by a fast algorithm, and since the data set in the detail music signal is sparse, its scale diffusion proceeds quickly, so the amount of computation in the entire process does not increase. The wavelet transform uses wavelet coefficients to describe the changes of signals across different scale spaces. If the correct wavelet basis is selected, the signal is described by a few important wavelet coefficients, while Gaussian noise contaminates part of the wavelet coefficients.
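The threshold-then-diffuse idea just described can be sketched with a standard wavelet library. The following assumes the PyWavelets package; the db4 wavelet, the three-level decomposition, and the universal threshold estimated from the finest detail band are illustrative choices, not parameters prescribed by the paper.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_hard_threshold(signal, wavelet="db4", level=3):
    """Initial denoising by hard-thresholding the detail coefficients.

    The noise level sigma is estimated robustly from the finest-scale
    details, the universal threshold sigma*sqrt(2*ln N) is applied to
    every detail band, and the approximation band is kept untouched.
    """
    signal = np.asarray(signal, dtype=float)
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(signal.size))
    denoised = [coeffs[0]] + [
        pywt.threshold(c, thresh, mode="hard") for c in coeffs[1:]
    ]
    return pywt.waverec(denoised, wavelet)[: signal.size]
```

The hard threshold keeps or kills each coefficient outright, which is what produces the edge artifacts noted above; passing the result through a few diffusion iterations is the combination the paper argues for.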
Experiments show that this method can not only improve the efficiency of the algorithm and the signal-to-noise ratio of the music signal but also achieve a better auditory effect. It can be seen that the coefficient in each direction is related to the gradient at the point: when the gradient at the point is large (at an edge), the corresponding weight is small, so the degree of smoothing in that direction is small and the directional feature is retained. Example Application and Analysis. First, we perform bicubic interpolation on the low-resolution signal; the interpolation brings in noise, which is filtered by the equations to enhance the signal while smoothing the noise. The parameters are selected as c = 1, t = 0.2, and s = 0.8; the iteration step size is 0.02, with 80 iterations. The experiment added Gaussian white noise with a noise variance of 0.02 to the standard image and used the fourth-order partial differential equation method and the method of this paper to denoise it. The wavelet method decomposes the noisy image into three layers, and N is the total number of pixels of the image. It can be seen from the experimental results that the auditory effect after filtering is clearer. At the same time, calculation shows that the signal-to-noise ratio of the filtered music signal is also improved. The result shown is obtained by first filtering the music signal with the partial differential equation method and then segmenting it with the watershed method. The original music signal is filtered with a time step of t = 0.02 seconds, and the iteration is performed 100 times. It can be seen that when a linear partial differential equation is used to amplify the music signal, no matter how many times it is magnified, the spots and the deviation between bright and dark areas are not obvious, and the effect is better in areas where the gray value of the image changes greatly. However, the edges of objects in the music signal are not clear enough, and the overall image looks excessively smooth. The signal-to-noise ratio and the subjective perception together determine the quality of a music signal. During the experiment, we compared the signal-to-noise ratio of the music signal before and after processing and extracted the edges of the music signal with the same method to judge the influence of denoising on the edge information. Figure 7 shows a step chart of the filter resolution of the music signal. It can be seen that the P-M model needs as many as 147 iterations to reach its best-PSNR output music signal; the output is relatively blurry, the best PSNR is only 17.141 dB, and the MSE is 0.941, which is high. For the P-M model with an input of about this size, roughly 110 iterations are generally required, and the best output PSNR does not exceed 18 dB. The number of iterations required by the forward/backward anisotropic diffusion model is greatly reduced, to 49 (for inputs of this size it is generally fewer than 60), and the edge clarity of the output music signal is improved, but a lot of noise is left. Residual gray-level deviations remain in each region, so the final PSNR is still only 18.119 dB.
The overall output of the P-M model is too blurry, and the edges are not clear enough: the best PSNR is only 17.47 dB, and the MSE is high, at 0.9176. The output of the forward/backward diffusion model contains more noise, but because of the backward diffusion component the music signal is not too blurred; its best output PSNR does not exceed 20 dB. The overall partial differential equation output is improved, but there are small errors in the music signal, such as dark spots in the larger white area in the main part of the signal, and the best output PSNR still does not exceed 20 dB. The output of the proposed partial differential equation model suppresses noise very well, and the edges of many small areas are kept clear without leaving obvious noise. This algorithm effectively improves the denoised result and raises the peak signal-to-noise ratio. Theoretically, the noise is mainly concentrated in the high-frequency part. In order to combine the advantages of the two approaches and eliminate their shortcomings, this paper proposes a denoising method that combines partial differential equations with wavelet thresholding. The best output PSNR of the partial differential equation model is 22.74 dB, which is significantly improved compared with the other algorithms, while the MSE is reduced to 0.306. Conclusion Based on the theory of the partial differential equation method, this paper proposes a model for music signal recognition, uses the finite difference method to discretize the continuous model, and discusses the influence of parameter changes on the noise reduction effect in the new model. The original music signal recognition algorithm based on forward and backward diffusion has been improved so that the processed signal has better denoising and enhancement effects. Then, according to the characteristics of the intrinsic mode function signals and the residual function signal obtained by EMD decomposition of the music signal, different stochastic differential filtering strategies are adopted to filter each layer, and an underwater music signal is simulated. Finally, the inverse group shift Fourier transform is used to obtain the time-dependent solution of the probability density function of the noise FM interference signal after it passes the pulse intermediate-frequency filter. The results show that the method proposed in this paper has a good suppression effect on nonlinear noise interference. The experimental results show that this model has the enhanced edge characteristics of the antidiffusion algorithm; the addition of numerical statistical filtering not only eliminates the instability of the antidiffusion but also removes the impulse noise in the music signal well, giving good edge enhancement and noise elimination ability. Based on the principle of threshold denoising, this paper also proposes a new music signal denoising algorithm that performs initial denoising on the original music signal. On this basis, the behavior of the partial differential equation in the denoising process of music signals was studied, a new threshold denoising method was obtained, and a higher signal-to-noise ratio was achieved. Numerical test results show that denoising the music signal with this method achieves the goal of protecting edges while removing noise.
The use of partial differential equations for music signal processing not only puts forward new research topics for the theory of partial differential equations but also plays an important role in promoting the development of music signal processing, signal analysis, and related fields, so it has important theoretical value and broad application prospects. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The author declares that there are no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
7,712.6
2021-10-13T00:00:00.000
[ "Computer Science", "Mathematics" ]
Inert structural transition in 4H and 6H SiC at high pressure and temperature: a Raman spectroscopy study We conducted Raman spectroscopy measurements of 4H-SiC and 6H-SiC up to 69 GPa and 1023 K to assess the stability and bonding of SiC at high pressure and temperature. Both optic and acoustic modes were observed over wide pressure and temperature ranges. The temperature shifts of the Raman frequencies were fitted by the equation with the Bose-Einstein distribution function, and we found that the shifts were almost insensitive to the pressure. The mode Grüneisen coefficients depend only weakly on pressure and temperature, suggesting a sluggish transition of the crystal structure, unlike previous experiments showing the transition or decomposition of SiC at high pressure and temperature conditions. Inert transitions are confirmed by Raman measurements and annealing experiments using multiple high-pressure apparatuses. The crystallinity may be a hidden critical parameter in experiments that aim to determine the stable polytypes of SiC under high pressure and temperature. Introduction Silicon carbide (SiC) has attracted great attention from industry as a semiconductor for power electronics because of its wide band gap and high blocking voltage [1]. Strong chemical bonding between silicon and carbon gives SiC high hardness, high thermal conductivity, and chemical inertness [2]. Crystalline SiC is known to have many polytypes built from combinations of SiC4 tetrahedra. They are usually described using the Ramsdell notation [3]. SiC shows 3C, 2H, 4H, 6H, 15R, and other polytypes at ambient pressure. The polytypes of SiC consist of identical silicon (or carbon) layers but have different arrangements of these layers. The arrangements of the layers are formed by twists of the Si-C bonding, which rotate the SiC4 tetrahedra. The crystal structures of 4H-SiC and 6H-SiC are illustrated in figure S1, drawn using the VESTA software [4]. The polytypes of SiC show a wide range of band gaps, from 2.39 eV for 3C-SiC to 3.33 eV for 2H-SiC [5]. The polytypism in SiC is thought to result from the similar free energies of the crystal structures [6].
Nowadays, high-pressure treatment opens a new way to explore novel crystal structures of compounds [7]. One may expect that an as-yet-unseen structure of SiC could be stable under high-pressure conditions. Therefore, the stability of the SiC polytypes has been studied at high-pressure conditions by means of in situ x-ray diffraction (XRD) measurements, Raman spectroscopy, and theoretical calculations (see [8] and references therein). Nevertheless, the phase diagram of SiC at high-pressure and -temperature conditions is not well constrained so far, likely because of the kinetics of the transition induced by the strong covalent Si-C bonding [8]. Earlier XRD measurements showed that 3C-SiC undergoes a phase transition to the rocksalt-type B1 structure (space group Fm-3m) at 100 GPa with a large decrease of the unit cell volume, in contrast to 6H being stable up to 95 GPa [9]. Later, laser-heated diamond anvil cell experiments with in situ XRD measurements showed that 3C undergoes a phase transition to the B1 structure at around 75 GPa and 2000 K [10]. On the contrary, a similar experimental study using high-pressure XRD measurements with laser-heated diamond anvil cells showed that 3C decomposes into Si and C at around 60 GPa and temperatures above 2000 K [11]. A previous study reported that the 6H-SiC polytype partly remains untransformed when the high-pressure rocksalt-type polymorph appears during laser heating experiments above 2000 K [10]. Another previous study did not observe any phase transition in 3C and 6H at 1-80 GPa and 1600-2200 K in laser-heated diamond anvil cell experiments with a heating duration of ∼14 min [12]. This discrepancy may result from the kinetics of the transition and/or unexpected chemical segregation caused by the large temperature gradient in the laser-heated diamond anvil cell [13]. Dynamic compression studies have been conducted to determine the crystal structure of SiC under ultra-high pressure [14][15][16]. Previous studies of the crystal structure of SiC at high pressure are summarized in table S1. Together with the phase relationships, the equations of state of SiC have also been studied for 3C, 6H, and 4H by high-pressure experiments and theoretical calculations [8-10, 12, 17, 18]. The structure of SiC is fundamentally important in understanding the nature of the group IV semiconductors, which have broad applications [19]. Raman spectroscopy is an effective method for probing the crystal structure and vibrational modes of solids. Thus, the pressure and temperature evolution of the Raman frequencies helps in understanding the stability of the SiC polytypes. However, it has never been reported for SiC at combined high-pressure and high-temperature conditions. Previous studies reported the Raman frequencies of 3C-SiC [18], 4H-SiC [17], and 6H-SiC [20] at ambient temperature up to 80, 55, and 95 GPa, respectively. A previous study suggested, based on Raman spectroscopy measurements, that the Si-C bonding increases in ionicity with increasing pressure up to around 40 GPa, and that the bonding then changes from ionic back toward covalent at higher pressure [20]. That study further predicted that 6H is unstable above around 100 GPa based on the decrease of the mode Grüneisen coefficient at high pressure [20].
Recently, SiC has also attracted interest in geophysics and astrophysics as a constituent material of the Earth and planets [21,22]. Enigmatic SiC found in nature is suggested to be created by hydrogen-rich fluids arising from subducted materials in the Earth's deep interior [21]. Although the Earth's mantle is mainly composed of silicates and oxides, recent studies showed that some extrasolar planets might have a C/O ratio above 1 [22]. While such carbon-rich planets are expected to be rare, SiC may be the main constituent material of carbon-rich planets. The structure and dynamics of such exotic carbon-rich planets are largely controlled by the stability and the physical properties of SiC at high pressure [12,23,24]. However, the stability and bonding character of the polytypes of SiC under the high pressures and temperatures corresponding to planetary interiors are poorly understood. Here, we have measured vibrational spectra of 4H-SiC and 6H-SiC at high pressure and temperature conditions by combining externally heated diamond anvil cells and laser Raman spectroscopy. Both optic and acoustic modes were observed over wide pressure and temperature ranges. Our new comprehensive Raman frequency data can serve as a reference for further analysis of SiC samples in nature and in the laboratory. We did not observe any transition in the crystal structure within the experimental conditions. The Raman data and the annealing experiments showed the inertness of the transition between polytypes at high pressure and temperature, regardless of the heating method. We conclude that the crystallinity may be a hidden critical parameter in experiments that aim to determine the stable polytypes of SiC. Experimental procedures We conducted high-pressure and high-temperature experiments using two types of high-pressure apparatus: a diamond anvil cell and a multi-anvil apparatus.
Diamond anvil cell experiments We measured Raman spectra of SiC at high pressure and temperature conditions using a newly designed externally heated diamond anvil cell. Synthetic 4H-SiC and 6H-SiC were used as starting materials. The 4H-SiC was a semi-insulating-type single crystal grown on-axis (0001) with no dopant (electrical resistivity > 0.1 MΩ·cm) and a micro-pipe density of < 15 cm−2. The 6H-SiC was a single crystal grown on-axis (0001) with a nitrogen dopant (∼5 × 10^18 cm−3) and a micro-pipe density of < 30 cm−2. The samples were loaded into a diamond anvil cell for the high-pressure experiments. The culets of the diamond anvils were 300 μm in diameter. The samples were put into a hole made in the rhenium gasket without a pressure medium to maximize the Raman signal at high temperature and high pressure (figure S2). The thickness of the gasket was initially ∼50 μm. After compression to the desired pressure, the samples were heated in the externally heated diamond anvil cell using a cylindrical graphite heater. The high-temperature field is stable and homogeneous with the newly designed cylindrical heater. The diamond anvil cell was placed in a water-cooled vacuum chamber with optical glass suitable for the spectroscopic measurements, to avoid reaction of the heater with air. The water-cooling system minimized the pressure drift during heating, as shown in a previous study [25]. A stable DC power source (ZX-S-400LA, Takasago) was used for heating with a ramp rate of about 10 K min−1. It is known that in externally heated diamond anvil cell experiments the pressure may decrease or increase with increasing temperature, likely because of deformation of the gasket. In order to avoid an undesired change in the pressure, the cell was heated to 500 K and kept there for several tens of minutes before each experimental run. After repeating this procedure, the cell kept its initial pressure after the pre-heating, suggesting that the heating was an isochoric process. Once we confirmed that the pressure stayed at the initial value, we conducted high-temperature experiments with Raman measurements. We held each high-temperature condition for 5 min during the Raman measurements. The input power was 50 W at maximum for generating high temperatures up to 1023 K. We subtracted the baseline in the Raman spectra using the Spectra Manager software (JASCO Corporation). The temperatures were measured by a thermocouple attached to the surface of the diamond anvil. Type-K and W3%Re-W25%Re thermocouples were used for the experiments on 6H and 4H, respectively. The pressure was determined using the Raman spectra of the diamond anvil at 300 K [26]. To release the stress of the sample chamber, we conducted a pre-heating procedure below 473 K before the experiments. Once the pre-heating procedure was done, the sample pressure was the same before and after the heating experiments. The SiC samples were heated in the externally heated diamond anvil cell using a cylindrical graphite heater up to 1023 K at high pressure. No damage to the heater or the thermocouple was observed after the heating experiments. The thermal pressure was estimated based on the method proposed by a previous study under the assumption of isochoric heating [27]. We used the thermoelastic parameters of SiC, K = 243 GPa, K' = 2.79, dK/dT = −0.005 GPa K−1, and α = 10−5 K−1, for the estimation of the thermal pressure [12]. Raman spectroscopy measurements were conducted with a confocal laser Raman spectrometer system (NRS-4500, JASCO) composed of a red-enhanced CCD detector (DR324B, Andor)
and a Czerny-Turner-type spectrometer (f = 200 mm) with an 1800 grooves/mm grating. High spatial resolution along the laser axis (∼1.5 μm) was achieved by the confocal microscope system used in the NRS-4500. This is important for eliminating thermal emission from the background. The laser was a TEM00 single-mode 532 nm laser with an output of 50 mW. The exposure time was 30-180 s. The laser spot was focused on the sample surface to about 1 μm in diameter. A Si standard was used for calibration before and after the experiments. After the heating experiments, we increased the pressure and then repeated the heating experiments. We conducted heating experiments at 15, 30, 45, and 60 GPa on 4H and at 25 and 67 GPa on 6H (table S2). We conducted two separate runs, including six heatings in total, for the externally heated diamond anvil cell experiments. The annealing experiment was conducted with a laser-heated diamond anvil cell. The single-crystal 6H-SiC was put into a hole made in the rhenium gasket with NaCl as a pressure medium. The culets of the diamond anvils were 300 μm in diameter. After compression, the sample was heated from a single side using a CW fiber laser (TruFiber 200 P Compact, Trumpf). The flash heating experiment was conducted with a laser spot diameter > 100 μm to cover the whole sample. We determined the temperature by a spectro-radiometric method using an EMCCD camera (ProEM HS: 512BX3, Princeton Instruments) attached to a spectrometer (IsoPlane-160, Princeton Instruments). The pressure was determined using the Raman spectra of the diamond anvil at 300 K [26]. The pressure was 16 GPa before and after the experiments. Raman spectroscopy measurements were conducted after the annealing experiments. We conducted single-crystal XRD measurements on the recovered sample of ∼100 × 60 × 10 μm³. We used a micro-focused x-ray diffractometer (D8 Venture, Bruker) with a Mo target, set to 50 kV and 1.4 mA, for the single-crystal XRD measurements. The software package APEX4 (Bruker) was used for data processing after the measurement, and SHELXL was used for the crystal structure refinement [28]. During the analysis, the initial crystal structure for the refinement, P63mc, was set to the one reported by the previous study of 6H-SiC [29].
Multi-anvil experiments We used reagent-grade SiC powder as the starting material for the multi-anvil experiments. Powder XRD measurements showed that the sample was mostly 6H-type SiC with a minor amount of 4H-type SiC (∼5%). One in situ high-pressure and high-temperature experiment at 10 GPa up to 2200 K was conducted using the 15-MN Kawai-type multi-anvil press with the Osugi guide block system [30], SPEED-Mk.II [31], at the BL04B1 beamline of the synchrotron radiation facility SPring-8, Japan. TF05 WC anvils with a truncation of 3.0 mm were used for the experiment in combination with an 8-mm Cr-doped MgO pressure medium. The cell assembly used in this study was designed with essentially the same concept as that reported by [32]. A LaCrO3 cylindrical heater was set in the direction parallel to the incident x-rays. The starting sample was packed in a graphite cylindrical capsule with graphite lids at both ends. The cylindrical capsule was placed in the same direction as the cylindrical heater. A 25-μm-thick Re cylindrical foil was placed outside the capsule. Electrical insulation between the heater and the Re foil was provided by a MgO sleeve. A ZrO2 thermal insulator was located outside the heater. Ta foils electrically linked both ends of the heater to the second-stage anvil tops. MgO rods were placed at both ends of the sample capsule. Temperatures were monitored using a W97Re3-W75Re25 thermocouple at the surface of the Re foil, which was inserted into the heater normal to the x-ray incidence with an electrical insulator of alumina tubes. The temperature variation in the sample was estimated to be less than 5 K [33][34][35]. In situ energy-dispersive x-ray diffraction was performed using white x-rays, which were collimated to 50 μm horizontally and 400 μm vertically using two variable incident slits. Diffracted x-rays were collected at a 2θ angle of ∼6° for 150-300 s for the MgO pressure marker outside the Re foil and for the sample, using a germanium solid-state detector (SSD) in an energy range of up to 130 keV. The channel-energy calibration of the SSD was made using the energies of the x-ray emission line (Kα) of 55Fe and the γ radiation from 57Co and 133Ba. The press was oscillated around the vertical axis between 0° and 6° during the x-ray diffraction measurements to suppress the heterogeneity of the intensities of the diffracted peaks [31]. Sample pressures were obtained from the volumes of MgO using the MgO equation of state proposed by Tange et al [36], based on the third-order Birch-Murnaghan equation of state. The sample was first compressed to a press load of 2 MN (∼10 GPa) and heated to 2200 K at a rate of 100 K min−1. During heating, we measured x-ray diffraction patterns of MgO and the sample. At 1400 K, the sample pressure had decreased to ∼9 GPa, and we therefore compressed to 2.5 MN to keep the sample pressure at 10 GPa. At 2000 K, we kept the temperature for 30 min and took XRD patterns of the pressure marker and the sample. We then further heated to 2200 K. After 1 min at 2200 K, the heater became electrically unstable, and we therefore quenched the sample by switching off the applied electrical power. The sample was then decompressed to room pressure over 1 h.
Raman measurements We obtained the Raman spectra of 4H-SiC and 6H-SiC at high-pressure conditions up to 69 GPa (figure 1). Although we did not use a pressure medium in the externally heated diamond anvil cell, the full widths at half maximum (FWHM) of the Raman peaks were relatively small at high pressure (figure 1). The observed Raman peaks were slightly asymmetric, possibly due to Fano asymmetry [37,38]. Although an asymmetry analysis would provide further information on SiC, it is beyond the scope of this work. Figure 2 shows the Raman wavenumbers against pressure at ambient temperature. The frequencies of the transverse optic (TO), longitudinal optic (LO), transverse acoustic (TA), and longitudinal acoustic (LA) modes are in good agreement with previous studies at ambient temperature [17,20,39]. The Raman spectra of the SiC polytypes are understood in terms of a large zone that extends in the axial direction to Nπ/c, where N is the number of layers in the polytype stacking sequence and c is the axial dimension of the unit cell; N is 6 and 4 in the 6H and 4H polytypes, respectively. Since 2π/c is a reciprocal lattice vector, the reduced wave vectors x = q/q_max = 1/3, 2/3, 1 for 6H and x = 1/2, 1 for 4H are folded back into the Brillouin zone. Thus, the modes accessible to Raman scattering measurements are x = 1/2 and 1 for 4H, and x = 1/3, 2/3, and 1 for 6H [40,41]. We observed several acoustic modes in addition to TO and LO (figure 1). The observed acoustic modes were TA (x = 1/3 and 1) in 6H, and TA (x = 1/2 and 1) and LA (x = 1) in 4H. The irreducible representations of the phonon modes are A1 for LO, E1 and E2 for TO, A1 for LA, E1 for TA (x = 1), and E2 for TA (x = 1/2) and TA (x = 1/3) [42,43]. We fitted the wavenumbers at high pressure and room temperature by the polynomial equation

ω_i(P) = ω_i,0 + A·P + B·P²,

where ω_i is the wavenumber of mode i = TO, LO, TA, or LA, ω_i,0 is its ambient-pressure value, A and B are fitting parameters, and P is the pressure in GPa. The fitting parameters are shown in table 1. The TO, LO, and LA modes increase with increasing pressure, in contrast to the TA modes. The pressure dependence of the frequencies is consistent with previous studies [17,20], while the TA and LA modes were determined at high-pressure conditions for the first time in this study. In particular, very small or negative pressure dependences of the TA modes have been noticed in 6H-SiC and 3C-SiC [42,44]. Since the pressure dependences of the wavenumbers are almost linear, we additionally fitted the data by a linear function; we fitted all data, the low-pressure data, and the high-pressure data separately (Table S3 and figure S3).
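As an illustration of this fitting step, the quadratic form above can be fitted with a few lines of Python; the (P, ω) values below are placeholders, not the measured data of this study.

```python
import numpy as np

# Placeholder (pressure, wavenumber) data for one mode; substitute the
# measured values for the TO, LO, TA, or LA branch of interest.
P = np.array([0.0, 15.0, 30.0, 45.0, 60.0])        # GPa
w = np.array([777.0, 840.0, 896.0, 946.0, 990.0])  # cm-1, illustrative

# Quadratic fit w(P) = w0 + A*P + B*P^2. np.polyfit returns the
# coefficients highest power first, i.e. [B, A, w0].
B, A, w0 = np.polyfit(P, w, 2)
print(f"w0 = {w0:.1f} cm-1, A = {A:.3f} cm-1/GPa, B = {B:.5f} cm-1/GPa^2")
```

Restricting the same call to a first-order polynomial (np.polyfit(P, w, 1)) reproduces the separate linear fits mentioned above.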
During the experiments at high temperatures, the frequencies of the optic Raman modes were observed to decrease with increasing temperature (figure 3). The signal-to-noise ratios are sufficiently high for determining the effect of temperature on the vibrational frequencies at high pressure (figure 1). Figure 4 shows the difference of the high-temperature wavenumbers with respect to ω_LO,P,300, the LO wavenumber at pressure P and 300 K. It is clearly shown that the temperature dependence of the wavenumbers is almost the same at all pressure conditions (figure 4). The temperature shifts of the wavenumbers, Δω_i(T), are fitted by the equations with the Bose-Einstein distribution function n(ω, T) [43,45]:

Δω_i(T) = C[1 + 2n(ω_i,0,0/2, T)] + D[1 + 3n(ω_i,0,0/3, T) + 3n(ω_i,0,0/3, T)²],
n(ω, T) = 1/[exp(hω/k_B T) − 1],

where ω_i,0,0 is the wavenumber at 1 atm and 0 K, T is the temperature in K, h is the Planck constant, and k_B is the Boltzmann constant. The results obtained at high pressure are generally in good agreement with the previous study at 1 atm [43]. Our new high-pressure data showed that the temperature parameters C and D were insensitive to the pressure (figure 4). The values of C, D, and ω_i,0,0 for 4H were −0.34, −7.7, and 974.8 cm−1 (R² = 0.98) for LO, and 4.7, −5.5, and 779.9 cm−1 (R² = 0.99) for TO. The values for 6H were 52, −13, and 937.0 cm−1 (R² = 0.95) for LO, and 29, −7.0, and 770.8 cm−1 (R² = 0.98) for TO. A recent study has shown that the FWHM of Raman peaks contains information on the temperature [38]; since this can provide a good temperature measure by Raman spectroscopy, an asymmetry analysis should be done in future studies. During heating at pressures higher than ∼30 GPa, broad Raman peaks were observed at around 600 cm−1 in addition to the initially observed peaks (figure 1). These may be second-order LA peaks, which appear around 500-700 cm−1 in 3C [46], 4H [47], and 6H [42]. Since the wavenumbers of LO and TO are clearly different from the values of 3C [18], the 6H and 4H polytypes persist in their original crystal structure up to high temperatures of 773 K and 1023 K.
Figure 2: The relationships between pressure and the Raman frequency at 300 K in (a) 4H-SiC and (b) 6H-SiC. All symbols are results obtained by this study. Black and red solid lines: fitted curves (this study); black and red dotted lines: previous studies of 4H-SiC [17] and 6H-SiC [20]. Blue broken lines: 3C-SiC for comparison [18]. The lines are drawn within the experimental conditions.
Table 1: Pressure dependences of the Raman wavenumbers of SiC.
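The temperature fit above can be reproduced numerically. The sketch below assumes wavenumbers in cm−1, so the Bose-Einstein factor is evaluated with the second radiation constant hc/k_B ≈ 1.4388 cm·K; the (T, Δω) data are placeholders, not the measured shifts.

```python
import numpy as np
from scipy.optimize import curve_fit

HC_OVER_KB = 1.4388  # cm*K, second radiation constant h*c/k_B

def n_be(w_cm, T):
    """Bose-Einstein occupation of a mode with wavenumber w_cm at T (K)."""
    return 1.0 / np.expm1(HC_OVER_KB * w_cm / T)

def delta_w(T, C, D, w0):
    """Temperature shift with two- and three-phonon decay terms."""
    n2 = n_be(w0 / 2.0, T)
    n3 = n_be(w0 / 3.0, T)
    return C * (1.0 + 2.0 * n2) + D * (1.0 + 3.0 * n3 + 3.0 * n3 ** 2)

# Placeholder (T, shift) data for one mode at a fixed pressure.
T = np.array([300.0, 473.0, 673.0, 873.0, 1023.0])   # K
dw = np.array([-3.2, -8.5, -15.9, -23.8, -30.1])      # cm-1, illustrative

(C, D, w0), _ = curve_fit(delta_w, T, dw, p0=(-0.3, -8.0, 975.0))
print(f"C = {C:.2f}, D = {D:.2f}, w0 = {w0:.1f} cm-1")
```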
Annealing experiments The results of the multi-anvil experiments showed that the 6H polytype mostly keeps its original structure up to 2000 K at 10 GPa for 30 min (figure S4). No additional phases, such as silicon or diamond, were observed during heating. The obtained unit cell parameters of 6H-SiC were a = b = 3.065(1) Å, c = 15.00(3) Å, and V = 122.0(7) Å³ at 10 GPa and 2000 K. The flash annealing experiment in the laser-heated diamond anvil cell also showed that the 6H polytype remains untransformed after heating at 16 GPa. Optical microscope images of the samples are shown in figure S5. The temperatures were 2400(300) K and 2200(100) K on the laser-irradiated side and the opposite side, respectively; the errors are the standard deviation of the temperature within the sample. The pressure was estimated to be 22 GPa at 2300 K after the thermal pressure correction [27]. The heating duration was ∼150 milliseconds according to the continuous spectro-radiometric measurements. Raman measurements showed that no phases other than 6H-SiC existed after the experiments. We conducted single-crystal XRD measurements on the starting material and on the recovered sample after the laser-heated diamond anvil cell experiments (Tables S4 and S5). We obtained 829 reflections, including 139 independent reflections, for the sample after heating. TO is always the most intense mode and is sensitive to pressure; thus, it is the most suitable mode as a pressure marker in the diamond anvil cell. The LO-TO peak splitting indicates the change of the covalent bonding in SiC [20]. According to a previous study, the covalent character decreases with increasing pressure up to 70 GPa [20]. The current results show a similar trend to previous studies for 6H and 4H (figure 5) [17,20]. Figure 5(a) summarizes the LO-TO splitting at high-pressure conditions and 300 K; 4H and 6H show a similar trend of the LO-TO splitting against pressure (figure 5(a)). Our new high-temperature data show that the LO-TO splitting is less sensitive to temperature than to pressure (figure 5(b)): the splitting keeps an almost constant value or weakly decreases with temperature. The scattered data in 6H-SiC may result from uncertainty in the pressure determination and from undesired uniaxial stress [48], which are unfortunately not straightforward to investigate with the experimental setups of this study and the previous study [20]. The transverse effective charge was calculated following the previous literature [17,20,39]:

ω_LO² − ω_TO² = 4π(e_T*)²/(m V_a ε_∞),

where e_T* is the transverse effective charge, ε_∞ is the dielectric constant at high frequency, m is the reduced mass, and V_a is the volume per atom. ε_∞ = 6.52 was used for the value at ambient pressure, and its logarithmic pressure dependence r = d ln ε_∞/d ln a was varied from 0 and 0.3 to 0.6 and 1 [17] (figure 6). The obtained results clearly show that pressure plays an important role in the effective charge of SiC. However, it is also obvious that the dielectric constant should be determined at high pressure for a better understanding of the ionic/covalent nature of SiC. On the other hand, the current results suggest that the covalency of the bonding is not altered at high temperatures, at least up to 1023 K, in contrast to the effect of pressure [20].
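Because the relation above only fixes (e_T*)² up to unit conventions, the safest numerical use is to track the effective charge relative to its ambient-pressure value, for which the reduced mass and all unit prefactors cancel. The following is a minimal sketch under that assumption; all input arrays are placeholders, and ε_∞(P) is modeled through the parameter r exactly as in the text.

```python
import numpy as np

def effective_charge_ratio(w_lo, w_to, v_a, eps_inf):
    """e_T*(P) / e_T*(0), from (e_T*)^2 proportional to
    (w_LO^2 - w_TO^2) * eps_inf * V_a.

    The reduced mass and the 4*pi prefactor cancel in the ratio, so no
    unit conversion (CGS vs SI) is needed.
    """
    q2 = (w_lo ** 2 - w_to ** 2) * eps_inf * v_a
    return np.sqrt(q2 / q2[0])

# Placeholder values along a compression path (index 0 = ambient).
w_lo = np.array([970.0, 1030.0, 1085.0])  # cm-1, illustrative
w_to = np.array([797.0, 862.0, 921.0])    # cm-1, illustrative
v_a = np.array([10.4, 9.6, 9.0])          # A^3 per atom, illustrative
a_ratio = (v_a / v_a[0]) ** (1.0 / 3.0)   # linear compression a/a0

for r in (0.0, 0.3, 0.6, 1.0):            # r = d ln(eps_inf) / d ln(a)
    eps_inf = 6.52 * a_ratio ** r
    print(r, effective_charge_ratio(w_lo, w_to, v_a, eps_inf))
```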
6H was predicted to be unstable above ∼100 GPa based on the anomalous decrease of the mode Grüneisen coefficient γ_i of the LO and TO phonons compared with other tetrahedrally coordinated semiconductors, such as diamond and cBN [20]. The mode Grüneisen coefficient for the discussion above was defined as

γ_i = −∂ ln ω_i/∂ ln V,

where V is the volume [20]. We used the equations of state of 6H [9] and 4H [17] to calculate γ_i. The obtained values are summarized in table 2. The obtained slopes of the γ_i values of 4H and 6H as a function of density are much shallower than in the previous study (figure S6), suggesting that the LO and TO modes are harder under high pressure than previously estimated. The current results show that 4H and 6H are similar to diamond and cBN, with constant γ_i values at high-pressure conditions. The constant γ_i value suggests that the experimental pressure was not sufficiently high for the transition. This is supported by the experimental results showing no transition of the crystal structure at high pressure and temperature, as shown by this study and a previous study [12]. We should note here that the γ_i values may have large uncertainties because they are second-order derivatives of the experimental data, as already described in the previous studies [17,20]. The obtained value for the TA mode (x = 1) does not change much at high pressures and high temperatures up to 69 GPa and 773 K (table 2, figures 7 and S7). The small negative γ_i value of the TA mode (x = 1) suggests that the transitions to other polytypes are sluggish, because the TA mode (x = 1) corresponds to the intercell vibration of the atoms [42]. The insensitivity of the γ_i value of TA to pressure and temperature is an explanation for the inert transition between polytypes in high-pressure and -temperature experiments even above 2000 K [10,12].
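Numerically, γ_i can be obtained by combining the pressure fit of ω_i with an equation of state for V(P). The sketch below uses a Murnaghan form for brevity, whereas the study itself uses Birch-Murnaghan equations of state; the mode and EOS parameters are illustrative placeholders.

```python
import numpy as np

def gamma_mode(P, w0, A, B, K0, Kp):
    """Mode Grüneisen coefficient gamma_i = -dln(w)/dln(V) at pressure P.

    w(P) = w0 + A*P + B*P^2 is the quadratic Raman fit; the Murnaghan
    EOS gives dln(V)/dP = -1/(K0 + Kp*P), so
    gamma_i = (K0 + Kp*P) * (dw/dP) / w.
    """
    w = w0 + A * P + B * P ** 2
    dw_dP = A + 2.0 * B * P
    return (K0 + Kp * P) * dw_dP / w

# Illustrative numbers for a TO-like mode of 4H-SiC.
P = np.linspace(0.0, 60.0, 7)
print(gamma_mode(P, w0=777.0, A=4.3, B=-0.02, K0=213.61, Kp=3.8))
```

A TA-type mode with a small negative A gives a small negative γ_i in the same way, which is the signature discussed above.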
Temperature dependence of the Raman frequency We obtained the Raman spectra at high temperatures up to 1023 K at high pressure. As temperature increases, the wavenumbers of LO and TO decrease, and the temperature shifts are almost the same at all pressure conditions (figure 4). The peak shifts are more significant at higher temperatures in both 4H and 6H. The model with the Bose-Einstein distribution function fits the temperature dependences of the frequencies in this study well (figure 4). The TO mode shows a peak shift of ∼−10 cm−1 per 100 K. This value is much larger than for zircon and quartz, which are widely used as pressure and temperature markers in diamond anvil cell experiments [49,50]. Therefore, the TO mode may be useful as a pressure and temperature marker in the diamond anvil cell at pressures higher than several tens of GPa. Although the peak shift against pressure is smaller than that of zircon and quartz [49,50], SiC has the great advantage of a wide applicable pressure and temperature range. Moreover, SiC may be stable under very reducing conditions compared with oxides. SiC may thus be a good pressure and temperature marker in diamond anvil cell experiments, in addition to 13C-enriched diamond and cBN [51][52][53]. The anharmonic mode parameter a_i is used to estimate the anharmonicity of the modes [54,55]:

a_i = α(γ_iT − γ_iP), with γ_iT = K_T(∂ ln ω_i/∂P)_T and γ_iP = −(1/α)(∂ ln ω_i/∂T)_P,

where α is the thermal expansion coefficient and K_T is the bulk modulus. We used a bulk modulus of 213.61 GPa for 4H [17] and 260 GPa for 6H [9]. Here, α is assumed to be 1 × 10−5 K−1 for both 4H and 6H. The a_i parameters are summarized in table 3. In the harmonic and quasi-harmonic approximations, γ_iT = γ_iP and thus a_i = 0 [54,55]. Deviation of the a_i parameters from zero indicates that the quasi-harmonic approximation is inappropriate for the modes. The obtained a_i parameters of SiC are comparable with those of silicates, such as α-quartz and coesite, reported previously [55]. The current results show an apparent deviation of the parameters from zero for the LO, TO, and LA modes compared with TA, suggesting that the anharmonic behavior is stronger in LO, TO, and LA. The harmonic and quasi-harmonic approximations may not correctly predict the thermoelastic parameters of SiC under planetary interior conditions. The modification of the modes in SiC may help in understanding the nature of the group IV semiconductors [19,56]. Stability of the SiC polytypes under high pressure and temperature The obtained TA (x = 1) mode, corresponding to the intercell vibration, was insensitive to pressure and temperature in both 4H and 6H (figures 7 and S7). This suggests that the transition of the crystal structure of SiC may require higher pressures and temperatures than the experimental conditions [42]. The obtained γ_i values also indicate a strong kinetic barrier hindering the phase transition of the SiC polytypes. We conducted annealing experiments using the multi-anvil press and the laser-heated diamond anvil cell to further test the inertness of the transition. It is known that the multi-anvil press can generate a stable and homogeneous temperature field compared with laser-heated diamond anvil cell experiments. The multi-anvil press experiment showed that 6H-SiC mostly retains its crystal structure at 10 GPa and 2000 K for 30 min. The result is consistent with earlier quench experiments using a large-volume press (Whitney et al 1969). The laser-heated diamond anvil cell experiments showed that the crystal structure of 6H-SiC was very similar to the original structure even after being annealed under a relatively large temperature gradient (Tables S4 and S5). Our results are consistent with a previous study using laser-heated diamond anvil cells [12], while they contradict other studies [10,11]. The discrepancy among the diamond anvil cell studies may be due to chemical heterogeneity caused by the laser heating method [13], in addition to the kinetics of the transition between polytypes having similar free energies [57].
The sluggish phase transition is commonly observed in highly covalent materials, such as diamond and related phases [58]. By analogy with diamond, the transition of SiC is controlled by various experimental conditions, such as the heating duration, heterogeneity in the temperature, stress, and the crystallinity of the starting materials. Although we conducted single-sided laser heating experiments, which enhance the heterogeneity of the temperature, we did not observe the transition. This result indicates that the temperature gradient may not be a dominant factor, at least up to several hundred K per 10 micrometers. In addition to the temperature heterogeneity, crystal disorder, such as dislocations and stacking faults, can also be an important factor in the transition, although previous studies did not pay much attention to the crystallinity of the starting materials. We used a single crystal with a very low dislocation density (micro-pipe density of < 15 cm−2) as a starting material here. The density of dislocations or stacking faults may increase due to deformation under the uniaxial compression in the diamond anvil cell, in contrast to the multi-axial compression in the multi-anvil press. Accordingly, the crystal disorder may be the dominant factor in the transition of SiC in high-pressure experiments. A reference structure with a = b = 3.0810(2) Å, c = 15.1248(10) Å, and V = 124.338(14) Å³ was taken from Capitani et al [28]. We obtained almost the same crystal structure for the starting material, a = b = 3.08060(10) Å, c = 15.1151(9) Å, and V = 124.226(11) Å³, by a single-crystal XRD measurement at ambient conditions. The refinement analysis of the recovered sample gave unit cell parameters of a = b = 3.0782(11) Å, c = 15.102(8) Å, and V = 123.92(11) Å³ at ambient conditions, with an R-value of 3.7%. The cell parameters and atomic positions were similar to the initial crystal structure, while the c-axis and the volume of the unit cell were slightly smaller than the initial values [29]. Conclusions We conducted Raman spectroscopy measurements on 4H-SiC and 6H-SiC up to 69 GPa and 1023 K. Both optic and acoustic modes were observed over wide pressure and temperature ranges. The pressure and temperature shifts of the Raman frequencies were fitted using the polynomial and the Bose-Einstein distribution functions, respectively. SiC may be a good pressure and temperature marker in diamond anvil cell experiments. The mode Grüneisen coefficients were insensitive to pressure and temperature, suggesting the inert structural transition in 4H-SiC and 6H-SiC at high pressure and temperature. The results of the annealing experiments also supported the inert transitions.
Pressure dependence of the Raman frequency: the pressure dependence of the Raman frequency provides insight into the bonding of Si and C at high-pressure conditions; moreover, it can serve as a pressure standard during high-pressure experiments. Peaks of the optic modes (LO and TO) are much stronger than those of the acoustic modes and are sensitive to pressure (figures 1 and 2).
Figure 3: The temperature dependences of the Raman frequency in (a) 4H-SiC at 60-63 GPa and (b) 6H-SiC at 67-69 GPa. Solid curves are fitted results by the equations with the Bose-Einstein distribution function (see text).
Figure 4: The effect of temperature on the Raman frequency of (a) 4H-SiC at 15-63 GPa and (b) 6H-SiC at 25-69 GPa. The wavenumbers were normalized to the value of LO at 300 K at the given pressure, ω_LO,P,300. The lines are fitted curves by the equations with the Bose-Einstein distribution function (solid).
Figure 5: (a) The LO-TO splitting in 4H-SiC (black) and 6H-SiC (red) at high-pressure conditions and 300 K (this study). The solid line is taken from the previous study of 6H-SiC [20]. The typical error was estimated from the FWHM of the literature [20]. (b) The LO-TO splitting in 4H-SiC (open symbols) and 6H-SiC (filled symbols) at high pressure and temperature conditions.
Figure 6: Transverse effective charge of (a) 4H-SiC and (b) 6H-SiC at high pressure and 300 K. Symbols are data obtained by this study, and lines are taken from the literature [17,20,39]. The data with r = 0.6 (see text) are highlighted for comparison with the previous data.
Figure 7: The mode Grüneisen coefficient γ_i of the LO (solid lines), TO (broken lines), and TA (dotted and dash-dot lines) modes of (a) 4H-SiC and (b) 6H-SiC at high pressure and 300 K.
Table 3: Mode Grüneisen coefficients (γ_iT and γ_iP) and anharmonic parameters of SiC.
7,907.2
2024-06-04T00:00:00.000
[ "Materials Science", "Physics" ]
Quasiparticle interfacial level alignment of highly hybridized frontier levels: H$_2$O on TiO$_2$(110) Knowledge of the frontier levels' alignment prior to photo-irradiation is necessary to achieve a complete quantitative description of H$_2$O photocatalysis on TiO$_2$(110). Although H$_2$O on rutile TiO$_2$(110) has been thoroughly studied both experimentally and theoretically, a quantitative value for the energy of the highest H$_2$O occupied levels is still lacking. For experiment, this is due to the H$_2$O levels being obscured by hybridization with TiO$_2$(110) levels in the difference spectra obtained via ultraviolet photoemission spectroscopy (UPS). For theory, this is due to inherent difficulties in properly describing many-body effects at the H$_2$O-TiO$_2$(110) interface. Using the projected density of states (DOS) from state-of-the-art quasiparticle (QP) $G_0W_0$, we disentangle the adsorbate and surface contributions to the complex UPS spectra of H$_2$O on TiO$_2$(110). We perform this separation as a function of H$_2$O coverage and dissociation on stoichiometric and reduced surfaces. Due to hybridization with the TiO$_2$(110) surface, the H$_2$O 3a$_1$ and 1b$_1$ levels are broadened into several peaks between 5 and 1 eV below the TiO$_2$(110) valence band maximum (VBM). These peaks have both intermolecular and interfacial bonding and antibonding character. We find the highest occupied levels of H$_2$O adsorbed intact and dissociated on stoichiometric TiO$_2$(110) are 1.1 and 0.9 eV below the VBM. We also find a similar energy of 1.1 eV for the highest occupied levels of H$_2$O when adsorbed dissociatively on a bridging O vacancy of the reduced surface. In both cases, these energies are significantly higher (by 0.6 to 2.6 eV) than those estimated from UPS difference spectra, which are inconclusive in this energy region. Finally, we apply self-consistent QP$GW$ (scQP$GW$1) to obtain the ionization potential of the H$_2$O-TiO$_2$(110) interface. INTRODUCTION The photooxidation activity of a surface is determined by the interfacial level alignment between the occupied adsorbate levels and those of the substrate [1,2]. Water photooxidation on TiO2 has attracted enormous attention [3-10] for energy applications [11,12] based on H2 production [13]. This reaction also plays an important role in photocatalytic environmental remediation and surface self-cleaning/sterilizing [1,2,14]. This is because the resulting hydroxyl radicals are the key intermediates in the oxidative degradation of organic species [15,16]. To understand water photooxidation, it is necessary to understand the interfacial level alignment between the occupied levels of H2O and the TiO2 substrate [17].
17 Experimentally, the most common approach to access the adsorbate levels is to take the difference between the covered and clean surface spectra from photoemission spectroscopy. However, when the adsorbate and surface levels are strongly hybridized, it becomes difficult to disentangle the adsorbate and surface contributions to the UPS spectra using only the difference spectra. 18 For example, shifting of the surface levels due to hybridization or band bending may completely obscure the adsorbate levels. 18 Further, the adsorbate levels near the valence band maximum (VBM) are the most likely to be obscured. It is precisely these levels that are most important for photooxidation processes. Using a theoretical approach, one can directly disentangle the molecular levels by projecting the density of states (DOS) of the interface onto the atomic orbitals of the molecule. Altogether, this makes a robust theoretical approach necessary to accurately predict the alignment of the adsorbate and substrate levels, and separate the adsorbate and surface spectra. A robust theoretical treatment requires quasiparticle (QP) G 0 W 0 to capture the anisotropic screening of the electron-electron interaction at the interface. [19][20][21] As previously demonstrated for CH 3 OH on TiO 2 (110), QP G 0 W 0 is necessary to obtain even a qualitative description of the level alignment. [22][23][24] For this interface, the occupied levels of the molecule are only weakly hybridized with the surface levels. This allowed an unambiguous comparison to the photoemission difference spectrum. 22 However, for H 2 O on rutile TiO 2 (110), this is not the case. The occupied molecular levels of H 2 O on single crystal rutile TiO 2 (110) have been probed via ultraviolet photoemission spectroscopy (UPS) 18,25,26 and metastable impact electron spectroscopy (MIES). 26 These experiments were performed under ultrahigh vacuum (UHV) conditions from low to room temperature, 25 from 0.01 to 100 L H 2 O exposure, 18 and for various surface preparations resulting in either reduced TiO 2−x (110) with surface oxygen defects or "nearly-perfect" TiO 2 (110). 18 Altogether, these experiments have addressed the long-standing controversy as to where and how H 2 O adsorbs and dissociates on TiO 2 (110). [27][28][29][30][31][32][33][34][35][36] At 150 K the photoemission difference spectrum between H 2 O covered and clean TiO 2 (110) surfaces consists of three peaks, which are attributed to intact H 2 O adsorbed on Ti coordinately unsaturated sites (Ticus). 25 Upon heating to 300 K, the difference spectrum's three-peak structure evolves into a two-peak structure, which is attributed to dissociated H 2 O adsorbed on bridging O vacancies (O vac br ), i.e., O br H surface species. 25 This assignment of the UPS spectra to intact (I) H 2 O@Ticus or dissociated (D) H 2 O@O vac br is based on the peak energy separations being consistent with those reported for H 2 O 37 in gas phase or OH − in NaOH. 38 A comparison to the H 2 O and OH − peaks is robust for the molecular levels that lie below and have little hybridization with the surface DOS. However, the adsorbate levels that lie within the surface valence band may significantly hybridize with the surface, with a single molecular level contributing to many interfacial levels. These interfacial levels are thus not easily associated with H 2 O and OH − levels. This is exacerbated by the mixing of the molecular levels due to symmetry breaking at the interface. 
As a result, "between 5 and 8 eV" below the Fermi level, experimentally they "are unable to produce reliable difference structures" from the UPS spectra obtained for "nearly-perfect" TiO 2 (110) exposed to H 2 O at 160 K. 18 Using the QP G 0 W 0 H 2 O projected DOS (PDOS), we have disentangled the adsorbate and surface contributions to the UPS spectra within this difficult energy range. This has been done as a function of H 2 O coverage and dissociation on stoichiometric and reduced surfaces. In so doing, we provide quantitative values for the energies of the highest H 2 O occupied levels, prior to photoirradiation, for a number of experimentally relevant 3,5-7,15 H 2 O-TiO 2 (110) structures. To directly compare to redox potentials, the important quantities for determining photoelectrocatalytic activity, one needs the alignment relative to the vacuum level, Evac. 39,40 With this, one obtains the ionization potential directly from −ε PDOS peak + Evac. To obtain a more accurate absolute level alignment, we employ our recently introduced self-consistent QP GW 41-43 technique scQPGW1. 22 The presentation of the results is organized as follows. First, we focus on the H 2 O levels that lie below and have little hybridization with the substrate DOS. This is done for intact H 2 O@Ticus in Section 3.1 and dissociated H 2 O@O vac br in Section 3.2. Further, in Section 3.3, we show that these results are rather independent of the choice of xc-functional. In so doing, we provide evidence for a robust semi-quantitative agreement with the UPS difference spectra for the adsorbate levels for which an unambiguous comparison with the experiment is possible. For a more complete understanding of the UPS experiments, in Section 3.4 we analyze the H 2 O PDOS for a variety of other H 2 O structures on the stoichiometric and reduced surfaces. These may form under different experimental conditions and surface preparations. In Section 3.5 we focus on the highest H 2 O occupied levels, which are significantly hybridized with the substrate DOS. The success of the QP G 0 W 0 PDOS strategy for the lower-energy part of the UPS difference spectra provides support for our results in this difficult spectral region, where a straightforward comparison with experiment is not possible. Finally, in Section 3.6, we employ scQPGW1 to obtain an improved absolute level alignment relative to Evac, and thus estimate the ionization potential of the H 2 O-TiO 2 (110) interface.
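Since the analysis repeatedly converts PDOS peak positions referenced to the VBM into ionization potentials via IP = −ε PDOS peak + Evac, the bookkeeping can be made explicit. The following is our own minimal sketch, not the authors' code; the example numbers (a peak 1.1 eV below a VBM that itself lies 7.1 eV below vacuum) are merely of the order quoted later in the text.

```python
# Minimal sketch (our illustration, not the authors' code): converting a
# PDOS peak energy referenced to the VBM into an ionization potential,
# given the VBM position relative to the vacuum level Evac.

def ionization_potential(eps_peak_vs_vbm, eps_vbm_vs_vac):
    """IP = -eps_peak + Evac, with all energies in eV.

    eps_peak_vs_vbm: peak energy relative to the VBM (negative = below VBM).
    eps_vbm_vs_vac:  VBM energy relative to the vacuum level (negative).
    """
    eps_peak_vs_vac = eps_vbm_vs_vac + eps_peak_vs_vbm  # peak vs. vacuum
    return -eps_peak_vs_vac

# Example values only: a peak 1.1 eV below a VBM lying 7.1 eV below the
# vacuum level gives an IP of 8.2 eV.
print(ionization_potential(-1.1, -7.1))
```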
METHODOLOGY Our QP G 0 W 0 calculations [44][45][46] have been performed using vasp within the projector augmented wave (PAW) scheme. 47 The G 0 W 0 calculations are based on Kohn-Sham wavefunctions and eigenenergies from density functional theory (DFT) obtained using a generalized gradient approximation (PBE) 48 for the exchange correlation (xc)-functional. 49 The dependence of the QP G 0 W 0 DOS and PDOS on the DFT xc-functional has been tested for 1 ML intact H 2 O@Ticus of stoichiometric TiO 2 (110) and 1 ⁄2ML dissociated H 2 O@O vac br of defective TiO 2− 1 ⁄4 (110) with 1 ⁄2ML of O vac br . For these structures, G 0 W 0 calculations based on the local density approximation (LDA), 50 van der Waals (vdW-DF), 51 and the range-separated hybrid (HSE) 52 xc-functionals have been carried out for comparison with the PBE based G 0 W 0 calculations. In particular, we use the HSE06 53 variant of the HSE xc-functional. In the QP G 0 W 0 approach, the contribution to the Kohn-Sham (KS) eigenvalues from the exchange and correlation (xc)-potential Vxc is replaced by the self energy Σ = iGW, where G is the Green's function and W is the screening 44 based on the KS wavefunctions. 45 The dielectric function is obtained from linear response time-dependent (TD) DFT within the random phase approximation (RPA), including local field effects. 46 From G 0 W 0 one obtains first-order QP corrections to the KS eigenvalues, but retains the KS wavefunctions. Since our aim is to compare the computed interfacial level alignment with measured UPS spectra, it is most consistent to align the QP G 0 W 0 levels with the VBM. We find Evac, i.e., the effective potential far from the surface, from G 0 W 0 is essentially the same as the Evac from DFT. In other words, the effective potential is unchanged by G 0 W 0 . To obtain a more accurate absolute QP level alignment relative to Evac, we employ a self-consistent QP GW approach. 41 In particular, by employing the scQPGW1 approach, we obtain both a QP PDOS comparable to that from QP G 0 W 0 and an improved alignment relative to Evac. 22,23 Here, 25%, 25%, and 50% of the QP self energies are "mixed" with the DFT xc-potential over three self-consistent QP GW cycles, 41 respectively. If, instead, 100% of the DFT xc-potential were replaced by the QP self energy in a single self-consistent QP GW cycle, one would exactly obtain the QP G 0 W 0 eigenvalues. However, this mixing is required to obtain a smooth convergence of both the QP wavefunctions and the absolute QP level alignment. To fully converge our self-consistent QP GW calculations (scQPGW), we perform a further eight cycles, with each introducing a further 25% of the QP self energy. The geometries have been fully relaxed using LDA, 50 PBE, 48 or vdW-DF 51 xc-functionals, with all forces ≲ 0.02 eV/Å. HSE calculations are performed for the relaxed geometries obtained with PBE. We employ a plane-wave energy cutoff of 445 eV, an electronic temperature k B T ≈ 0.2 eV with all energies extrapolated to T → 0 K, and a PAW pseudopotential for Ti which includes the 3s 2 and 3p 6 semi-core levels. All calculations have been performed spin unpolarized. For the clean stoichiometric TiO 2 (110) surface 23 we have used a four layer slab and an orthorhombic 1 × 1 unit cell of 6.497 × 2.958 × 40 Å 3 , where D ≈ 27 Å is the vacuum thickness and a and c are the experimental lattice parameters for bulk rutile TiO 2 (a = 4.5941 Å, c = 2.958 Å). 54 We have employed a Γ-centered 4 × 8 × 1 k-point mesh, and 320 bands = 9 1 ⁄3 unoccupied bands per atom, i.e., including all levels up to 26 eV above the valence band maximum (VBM). For the clean reduced TiO 2− 1 ⁄4 (110) surface we have used a monoclinic 1 × 2 unit cell of 6.497 × 5.916 × 40 Å 3 , to maximize the separation between the O vac br . For the H 2 O covered surfaces, we have employed a four layer slab with adsorbates on both sides and an orthorhombic 1 × 2 unit cell of 6.497 × 5.916 × 47 Å 3 , where D ≈ 34 Å. We employed a Γ-centered 4 × 4 × 1 k-point mesh, with approximately 9 1 ⁄6 unoccupied bands per atom, i.e., including all levels up to 30 eV above the VBM, an energy cutoff of 80 eV for the number of G-vectors, and a sampling of 80 frequency points for the dielectric function. The G 0 W 0 parameters are consistent with those previously used for describing rutile TiO 2 bulk, the TiO 2 (110) clean surface, and interfaces. 22,23
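The scQPGW1 mixing schedule described above is easy to misread, so a schematic sketch may help. The linear accumulation of the quoted percentages (25%, 25%, 50% over three cycles) is our simplifying assumption for illustration; the further eight scQPGW cycles mentioned in the text continue the refinement beyond this point.

```python
# Schematic sketch of the scQPGW1 mixing schedule described in the text:
# 25%, 25%, and 50% of the QP self energy are mixed into the DFT
# xc-potential over three self-consistent cycles. The linear accumulation
# below is our simplifying assumption for illustration.

scqpgw1_schedule = [0.25, 0.25, 0.50]

replaced = 0.0
for cycle, step in enumerate(scqpgw1_schedule, start=1):
    replaced += step
    print(f"cycle {cycle}: self-energy fraction now {replaced:.0%}")

# A single 100% step would reproduce the plain G0W0 eigenvalues; the
# gradual schedule is what allows the wavefunctions and the absolute
# level alignment to converge smoothly.
assert abs(replaced - 1.0) < 1e-12
```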
These parameters have been shown to provide accurate descriptions of bulk optical absorption spectra, and both clean surface and interfacial level alignment. 22,23 To model H 2 O in the gas phase, we employed a unit cell with C 2v symmetry and 16 Å of vacuum in each direction. At the G 0 W 0 level, we used a smaller energy cutoff of 40 eV for the number of G-vectors, which has previously been shown to provide an accurate description of the optical absorption spectra for isolated molecules. 55,56 To obtain DFT total energies and the relaxed structure of the clean reduced TiO 2− 1 ⁄8 (110) we have used a monoclinic 1 × 4 unit cell of 6.497 × 11.832 × 28 Å 3 , where D ≈ 15 Å, and employed a Γ-centered 4 × 2 × 1 k-point mesh. In this study, we have performed PBE and subsequent single-point RPBE 57 based DFT calculations for the H 2 O adsorption energies E ads on the stoichiometric and reduced surfaces. The RPBE xc-functional was especially developed for the prediction of adsorption properties on metal surfaces. 57 The H 2 O adsorption energy on the Ticus site of a stoichiometric TiO 2 (110) surface is given by the total energy gained upon adsorption, E ads = E[TiO 2 (110)] + E[H 2 O] − E[H 2 O@Ticus], per H 2 O molecule. [Figure 1 caption, in part: (e) H 2 O molecular orbitals, G 0 W 0 calculated eigenenergies marked in cyan, and experimental gas phase spectrum aligned with the 1b 1 level of (c). 37 Energies are relative to the VBM (ε VBM ). Intensity references are provided for ε > ε VBM when available.] Intact H 2 O on the Stoichiometric Surface. In Figure 1 we disentangle adsorbate and substrate contributions to the spectrum of intact H 2 O@Ticus, and compare the H 2 O PDOS to the theoretical and experimental difference DOS. Specifically, we model a monolayer (ML) of H 2 O molecules with parallel (⇉) interfacial hydrogen bonds aligned along the [001] direction (Figure 1(b)). 58,59 Note that 1ML of intact H 2 O is the most stable coverage and structure on the stoichiometric rutile TiO 2 (110) surface. 32 The theoretical difference DOS is the difference between the total DOS of the H 2 O covered (H 2 O@Ticus) and clean stoichiometric (TiO 2 (110)) surfaces, as shown schematically in Figure 1(a). Turquoise areas in the H 2 O@Ticus and difference DOS indicate regions of greater density for the H 2 O covered versus clean stoichiometric surface. The gray area indicates the DOS energy range for the clean stoichiometric TiO 2 (110) surface. Figure 1(c) and (d) show two sets of UPS difference spectra obtained either by raising the temperature (from 150 K to 190 K) for a consistent exposure to H 2 O (0.2 L) for an annealed TiO 2 (110) surface 25 (Figure 1(c)), or by increasing the H 2 O dose (from 0.01 L to 1 L) at low temperature (160 K) for a nearly perfect surface 18 (Figure 1(d)). The experimental spectra have been referenced to the VBM, which is positioned 3.2 eV below the experimental Fermi level. 23 Comparing the difference DOS to the H 2 O PDOS, we find the peaks lying outside the TiO 2 (110) DOS energy range are clearly attributable to H 2 O levels. As shown in Figure 1(b), these levels are related to the 1b 2 and 3a 1 H 2 O orbitals shown in Figure 1(e). This is not the case within the TiO 2 (110) DOS region, where the adsorbate levels are broadened by hybridization with the surface. This hybridization with the surface has been severely underestimated by previous cluster-based MP2 calculations. 60 Within the TiO 2 (110) DOS region, the peaks in the H 2 O PDOS have corresponding peaks in the difference DOS, although the relative peak intensities differ substantially between the two methods.
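The difference-DOS construction used throughout (the DOS of the covered surface minus that of the clean surface on a common energy grid) can be summarized in a few lines. The level positions and weights below are hypothetical placeholders, not the calculated data; the snippet only illustrates why levels outside the clean-surface DOS range show up unambiguously.

```python
import numpy as np

energy = np.linspace(-12.0, 2.0, 1401)  # eV, relative to the VBM

def gaussian_dos(centers, weights, sigma=0.2):
    """Broaden discrete levels into a smooth DOS on the common grid."""
    dos = np.zeros_like(energy)
    for c, w in zip(centers, weights):
        dos += w * np.exp(-0.5 * ((energy - c) / sigma) ** 2)
    return dos

# Hypothetical level positions and weights for illustration only.
dos_clean = gaussian_dos([-5.5, -4.0, -2.0, -0.5], [3.0, 4.0, 4.0, 3.0])
dos_covered = gaussian_dos([-9.4, -6.3, -5.5, -4.0, -2.0, -1.1, -0.5],
                           [1.0, 1.0, 3.0, 4.0, 4.0, 1.0, 3.0])

# Positive regions of the difference DOS (the "turquoise" areas) flag
# density present only on the covered surface.
diff_dos = dos_covered - dos_clean
adsorbate_like = energy[diff_dos > 0.5]
print(f"adsorbate-dominated energies: {adsorbate_like.min():.1f} "
      f"to {adsorbate_like.max():.1f} eV")
```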
More importantly, the difference DOS has dips centered at the surface peak positions marked in Figure 1(a). 61 These surface peaks split due to mixing with the 3a 1 and 1b 1 H 2 O orbitals. This splitting is the origin of the observed dips in the difference DOS, which are also seen experimentally in Figure 1(c) and (d). The peak at −9.4 eV in the H 2 O PDOS, which has 1b 2 molecular character, agrees semi-quantitatively with the most strongly bound experimental peaks at −9.8 eV (Figure 1(c)) and −10.0 eV (Figure 1(d)). The assignment of the peaks located within the TiO 2 (110) DOS is much more complicated. The assumption that the highest peak in the experimental spectra originates solely from the H 2 O 1b 1 level 25,26 is an oversimplification. In fact, both the 3a 1 and 1b 1 molecular levels contribute within this region (Figure 1(b)). While the levels with intermolecular 3a 1 bonding character give rise to a distinct peak below the TiO 2 (110) DOS region, those with intermolecular 3a 1 antibonding character are pushed to higher energies and mixed with the 1b 1 molecular levels (Figure 1(b)). The latter is due to symmetry breaking at the interface. Consequently, the H 2 O PDOS is broadened into several peaks between −5 and −1 eV. These levels have interfacial (3a 1 /1b 1 -O 2pσ/2pπ) bonding and antibonding character (not visible at the isosurface value used). Dissociated H 2 O on Reduced Surfaces. To see how dissociation of H 2 O@O vac br affects the spectrum, we now consider 1 ⁄2ML of H 2 O dissociated on a reduced TiO 2− 1 ⁄4 (110) surface (Figure 2). Here, we have used TiO 2− 1 ⁄4 (110) to denote a surface consisting of 1 ⁄2ML of O vac br defects. This structure corresponds to the staggered O br H surface species shown in Figure 2. The theoretical difference DOS is the difference between the total DOS of the H 2 O covered (H 2 O@O vac br ) and the clean reduced (TiO 2− 1 ⁄4 (110)) surfaces, shown schematically in Figure 2(a). Turquoise areas in the H 2 O@O vac br and difference DOS indicate regions of greater density for the H 2 O covered versus clean reduced surface. The gray area indicates the DOS energy range for the clean reduced TiO 2− 1 ⁄4 (110) surface. The O vac br defects give rise to occupied levels with Ti 3d character that are just below the conduction band minimum and outside the energy range shown. 62 [Figure 2 caption, in part: for T = 300 K after between 0.01 and 100 L exposure, 18 and (e) for T = 120 K after 0.14, 0.3, 0.4, 0.5, and 0.7 L exposure. 26 Peak positions 18,25,26 are marked in brown. Energies are relative to the VBM (ε VBM ). Intensity references are provided for ε > ε VBM when available.] The PDOS includes the H and O br atoms that make up the O br H species; in this way the PDOS is provided in terms of H 2 O formula units. The peak in the difference DOS and PDOS at −7.0 eV has O br H σ character, as shown in Figure 2(b). Note that the peak intensity in the PDOS is about half that in the difference DOS, as the PDOS includes half the O br atoms. This peak's position agrees semi-quantitatively with the experimental peaks at −7.1 eV (Figure 2). Much of the theoretical difference DOS's structure is attributable to the defect healing of O vac br , as seen from the difference DOS between TiO 2 (110) and TiO 2− 1 ⁄4 (110) in Figure 3. This suggests that the observed features in the experimental difference spectra overlap with contributions from the healing of O vac br defects. The QP H 2 O PDOS is essentially unchanged whether a generalized gradient approximation (PBE), 48 long-ranged van der Waals interactions (vdW-DF), 51 or a range-separated hybrid (HSE06) 53 is employed for the xc-functional.
This is consistent with the previously reported similarities between PBE and HSE based G 0 W 0 PDOS for CH 3 OH on TiO 2 (110). 23 Figure 5(d,e) shows that scQPGW1 provides a similar H 2 O PDOS level alignment to G 0 W 0 . This is consistent with what was previously reported for the CH 3 OH-TiO 2 (110) interface. 22,23 We clearly see that the differences between the DFT and G 0 W 0 PDOS, i.e., the QP energy shifts, are far from simply being rigid. For instance, we find for PBE that the QP energy shifts for the levels that contribute to the highest-energy PDOS peak ε PDOS peak are almost negligible (cf. Figures 4(d,e) and 5(b,e)). As a result, the QP G 0 W 0 ε PDOS peak is only ∼ 0.1 eV lower compared to DFT. On the other hand, we find significant QP shifts to stronger binding for the levels that contribute to the most strongly bound PDOS peak with 1b 2 σ molecular character. For example, with PBE the QP G 0 W 0 lowest energy peak is shifted by ∼ −1.7 eV compared to DFT (cf. Figures 4(d,e) and 5(b,e)). As previously shown for the CH 3 OH-TiO 2 (110) interface, these differences in the shifts of the peaks are directly related to differences in the spatial distribution of the wave functions for the levels contributing to the peaks. [22][23][24] This is because the QP G 0 W 0 corrections to the DFT eigenenergies for interfaces are directly correlated with the spatial distribution of the wave functions. [22][23][24] The negligible shift of the DFT highest-energy PDOS peak (Figures 4(b,d,f) and 5(b,c)) is due to its strong hybridization with the surface, i.e., weight on TiO 2 (110), for the levels contributing to this peak. [22][23][24] On the other hand, the levels that contribute to the most strongly bound PDOS peak have little weight on TiO 2 (110), and have σ character. Both their localized H 2 O character as well as their σ nature explain why these levels have large QP energy shifts to stronger binding. [22][23][24] Oxygen defective and hydroxylated (h−)TiO 2 surfaces have occupied 3d levels which are associated with reduced Ti 3+ atoms. 64 One such example is the 1 ⁄2ML dissociated H 2 O@O vac br on reduced TiO 2− 1 ⁄4 (110) with 1 ⁄2ML of O vac br shown in Figure 6(a). The spatial distribution of the 3d density for O defective surfaces has been characterized by low temperature scanning tunneling microscopy (STM). 63,67 STM measurements find at 77 K the 3d density is homogeneously distributed along the [001] direction, 63 while at ∼ 5 K the 3d density exhibits an asymmetric localized character. 67 A localized description of the Ti 3+ occupied 3d levels is not obtained from DFT with standard xc-functionals. For example, the occupied 3d levels obtained with PBE are highly delocalized, as clearly shown in Figure 6(b). This is due to self-interaction errors which are inherent in such xc-functionals. If one performs spin-polarized DFT calculations with a hybrid xc-functional on such systems, one obtains localized Ti 3+ 3d 1 levels between 0.7 and 1.6 eV below the CBM, along with a structural deformation of the TiO 2 (110) surface. 63,64 However, spin-paired calculations with HSE06 on the PBE relaxed geometry only yield an occupied shoulder at the CBM (Figure 6(c)). At the QP G 0 W 0 level based on PBE, this shoulder evolves into a distinct peak about 0.6 eV below the Fermi level, ε F . This effect is even more pronounced when the G 0 W 0 calculation is based on HSE06 (cf. Figure 6(d,e)), which yields peaks at 0.6 and 0.9 eV below ε F .
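The correlation invoked above between QP energy shifts and the spatial weight of the wavefunctions can be caricatured numerically. The linear interpolation below is purely a toy model of ours; only the two limiting shifts (∼ −0.1 eV for strongly surface-hybridized levels, ∼ −1.7 eV for localized H 2 O σ levels) are taken from the text.

```python
# Toy model (our assumption) of the trend described above: QP energy
# shifts interpolated between a surface-hybridized limit and a
# molecule-localized limit, according to the fraction of the
# wavefunction's weight on the molecule.

shift_surface_like = -0.1  # eV, levels strongly hybridized with TiO2(110)
shift_molecular = -1.7     # eV, localized H2O 1b2 sigma levels

def qp_shift(molecular_weight):
    """Linear interpolation between the two limits (schematic only)."""
    return (shift_surface_like
            + (shift_molecular - shift_surface_like) * molecular_weight)

for w in (0.1, 0.5, 0.9):  # fraction of |psi|^2 projected onto H2O
    print(f"weight on H2O = {w:.1f} -> QP shift ~ {qp_shift(w):+.2f} eV")
```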
Returning to the occupied Ti 3d levels: as compared to G 0 W 0 PBE, G 0 W 0 HSE06 shifts the unoccupied 3d levels further up in energy, revealing the double peak structure. These energies are in very good agreement with the peak at 0.8 eV below ε F in the UPS spectra of H 2 O@O vac br of Figure 2(d). This peak is not shown in Figure 2(d) as it is slightly above 2 eV with respect to the VBM. 18 However, note that G 0 W 0 overestimates by about 1 eV the VBM position relative to ε F as compared with UPS experiments. 18 This result is completely independent of the wavefunction's spatial distribution, i.e., localization, as the G 0 W 0 calculations are based on the KS wavefunctions. This is different from previous findings, which showed DFT with either PBE or hybrid xc-functionals only gives distinct peaks for the occupied 3d levels provided the relaxed spin-polarized distorted structure is used in the calculations. 63,64 While for G 0 W 0 based on PBE and HSE06 one sees noticeable differences in the description of the 3d occupied levels, the QP H 2 O PDOS and its alignment relative to the VBM are unchanged. Although localization of the Ti 3+ occupied levels and associated structural deformations are absent from our approach, such features should not significantly alter the QP H 2 O PDOS. This is because the Ti 3+ levels are too far above the VBM (∼ 2 eV 64 ) to hybridize with the H 2 O. Moreover, as we will show in Section 3.4, the QP H 2 O PDOS is rather robust to local deformations of the surface structure, e.g., due to changes in coverage. Coverage and Dissociation Dependence of H 2 O Spectra. Figures 7, 8, and 9 show the H 2 O structures considered on the stoichiometric surface and on the reduced surfaces with O vac br defects, respectively. The relative importance of these geometries is illustrated in Figure 10(a) and 10(b) by the average adsorption energy E ads per H 2 O molecule on the stoichiometric or reduced surfaces 68 with either PBE 48 or RPBE 57 xc-functionals. In so doing, the contribution of different structures to the measured spectra can be disentangled. Note that an intact 1 ⁄2ML of H 2 O@O vac br (Figure 8(b)) is probably only a transient locally stable state of the reduced H 2 O-TiO 2− 1 ⁄4 (110) interface, 29 which may easily evolve into the ∼ 0.7 eV more stable dissociated 1 ⁄2ML H 2 O@O vac br (Figure 8(c)). For this reason, we only consider dissociated H 2 O@O vac br structures in Figure 10d. By comparing to lower coverage H 2 O structures ( 1 ⁄2ML [30][31][32]69 to 1ML [30][31][32]69 in Figure 7 and 1 ⁄4ML 70 in Figure 9 to 1 ⁄2ML 71 in Figure 8), we can disentangle the effect of interaction between the H 2 O molecules on the spectra. Further, these structures allow us to probe the isolated molecule limit. As shown in Figure 10, at lower coverages the overall width of the spectra is reduced, with fewer distinct peaks. When the coverage is increased to include intermolecular interactions between adjacent species, the molecular levels hybridize into bonding and antibonding intermolecular levels. This produces additional peaks above and below those present at low coverage. As a result, the peak with intermolecular bonding 3a 1 character at −6.3 eV for 1ML of H 2 O@Ticus is absent for a 1 ⁄2ML coverage. This reinforces the assignment of the experimental spectra shown in Figure 1 to an intact 1ML H 2 O@Ticus geometry with interacting molecules. To see how the spectra for dissociation of H 2 O@Ticus compare to H 2 O@O vac br , we have considered the half-dissociated ( 1 ⁄2D) and fully dissociated (D) H 2 O structures shown in Figure 7.
As shown in Figure 10(c), the peak at −7.0 eV with O br H σ character for H 2 O@O vac br splits into two peaks for dissociated H 2 O@Ticus. The lower energy peak has both OcusH and O br H σ character, while the higher energy peak is mostly OcusH in character. Furthermore, we find a similar pair of peaks for 3 ⁄4ML mixtures of dissociated H 2 O@Ticus and H 2 O@O vac br shown in Figure 10d. This means one may recognize dissociated H 2 O@Ticus by both the presence of two peaks at about −7.0 and −6.3 eV, and the absence of the low-energy peak with 1b 2 character for intact H 2 O@Ticus. The absence of a peak at about −6.3 eV in the experimental spectra shown in Figure 2(c) reinforces its attribution to dissociated H 2 O@O vac br rather than dissociated H 2 O@Ticus. This is further supported by the calculated H 2 O adsorption energies (Figure 10(a) and 10(b)). These are generally weaker for dissociated H 2 O@Ticus, and stronger for H 2 O@O vac br , as in previous calculations. 29 To check whether changes in the adsorption geometry of H 2 O affect the spectra for the same coverage, we compare 1ML of H 2 O {I, 1 ⁄2D, D} adsorbed with either parallel (⇉) or antiparallel (⇄) interfacial hydrogen bonds 58 (black dashed lines in Figure 7). Overall, the two sets of spectra are consistent, and demonstrate the general robustness of the DOS to minor changes in the water adsorption geometry. However, as the H 2 O molecules are no longer equivalent when the interfacial hydrogen bonds are antiparallel, there is a greater splitting between bonding and antibonding contributions for the peaks with 1b 2 and 3a 1 molecular character. In particular, for intact H 2 O, the lowest energy peak with molecular 1b 2 character splits, with a separate peak at −9.6 eV, which is closer to the peaks at −9.8 25 (Figure 1(c)) and −10.0 eV 18 (Figure 1(d)). We next add a second layer of H 2 O to the 1ML H 2 O@Ticus structures (Figure 7), and consider the effect of additional H 2 O@Ticus added to the 1 ⁄4ML (Figure 9) and 1 ⁄2ML (Figure 8) H 2 O@O vac br structures. 69 In this way we can see how robust the observed features in the individual spectra for isolated species are to screening by H 2 O layers, 6,7 and probe the liquid water limit. 73 When a second layer of H 2 O is added to the low coverage intact 1 ⁄2ML H 2 O@Ticus structure, the levels with H 2 O 1b 2 character are unchanged, while the levels with 3a 1 and 1b 1 second layer character are more localized and weakly hybridized with the surface. These levels are seen as the two most intense peaks at −4.3 and −2.2 eV (Figure 10(c)). The former coincides with the peak at −4.2 eV observed experimentally at low temperatures (Figure 1(c)), suggesting multilayer H 2 O structures may be present under these experimental conditions. The intermolecular H bonding between the layers delocalizes the molecular levels of the first layer. This is seen from the peak at −6.1 eV with antibonding 3a 1 character on the first layer. We saw the same behavior when increasing the first layer's coverage from 1 ⁄2ML to 1 ML. This is further confirmation that the peak observed experimentally at −6.4 eV has intermolecular character. When a second 1 ⁄2 layer of H 2 O is added to the 1ML H 2 O@Ticus { 1 ⁄3D, 2 ⁄3D} structures, 72 a denser network of intermolecular and interfacial hydrogen bonds is formed, as shown in Figure 7. This causes a stronger hybridization between the OH and H 2 O σ levels. For the 1 ⁄3D structure, this results in the four distinct σ peaks shown in Figure 10(c).
On the one hand, the peaks at −9.1 and −6.2 eV have predominantly intact H 2 O and OcusH character, as was the case for 1ML of 1 ⁄2D H 2 O@Ticus. On the other hand, the peaks at −7.9 and −7.4 eV are most related to the second layer. In effect, the H 2 O σ level of the second-layer H 2 O, which is fully saturated with four hydrogen bonds, is upshifted by more than an eV. This is not the case for the 2 ⁄3D structure (Figure 7), where the peak at −9.1 eV instead has mostly intact second-layer H 2 O 1b 2 character. As was the case for intact 1 1 ⁄2ML H 2 O@Ticus, the addition of a second 1 ⁄2 layer of H 2 O induces a stronger hybridization of the O br H levels, and introduces an additional intense peak at −4.4 eV (Figure 10(c)). This again suggests the experimentally observed peak at −4.2 eV (Figure 1(c)) may be due to multilayer H 2 O. Overall, we find the addition of second-layer H 2 O affects the resulting spectrum qualitatively. We find both additional features and a redistribution of those due to the first H 2 O layer. When we instead add H 2 O@Ticus to the 1 ⁄4ML and 1 ⁄2ML H 2 O@O vac br structures (Figures 9 and 8) we find the resulting spectrum is the sum of the separate spectra to within 0.2 eV (Figure 10). For example, the 1 1 ⁄2ML 1 ⁄3D spectrum (Figure 10d) for 1ML of intact H 2 O added to 1 ⁄2ML H 2 O@O vac br (Figure 8) is basically the sum of the 1ML intact H 2 O@Ticus (Figure 1(a)) and 1 ⁄2ML H 2 O@O vac br (Figure 2(a)) PDOS spectra downshifted by 0.2 eV. This explains the ease with which the experimental single-layer H 2 O spectra may be analyzed for levels outside the surface DOS region. Alignment of the Highest H 2 O Occupied Levels. So far, we have concentrated our analysis on the lower energy peaks observed in the experimental spectra. This was done to demonstrate the robustness of the calculated QP DOS. Having established this, we now focus on the adsorbate levels near the VBM, which play an important role in photooxidation processes. In this respect, the highest H 2 O occupied levels' alignment for 1ML intact and dissociated H 2 O@Ticus, and 1 ⁄2ML dissociated H 2 O@O vac br is of utmost importance. The former structure corresponds to the reactant species on stoichiometric surfaces, 32 which undergoes photoirradiation. The latter structures act as hole traps and are thus the main oxidizing agents on TiO 2 (110). 74,75 We have shown that the experimental peak at −4.2 eV 25 is not, in fact, the highest energy peak of H 2 O@Ticus. We instead find the highest-energy PDOS peak, ε PDOS peak , for 1ML intact H 2 O@Ticus at −1.1 eV relative to the VBM (Figure 10(c)). This is 0.6 eV closer to the VBM than the ∼ −1.7 eV estimate 15 deduced from the onsets of the UPS difference spectra in Ref. 18. Moreover, as 1ML H 2 O@Ticus dissociates, ε PDOS peak moves up to −1.0 eV ( 1 ⁄2D) and −0.9 eV (D) (Figure 10(c)). This is again significantly higher than the ∼ −1.8 eV estimate 7 based on UPS difference spectra for the TiO 2 (100) surface from Ref. 76. As was the case for CH 3 OH on TiO 2 (110), 22 this raising of ε PDOS peak can be related to the charge transfer of −0.4e that accompanies deprotonation (arrows in Figure 7). We find for the 1ML intact structure on TiO 2 (110) ε PDOS peak is 0.2 eV closer to the VBM for H 2 O than for CH 3 OH, 22-24 while for the 1ML 1 ⁄2D structures ε PDOS peak is the same. However, the highest PDOS peak is both less intense and broader for H 2 O compared to CH 3 OH, due to the stronger hybridization with the surface.
This is why, as discussed in Section 3.1, the QP G 0 W 0 ε PDOS peak is only ∼ 0.1 eV lower compared to DFT [22][23][24] (Figure 4). After adding second-layer H 2 O, ε PDOS peak is unchanged, with weight mostly remaining on the first layer. We find for 1 ⁄2ML dissociated H 2 O@O vac br ε PDOS peak ≈ −1.1 eV relative to the VBM (Figure 10d), the same as for intact H 2 O@Ticus. This is much higher than the previous estimate of ∼ −3.7 eV 7 for O br H based on the UPS difference spectra in Ref. 25. Our corrected ε PDOS peak value agrees with the recently demonstrated photocatalytic importance of O br H sites as the main oxidizing species on TiO 2 (110). 74 Based on ε PDOS peak for 1ML intact H 2 O@Ticus, vertical excitations from the highest H 2 O occupied levels to the TiO 2 (110) conduction band require photon energies that exceed the electronic band gap for bulk rutile TiO 2 (3.3 ± 0.5 eV 77 ) by ≳ 1 eV. However, the hole generated by such supra-band gap excitations should be mostly located on TiO 2 (110) O 2pπ rather than H 2 O O 2p levels. This is because the H 2 O highest levels are hybridized with TiO 2 (110) and are predominantly TiO 2 (110) in character. The fact that the highest H 2 O levels are ∼ 1 eV below the VBM does not necessarily mean that they cannot be photooxidized by holes photogenerated within the TiO 2 (110) valence band. A recent DFT study with HSE06 found trapped holes at surface O sites, i.e., three-fold coordinated O 3fold , are shared with nearby HO-Ticus groups. 5 Moreover, it has been suggested that H 2 O can only be photooxidized, i.e., trap a hole, upon deprotonation. 78,79 In other words, hole transfer to the HO-Ticus site should be mediated by the deprotonation of intact H 2 O@Ticus to the nearest O br site. Altogether, this suggests that H 2 O@Ticus photooxidation should be initiated by band-to-band and supra-band photo-excitations, which result in the generation of holes within the TiO 2 (110) valence band. These TiO 2 (110) free holes may then be trapped at O 3fold sites, and partially transferred to nearby HO-Ticus upon H 2 O deprotonation. 3.6. Vacuum Level Alignment. So far, we have considered the level alignment of the interfacial levels relative to the VBM of the substrate. This allows a direct comparison of the occupied PDOS with the measured UPS spectra. However, to assess the photoelectrocatalytic activity of the interface, one needs the absolute level alignment relative to the vacuum level Evac. In Figure 11 we show the level alignment for gas phase H 2 O and 1ML intact H 2 O@Ticus relative to Evac from DFT, scQPGW1, and G 0 W 0 based on PBE and HSE xc-functionals. These are compared to the measured CBM for the liquid H 2 O-TiO 2 (110) interface, 40,86 and the measured and coupled-cluster (CCSD(T)) gas phase H 2 O ionization potential. 80 Our calculated IP values for H 2 O in gas phase are consistent with those reported previously in the literature. 80,[87][88][89] Although the relative energies of the 1b 1 , 3a 1 , and 1b 2 H 2 O levels are consistent over all five levels of theory, the levels are rigidly downshifted. We observe a clear ordering in increasing IP of PBE DFT (7.2 eV) < HSE DFT ≪ PBE scQPGW1 < PBE G 0 W 0 ≲ HSE G 0 W 0 ≲ PBE scQPGW (12.8 eV) < Hartree-Fock (HF, 13.9 eV 88 ). To understand the origin of this ordering, we have probed the dependence of the IP on the fraction of Hartree-Fock exact exchange included in the range-separated HSE xc-functional via the parameter α in Figure 12.
On the one hand, for DFT, we find a strong linear dependence of the IP on α, i.e., IP ≈ IP PBE + (IP α=1 − IP PBE )α ≈ 7.2 + 5.9α, with α ∼ 0.9 providing a quantitative agreement with experiment and CCSD(T) calculations. Overall, this linear dependence is not surprising, as α may be interpreted as the amount of electron-electron screening, i.e., the inverse dielectric constant ε −1 ∞ . 90,91 In other words, the fraction of exact exchange α included determines the amount of screening, ε −1 ∞ , incorporated within the xc-functional. The quantitative agreement of the IP for α ∼ 0.9 is because small molecules, e.g., H 2 O, are weakly screened in the gas phase (ε∞ ∼ 1). On the other hand, for G 0 W 0 , the calculated IP has a much weaker dependence on α, i.e., the starting xc-functional, with IP ≈ IP α=1 − ∆IP(10 α−1 − 1) ≈ 13.4 − 1.2 × 10 −α . Further, the G 0 W 0 and DFT IP coincide when α → 1. For G 0 W 0 based on PBE (α = 0), the IP already agrees semi-quantitatively with experiment, with full quantitative agreement obtained for G 0 W 0 based on HSE06 (α = 0.25). This is because the RPA ε∞ ∼ 1, independently of α. Essentially, the calculated G 0 W 0 IPs would also be obtained from DFT using an HSE xc-functional with 0.84 < α < 1.0, i.e., 1 < ε∞ < 1.2. Overall, this implies G 0 W 0 is a predictive method for the IP of small molecules. However, the scQPGW technique has the added advantage of being completely independent of the starting xc-functional, 23,88 while providing a nearly quantitative IP. For the H 2 O-TiO 2 (110) interface, e.g., 1ML intact H 2 O@Ticus, the highest energy H 2 O PDOS peak, ε PDOS peak , is pinned ∼ 1 eV below the VBM across PBE DFT, HSE DFT, PBE scQPGW1, PBE G 0 W 0 , and HSE G 0 W 0 . For this reason, the IP of the H 2 O interfacial levels is controlled by the alignment of the VBM with respect to the vacuum. This means we only need to consider the absolute VBM level alignment of the interface, i.e., the interface's IP = −ε VBM + Evac, as a descriptor of photoelectrocatalytic activity. In Figure 11 we see that the IP of the interface follows a different ordering across the methodologies from that of gas phase H 2 O. In particular, we find PBE G 0 W 0 (6.0 eV) ∼ PBE DFT < HSE06 G 0 W 0 ≈ PBE scQPGW1 < HSE06 DFT (7.3 eV). Figure 12 shows that, as was the case for H 2 O in gas phase, the IP of the H 2 O@Ticus interface across the various methods is ordered according to the method's description of the screening, ε −1 ∞ . As discussed above, for hybrid xc-functionals such as HSE, the effective screening is determined by the fraction of exact exchange α included. Essentially, α plays the role of the effective screening within the method, ε −1 ∞ . Although HSE06 incorporates less screening (ε∞ ≈ 4) than experiment for rutile TiO 2 (ε TiO2 ∞ ≈ 7.6), 85 the HSE06 IP for the interface is in agreement with the experimental estimate of IP ≈ 7.1 eV. 40,86 If one performs G 0 W 0 based on HSE06, a stronger screening is applied, i.e., ε∞ ≈ 5.7, yielding a lower IP for the interface. In fact, as indicated by the red arrow in Figure 12, a similar IP to HSE06 G 0 W 0 should be obtained from HSE DFT by setting the fraction of exact exchange to the inverse dielectric constant of bulk TiO 2 , i.e., α = 1/ε TiO2 ∞ . Adjusting α to the measured inverse dielectric constant has been previously found to give improved band gaps. 90 From PBE scQPGW1, one obtains an IP consistent with that of HSE06 G 0 W 0 .
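The two empirical fits quoted above, together with the α ∼ 1/ε∞ correspondence, can be tabulated directly. The code is our illustration; the coefficients and dielectric constants are taken from the text, but the choice of sample points is arbitrary.

```python
# Sketch of the empirical fits quoted in the text (coefficients from the
# text); the alpha <-> screening correspondence alpha ~ 1/eps_inf is also
# checked against the quoted dielectric constants. Illustration only.

def ip_dft(alpha):
    """DFT ionization potential of gas-phase H2O vs. exact exchange (eV)."""
    return 7.2 + 5.9 * alpha

def ip_g0w0(alpha):
    """G0W0 ionization potential vs. alpha of the starting functional (eV)."""
    return 13.4 - 1.2 * 10.0 ** (-alpha)

for a in (0.0, 0.25, 0.9, 1.0):
    print(f"alpha = {a:4.2f}: DFT {ip_dft(a):5.2f} eV, "
          f"G0W0 {ip_g0w0(a):5.2f} eV")

# Effective alpha implied by alpha ~ 1/eps_inf for the quoted screenings:
for label, eps in [("HSE06 effective", 4.0), ("rutile TiO2 (expt.)", 7.6),
                   ("PBE RPA TiO2", 8.3)]:
    print(f"{label}: eps_inf = {eps:.1f} -> alpha ~ {1.0 / eps:.2f}")
```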
The PBE scQPGW1 result follows because we find the screening in scQPGW decreases from PBE RPA with each self-consistent cycle. Essentially, the final screening incorporated in scQPGW1 is similar to that of HSE06 RPA. As shown in Figure 11, PBE G 0 W 0 gives an IP slightly lower than PBE DFT for the interface, while the PBE G 0 W 0 CBM is shifted up by about 2 eV. This is surprising, since PBE DFT already yields a CBM level alignment for the interface in excellent agreement with experiment. This is partially due to PBE RPA's overestimation of the screening of TiO 2 (ε∞ ∼ 8.3). Although HSE06 G 0 W 0 has a weaker screening than PBE G 0 W 0 , the resulting absolute alignment of the CBM is quite similar. If, instead, the self energy corrections are applied self-consistently via PBE scQPGW1, the absolute alignment of the CBM is significantly lower, but still greater than that of PBE DFT or HSE06 DFT. This is again related to decreases in the dielectric constant with each self-consistent cycle. For this reason, scQPGW1 tends to provide reasonable band gaps for TiO 2 (110) interfaces. Overall, we observe an ordering in increasing band gap of PBE DFT < HSE06 DFT ≲ PBE scQPGW1 < PBE G 0 W 0 ≈ HSE06 G 0 W 0 , with HSE06 DFT providing the best absolute alignment of the CBM and VBM for the H 2 O@Ticus interface. HSE06 DFT provides the most accurate description of the IP of the clean and H 2 O@Ticus covered stoichiometric TiO 2 (110) surfaces. Although the HSE06 DFT IP for H 2 O@O vac br is significantly lower than the one measured for h−TiO 2 (110), in both cases the IP is shifted to lower energies relative to the clean stoichiometric surface. Differences in the magnitude of the shifts are probably due to the differences in defect coverage between the experiment (6-9%) 83 and the calculation (50%). The similarity between HSE06 DFT and scQPGW based on either PBE or HSE06 for the clean TiO 2 (110) surface 23 points to a similar screening from these two techniques. This also demonstrates the starting point independence of the scQPGW technique. To summarize, although scQPGW provides accurate IPs, the band gap is greatly overestimated, as reported previously. 22,23,41,92 While scQPGW1 provides a more accurate band gap, it achieves only a qualitative description of the IP. HSE06 achieves a quantitative description of both the IP and band gap, but provides a poor description of the molecular level alignment relative to the VBM. 22,23,92 However, since the highest occupied H 2 O levels are significantly hybridized with the substrate, this is not a major drawback in this case. In general, for TiO 2 (110), a more effective strategy is to combine the calculated IP from HSE06 with the occupied interfacial levels' alignment from G 0 W 0 or scQPGW1. CONCLUSIONS The level alignment prior to photo-irradiation is an important piece of the puzzle needed to get a complete atomistic picture of photocatalytic processes. Here we have shown that the complex UPS spectra for the H 2 O-TiO 2 interface may be disentangled using the QP G 0 W 0 PDOS. We have firmly established the robustness of the QP G 0 W 0 H 2 O PDOS by: (1) demonstrating its xc-functional (PBE, LDA, vdW-DF, and HSE06) independence, (2) comparing to self-consistent QP GW techniques (scQPGW1), and (3) considering its dependence on surface coverage and dissociation.
Altogether, these calculations provide an accurate interpretation of the complex UPS and MIES experiments 18,25,26 for the H 2 O-TiO 2 (110) interface, and provide accurate estimates of the highest H 2 O occupied levels' alignment relative to the VBM. Our results provide two important pieces of the puzzle: (1) the molecular structure of the photocatalytic interface and (2) the molecular alignment of the doubly occupied levels near the VBM responsible for hole trapping prior to irradiation. To complete the picture, the molecular structure and level alignment in the presence of the photo-generated hole is also needed. Previous DFT studies using the hybrid HSE xc-functional have found a hole can be trapped at surface O 2pπ levels of O br and HO-Ticus sites. 5 However, the screening of such localized levels may not be well described by HSE, which tends to underbind localized interfacial levels. 23 This underbinding is corrected upon inclusion of many-body effects via QP G 0 W 0 . 23 Having demonstrated the capability of G 0 W 0 for the description of level alignment prior to irradiation, this work points the way forward via future QP G 0 W 0 studies of level alignment for trapped hole levels. Supporting Information Total energies and optimized geometries. This material is available free of charge via the Internet at http://pubs.acs.org.
11,652.2
2015-01-13T00:00:00.000
[ "Chemistry" ]
Intestinal Permeability and Circulating CD161+CCR6+CD8+ T Cells in Patients With Relapsing–Remitting Multiple Sclerosis Treated With Dimethylfumarate Background: Changes in the gut-brain axis have recently been recognized as important components in multiple sclerosis (MS) pathogenesis. Objectives: To evaluate the effects of DMF on intestinal barrier permeability and mucosal immune responses. Methods: We investigated intestinal permeability (IP) and circulating CD161+CCR6+CD8+ T cells in 25 patients with MS who met eligibility criteria for dimethylfumarate (DMF) treatment. These data, together with clinical/MRI parameters, were studied at three time points: baseline (before therapy), and after 1 month (T1) and 9 months (T2) of treatment. Results: At baseline 16 patients (64%) showed altered IP, while 14 cases (56%) showed active MRI. During DMF therapy we found the expected decrease of disease activity at MRI compared to T0 (6/25 at T1, p = 0.035 and 3/25 at T2, p < 0.001), and a reduction in the percentage of CD161+CCR6+CD8+ T cells (16/23 at T2; p < 0.001). The effects of DMF on gut barrier alterations were variable, without a clear longitudinal pattern, while we found significant relationships between IP changes and the drop of MRI activity (p = 0.04) and circulating CD161+CCR6+CD8+ T cells (p = 0.023). Conclusions: The gut barrier is frequently altered in MS, and the CD161+CCR6+CD8+ T-cell subset shows dynamics which correlate with disease course and therapy. INTRODUCTION Multiple sclerosis (MS) is a chronic disease of the central nervous system (CNS), with inflammation, demyelination, and neurodegeneration. The pathogenic process is immune-mediated, and the etiology is probably multifactorial, with interaction of heritable and non-heritable factors (1). Among the other factors, microbiota and gut function are increasingly recognized as relevant in this immune-mediated disorder (2). Several studies have recently shown that the microbiota, as a part of the intestine-brain axis, plays a role in the etiopathogenesis of MS (3,4). However, a crucial component of this axis, the intestinal barrier, has received much less attention. The question of whether or not intestinal permeability (IP) is affected during the disease course is at least as important as the changes in the microbiota balance (5,6). IP changes may underlie gastro-intestinal or even far-from-gut autoimmune disorders. In fact, increased gut permeability allows the passage of macromolecules, toxins, and bacterial species that may trigger immune-mediated diseases in different systems, even distant from the gastrointestinal tract, such as the CNS (7,8). On the other hand, CNS inflammation can increase gut permeability and alter mucosal structure in the small intestine (9). In a previous work, we investigated gut permeability in relapsing-remitting MS (RRMS) patients and healthy donors, finding that alteration of IP represents a relatively frequent event in patients with MS (10). This study and a previous one, showing that CD161 high CD8+ T cells, encompassing the mucosal associated invariant T (MAIT) cell subset, play a role in MS pathogenesis (11), prompted us to focus on the gut triggers that may lower the threshold for disease development in susceptible individuals. Dimethylfumarate (DMF) has both neuroprotective and anti-inflammatory effects, and it is currently used as an oral, first-line, disease-modifying therapy (DMT) in MS. Some of the mechanisms responsible for its efficacy have been clarified, while others remain unexplored.
Gastrointestinal tract irritation is one of the most frequent side effects of DMF (12). On the other hand, studies on experimental models of inflammatory bowel diseases showed that DMF might beneficially affect IP (13). In this study, we investigated IP changes, the circulating CD161+CD8+ T-cell subset, and clinical/neuroradiological data in a cohort of RRMS patients before and after 9 months of DMF therapy, with a longitudinal design aimed at analyzing data at three time points: baseline (before therapy) and after 1 (T1) and 9 (T2) months of treatment. Subjects and Procedures Twenty-five patients, candidates for DMF therapy according to the approved indications, were enrolled and completed the follow-up. The other inclusion criteria were as follows: age between 18 and 60 years; a treatment-naïve status or being free from "first-line" DMT for at least 3 months; EDSS up to 5.5. The exclusion criteria were the following: any serious internal medicine disease; any condition that may possibly interfere with the IP test, such as gastrointestinal disorders, renal dysfunction, and bladder dysfunction; pregnancy and breast-feeding. The study was conducted after approval of the local Ethics Committee, and a signed informed consent was obtained from each patient. Each participant underwent the following procedures at baseline (T0) and after 1 (T1) and 9 months (T2) of DMF therapy, and in case of relapse: clinical evaluation, including the recording of gastrointestinal side effects after DMF start; safety laboratory tests; urine sampling for the IP test; blood sampling for CD161+CD8+ T-subset analysis; magnetic resonance imaging (MRI) of brain and spinal cord with gadolinium (Gd) to monitor the disease activity. MRI Protocol All subjects underwent gadolinium (Gd)-enhanced MRI (brain and spinal cord). MRI was performed in all the patients with a 1.5-T magnet (Philips Gyroscan NT 1.5), with FLAIR, T2-weighted, and post-Gd T1-weighted sequences. The presence of at least one Gd-enhancing lesion or of at least one new/enlarging T2-hyperintense lesion was considered indicative of disease activity at MRI. Intestinal Permeability Analysis To evaluate IP, we used a solution composed of 5 g of lactulose and 2 g of mannitol in 50 ml of deionized water. All patients followed a lactulose-, mannitol-, and lactose-free diet for 72 h before the test, as reported in a form delivered to the patient at the time of enrollment. After ingestion of the solution, the patients collected their own urine for the following 6 h, during which they were encouraged to drink tap water. A pre-test urine sample was collected at the beginning and subtracted from the final total. We calculated the total volume, and we stored 10 aliquots of 5 ml and 5 aliquots of 10 ml at −20 °C until analysis. Lactulose and mannitol concentrations in urine samples were analyzed using a modified Liquid Chromatography Tandem Mass Spectrometry (LC-MS/MS) method (14). The HPLC analysis was performed using an Agilent Liquid Chromatography System series 1100 (Agilent Technologies, USA). Chromatographic separation was performed using a column (Luna Omega 3 µm SUGAR 100 Å LC column, 100 × 2.1 mm, Phenomenex, CA, USA) equipped with a security guard precolumn (Phenomenex, Torrance, CA, USA) containing the same packing material. The mobile phase consisted of a solution of HPLC-grade water (eluent A) and 100% HPLC-grade acetonitrile (eluent B); elution was performed at a flow rate of 300 µl/min. The oven temperature was set at 40 °C.
The injection volume was 10 µl, and the total analysis time was 13 min. The mass spectrometry method was performed on a 3200 triple quadrupole system (Applied Biosystems, Foster City, CA, USA) equipped with a Turbo Ion Spray source. The detector was set in the negative ion mode. The Q1 and Q3 quadrupoles were tuned for unit mass resolution. The transitions of the precursor ions to the product ions were monitored with a dwell time of 200 ms for each analyte. The instrument was set in the multiple reaction monitoring mode. Mass spectrometer parameters were optimized to maximize sensitivity for all analytes. Data were acquired and processed with Analyst 1.5.1 software. We then calculated the fractional excretion of lactulose as the ratio lactulose (mg) excreted / lactulose (mg) ingested, and used the same method to evaluate the fractional excretion of mannitol. Our results are reported as the ratio of the lactulose fractional excretion to the mannitol fractional excretion (L/M ratio). On this basis we quantified the IP status: a lactulose:mannitol ratio > 0.03 corresponded to altered permeability, provided it was associated with a urinary mannitol concentration <900 mg/L (15).
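As a minimal sketch of the permeability readout defined above, the following computes the L/M ratio and applies the stated thresholds (L/M > 0.03 with urinary mannitol below 900 mg/L). The doses are those given in the text; the urine concentrations are made-up example values, and this is an illustration rather than the study's analysis code.

```python
# Minimal sketch of the lactulose/mannitol readout described in the text.
# Doses (5 g lactulose, 2 g mannitol) are from the text; the urine
# concentrations below are hypothetical example values.

LACTULOSE_DOSE_MG = 5000.0
MANNITOL_DOSE_MG = 2000.0

def lm_ratio(lactulose_mg_per_l, mannitol_mg_per_l, urine_volume_l):
    """Ratio of fractional excretions:
    (lactulose out / dose) / (mannitol out / dose)."""
    fe_lactulose = lactulose_mg_per_l * urine_volume_l / LACTULOSE_DOSE_MG
    fe_mannitol = mannitol_mg_per_l * urine_volume_l / MANNITOL_DOSE_MG
    return fe_lactulose / fe_mannitol

def altered_permeability(lactulose_mg_per_l, mannitol_mg_per_l,
                         urine_volume_l):
    """L/M > 0.03 flags altered IP, valid only if urinary mannitol stays
    below 900 mg/L (both criteria from the text)."""
    ratio = lm_ratio(lactulose_mg_per_l, mannitol_mg_per_l, urine_volume_l)
    return ratio > 0.03 and mannitol_mg_per_l < 900.0

# Example with hypothetical values: 40 mg/L lactulose and 350 mg/L
# mannitol in 1.2 L of urine collected over 6 h.
print(lm_ratio(40.0, 350.0, 1.2))              # ~0.046
print(altered_permeability(40.0, 350.0, 1.2))  # True
```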
Flow Cytometry Peripheral blood samples were collected into sodium heparin Vacutainer tubes (BD Biosciences, San Jose, CA) at baseline and at month 9 after starting DMF treatment. Peripheral blood mononuclear cells (PBMCs) were isolated from whole blood by density gradient centrifugation using standard procedures (Ficoll-Paque Plus, GE Healthcare). Fresh PBMC (ex vivo) from MS patients were labeled with antibodies directed to cell surface proteins along with a dead-cell discrimination reagent for 20 min at room temperature (RT) in the dark. Samples were acquired on a flow cytometer able to measure up to 15 parameters simultaneously on each cell. For each sample, ∼300,000 lymphocytes were selected based on scatter parameters, and the analysis was conducted after the exclusion of dead cells and coincident events. The data were compensated and analyzed using FlowJo v10.6.1 (TreeStar, Ashland, OR). Statistical Analysis All variables were inspected for normal distribution. Between-group comparisons for continuous variables included the parametric Student's t-test and ordinary one-way ANOVA, as well as non-parametric Kruskal-Wallis and Dunn multiple comparison tests (GraphPad Prism, v6.2). Between-group comparisons for categorical variables were performed by Pearson's chi-squared test. Statistical significance was inferred for p-values below 0.05. Logistic regression models were fitted to describe the interplay between IP changes and the dynamics of CD161+CCR6+CD8+ T cells and MRI activity across the follow-up period. The same interplay was also explored by Kaplan-Meier and Cox proportional-hazard analysis based on time from MS onset. Multivariate and survival analyses were performed with the Stata software (version 16). RESULTS The demographic, clinical, and neuroradiological characteristics of the 25 patients at baseline are summarized in Table 1. Sixteen patients (64%) showed an altered IP, while 14 cases (56%) showed active MRI (4 of them were also in clinical relapse). Moreover, we investigated the frequencies of CD161+CCR6+CD8+ T lymphocytes in PBMCs obtained from 23 MS patients. During DMF therapy, two significant changes emerged. First, we confirmed the decrease of disease activity as evaluated by MRI (6/25 at T1, p = 0.035 for the comparison between T1 and T0; 3/25 at T2, p < 0.001 for the comparison between T2 and T0); consistent with this result, the Kaplan-Meier analysis showed that the proportion of patients with a normal MRI signal was higher at T1 and T2 compared to T0 (Figure 1). Second, we showed that the frequency of circulating CD161+CCR6+CD8+ T cells in MS patients was reduced after 9 months of DMF treatment. In Figure 2A, representative plots depict the progressive drop of frequencies of subpopulations of CD8 T cells (CD161hi CCR6+) in an MS patient during DMF treatment, while Figure 2B shows the cumulative data with the significant drop of the T-cell subset at T2 (p < 0.001). We found that the decrease in disease activity evaluated radiologically was 12 times higher in subjects showing reduced frequencies of CD161+CCR6+CD8+ T cells in the peripheral blood, and 15 times higher in cases with IP changes at T1 (Table 2; other covariates included EDSS and gastro-intestinal symptoms at baseline). Furthermore, a logistic regression model showed a relationship between the drop of CD161+CD8+ T cells in the peripheral blood and IP changes at T2, considering as covariates both EDSS and MRI activity (p = 0.023). Consistent with this result, the Cox analysis showed that the decline of the T-cell subset was more evident in patients with persistent IP changes (Hazard Ratio, HR = 4.19; p = 0.03; Figure 3). Concerning the possible effects of DMF treatment on gut barrier alterations, no significant difference emerged: some cases improved, while others worsened during follow-up, without a clear longitudinal pattern (Supplementary Table); the proportion of patients having IP changes at T1 and T2 was 16/25 (64%) and 15/25 (60%), respectively (figures quite comparable to the baseline data). A minority of patients (3/25) had mild gastro-intestinal side effects during treatment with DMF; other mild side effects were within the known safety profile of the drug (not shown). One patient had a relapse during the follow-up, and no significant change occurred in patients' EDSS at T2 compared to baseline.
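The kind of logistic model described above (an MRI-activity drop regressed on the T-cell drop and IP changes, with EDSS as a covariate) can be sketched on synthetic data. The cohort below is simulated with arbitrary coefficients, so the output does not reproduce the study's odds ratios; statsmodels is our choice here, whereas the authors used Stata.

```python
# Illustrative re-creation on synthetic data (not the study's records) of
# the logistic model described in the text.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60  # synthetic cohort, larger than the study's 25 to avoid separation

tcell_drop = rng.integers(0, 2, n)  # 1 = reduced CD161+CCR6+CD8+ T cells
ip_change = rng.integers(0, 2, n)   # 1 = altered intestinal permeability
edss = rng.uniform(0.0, 5.5, n)     # baseline disability score

# Outcome loosely tied to both binary predictors plus noise (simulation).
latent = -1.0 + 1.5 * tcell_drop + 1.2 * ip_change + rng.normal(0, 1.5, n)
mri_drop = (latent > 0).astype(int)

X = sm.add_constant(np.column_stack([tcell_drop, ip_change, edss]))
fit = sm.Logit(mri_drop, X).fit(disp=False)
print("odds ratios (T-cell drop, IP change):", np.exp(fit.params[1:3]))
print("p-values:", fit.pvalues[1:3])
```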
DISCUSSION This work, together with our pilot studies on IP and mucosal immunity in RRMS (10,11), provides evidence that the gut barrier is frequently altered in these patients and that the CD161+CD8+ T-cell subset shows dynamics compatible with disease course and therapy. No other studies on IP changes in MS have been reported so far, since other works have mainly focused on microbiota changes (3). However, we regard IP alterations and dysbiosis as two faces of the same coin (16), and further studies correlating IP and gut microbiota changes in MS will certainly be informative on disease etiopathogenesis. The CD161+CD8+ T-cell subset encompasses the MAIT cells, which were the object of several investigations after our first study on MS in 2011 (11), all largely confirming the involvement of MAIT cells in MS pathogenesis. Among the findings repeatedly reported were an IL18-driven activation and consequent CNS infiltration of MAIT cells in the diseased brain, and an increased type-17 differentiation and oligoclonality of circulating MAIT cells in MS patients compared to controls (17)(18)(19)(20)(21). The IL18-driven activation, and the consequent CNS infiltration of CD8+ MAIT cells in MS, may cause a reduced frequency in blood, helping to reconcile, at least in part, the conflicting results on the frequency of circulating MAIT cells in MS. A recent work indeed showed that MAIT cell subtype, smoking habit, and disease onset (primary progressive vs. relapsing-remitting) affect the number of circulating MAIT cells (22). Smokers with primary progressive MS showed a low frequency of circulating MAIT cells, suggesting a tendency to reside in the inflamed organ, in apparent contrast to what was observed in most studies on patients with RRMS. Concerning the effects of DMF on the variables under study, we found the expected decline of disease activity, which was in keeping with the initial pivotal trials (23)(24)(25). The parallel drop in the fraction of circulating CD161+CD8+ T cells is in accord with two previous works on the effects of DMF therapy in MS patients (22,26). The action of DMF on all the proinflammatory T-cell subsets, including the CD161+ IL17-producing T cells, is mediated by a dose-dependent induction of apoptosis and a decrease of proliferation (27). Other works, showing a decrease of proinflammatory MAIT cells after hematopoietic stem cell transplant or alemtuzumab for treatment-refractory forms (28,29), support the results obtained after DMF treatment, and indirectly confirm the pathogenic role of MAIT cells in MS. No clear DMF effects were evident on IP changes, and the gastrointestinal side effects in our group of patients were relatively rare and apparently unrelated to IP changes. The meaning of this finding requires further studies (such as those based on novel multi-sugar assays for site-specific gastrointestinal permeability analysis) and suggests that the alterations of the gut barrier in MS are complex: the decreased disease activity at MRI and the reduction of the percentage of circulating MAIT cells during treatment with DMF seem to occur more frequently in patients with IP changes. These relationships raise the possibility that the gut barrier alteration may represent a predictor of pathophysiological transitions, besides its possible role in disease pathogenesis. Our study adds evidence to the potential role of mucosal immunity in MS pathogenesis, yet leaves some questions unanswered. Among those are whether IP changes somehow drive the demyelinating process [as seen in experimental models of MS; (9)] or simply contribute to the organ-specific immune dysfunction. Also, it is unclear through which mechanisms MAIT cells [or subsets of them; (22)] become activated and pathogenic at the CNS level in apparently sterile conditions. Answering these questions may provide new fruitful lines of attack against neuroinflammation, such as IP enhancers or stabilizers, already under scrutiny in gastro-intestinal conditions, as well as compounds emerging from the growing body of data coming from microbiota studies in experimental and human autoimmune diseases. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by Azienda Ospedaliero-Universitaria Sant'Andrea, Università Sapienza. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS MSa, GR, and LB conceived the study. MB, GB, and MSi coordinated the project.
FG, MC, and LL performed data analysis. CF carried out statistical analysis. All authors contributed to the project and the writing of the manuscript, and approved its final version.

FUNDING

This work was partially supported by Biogen and by the intramural research funds of the Center for Experimental Neurological Therapies (CENTERS), Rome, Italy. CENTERS was supported by a special project of the Fondazione Italiana Sclerosi Multipla-FISM, Ospedale S. Andrea site, NESMOS. Schering provided an unrestricted research grant to partially cover the administrative costs of the study. No funding source had a role in the trial design, data collection, analysis and interpretation, or manuscript preparation.
THE ROLE OF EPITHELIAL AND VASCULAR-ENDOTHELIAL CADHERIN IN THE DIFFERENTIATION AND MAINTENANCE OF TISSUE INTEGRITY

Cell adhesion molecules are substances of a protein character that are necessary for normal embryogenesis, morphogenesis, tissue formation and reparation, but they are also involved in many pathophysiological processes such as inflammation, angiogenesis, thrombosis, tumour invasion and metastasis (19). At present, there are four main classes of cytoadhesion molecules: the immunoglobulin group of adhesion molecules, cadherins, selectins, and integrins (22). In the present review the authors concentrate on cell adhesion molecules from the cadherin family. The cadherins are a family of transmembrane glycoproteins that mediate adhesion through a Ca2+-dependent mechanism. Cadherins are usually localized at intercellular junctions called adherens junctions (41). The cadherins form a superfamily with at least six subfamilies, which can be distinguished on the basis of protein domain composition, genomic structure, and phylogenetic analysis of the protein sequences. These subfamilies comprise classical or type-I cadherins (E-, N-, R-, P-cadherin), atypical or type-II cadherins (Cadherin-6, -7, -8, -10), desmosomal cadherins (desmocollins, desmogleins), protocadherins (Protocadherin-1, -2, OL-protocadherin, CNR protocadherins), and Flamingo cadherins (25).

Structural properties of the cadherin superfamily

In this review the present authors have focused on the structure of classical cadherins. The cadherins consist of an extracellular adhesive domain, a transmembrane segment, and a cytoplasmic domain (5). The extracellular domain is responsible for cell-to-cell adhesion between cadherins. It is composed of four or five repeated Ca2+-binding subdomains of about 110 amino acids (39). The structure of this domain is determined by the binding of calcium ions, and the presence of Ca2+ ions is also necessary for adhesion with cadherin molecules on neighbouring cells. Other classes of adhesion molecules, e.g., members of the Ig superfamily, do not depend on Ca2+ for their adhesive function (29). The adhesive interface is located at the most amino-terminal repeat. Structural data suggest that the extracellular domains form lateral dimers on the cell surface which interact with dimers from the opposing cells to create a zipper-like structure (34). Highly conserved cytoplasmic domains interact with cytosolic proteins from the "armadillo family" called catenins. It has been demonstrated that there are three main catenins in cadherin-catenin complexes that provide the linkage of cadherin to the actin cytoskeleton (18). β-Catenin and plakoglobin (also called γ-catenin) interact with α-catenin, which provides the connection to the actin cytoskeleton. Other catenins can also be associated with the cytoplasmic domain of cadherins, including different isoforms of the so-called p120ctn.
It has been found that this catenin interacts with a membrane-proximal region of cadherin that has been shown to be responsible for lateral clustering. This suggests that p120ctn can regulate the strength of cadherin-mediated adhesion (43). In general, the interaction of the cytoplasmic domain with the actin cytoskeleton via catenins significantly increases the strength of the intercellular junctions (45).

Function of the cadherins

There are two main functional features of cadherins. They are responsible for cell-to-cell adhesion and they have a very important cell signalling function. Cadherins mediate homotypic interactions by binding to their homologues on an adjacent cell. However, it has been found that some members of this family can provide heterotypic interactions, although this is not the predominant adhesion mechanism (27). There are several ways in which cadherins could be involved in cell-signalling events. First, through their homotypic binding they can bring the opposing membranes of neighbouring cells into close proximity, enabling interactions of ligands and receptors on these opposing cells and stimulating juxtacrine signalling. Second, because cadherins are able to control the polarity of cells, they can affect signalling via their influence on the distribution of membrane proteins, including transmembrane receptors. Finally, cadherins may behave as ligands or receptors and hence have direct cell signalling activity (18). All these signalling functions of cadherins are associated with the activation of regulatory cascades comprising the action of tyrosine kinases and tyrosine phosphatases. The cytoplasmic domains of cadherin, β-catenin and plakoglobin are common targets of these regulatory proteins (38). The adhesive and signalling properties of cadherins cannot be separated. Any changes in the expression or function of cadherins might lead to the initiation or progression of pathological processes. Among the many members of the cadherin superfamily, the present authors have focused on two extensively studied ones, E-cadherin and VE-cadherin.

Epithelial cadherin (E-cadherin, LCAM, uvomorulin)

E-cadherin is an epithelium-specific cadherin and a member of the classical cadherin subfamily. Normal expression and function of E-cadherin is required for proper embryogenesis and morphogenesis of various tissues. Variations in E-cadherin expression have been noted during specific events in embryonic morphogenesis (30). Mutation of the E-cadherin gene leads to early embryonic lethality, which is preceded by a loss of cell-to-cell adhesion at the morula stage (5).

[Figure: Representation of the cadherin-catenin complex depicting the cadherin ectodomain, transmembrane region (TM), carboxy-terminal cytodomain (CYTO), and the catenins that link cadherin to the cytoskeleton. The ectodomain of classical cadherins consists of five repeated domains (C1-C5) with the adhesive interface located at the C1 domain. The carboxy-terminal cytodomain has binding sites for β-catenin and γ-catenin (plakoglobin) and a membrane-proximal binding site for p120. β-Catenin and γ-catenin are associated with α-catenin, which provides the linkage to the actin cytoskeleton.]

E-cadherin is also necessary during normal neural development (35). It has been demonstrated to occur locally and persistently in the murine central and peripheral nervous system during neural development (36). Expression of E-cadherin (together with P- and N-cadherin) has been observed in murine primordial germ cells.
E-cadherin is concentrated at the sites of cell-to-cell contact of primordial germ cells (PGCs), suggesting an active role in PGC-PGC interaction and recognition (6). In the past few years, alterations in the expression and function of E-cadherin have been correlated with cancer development (5). Loss or reduction of E-cadherin expression is associated with enhanced aggressiveness and dedifferentiation of many carcinomas, as reviewed by Beavon (4). Asgeirsson et al. have shown that decreased E-cadherin expression is frequent in breast cancer and that a loss of E-cadherin expression is associated with a loss of heterozygosity in infiltrating lobular breast carcinomas but not in infiltrating ductal carcinomas. Furthermore, the loss of expression of E-cadherin is an important prognostic marker, especially for disease recurrence in node-negative breast cancer patients, and may even be more informative than tumour size or oestrogen receptor expression (3). Ghadimi et al. have shown that reduced expression of E-cadherin and even α-catenin is observed in primary colorectal carcinoma. Moreover, they were able to demonstrate a significant correlation between the histopathological grading of the tumours and an increased loss of E-cadherin and α-catenin expression. Defective expression was significantly more frequent in less differentiated carcinomas (G3-4) with a pronounced loss of epithelial morphology than in better differentiated tumours (G1/G2) (17). Garcia del Muro et al. have suggested that losses of E-cadherin and β-catenin, as confirmed by immunohistochemistry, are important prognostic markers in patients with bladder carcinoma. This loss of E-cadherin expression was associated with high grade and invasive stage of bladder carcinoma. Further, a loss of E-cadherin expression was a significant prognostic indicator of decreased survival, independent of known prognostic factors such as grade, stage, or p53 status (23). There are many other studies which have described a relation between decreased E-cadherin and/or catenin expression and dedifferentiation, infiltrative tumour growth, distant metastasis, and poor survival for patients with gastric carcinoma (37), pancreatic carcinoma (40,16), and prostate cancer (12). Although E-cadherin has been studied mainly in relation to cancerogenesis, Bobryshev et al. have examined the expression of E-cadherin in human atherosclerotic lesions. They described that E-cadherin is expressed by intimal cells of macrophage origin transforming into foam cells, whereas there was no expression of E-cadherin in non-atherosclerotic intima. They suggested that E-cadherin might be important for foam cell aggregation. If E-cadherin is involved in foam cell aggregation, it might also be involved in the development of the lipid core, which is an important step in the progression of atherosclerosis (9).

Vascular endothelial cadherin (VE-cadherin, cadherin-5 or 7B4)

VE-cadherin is an endothelium-specific cadherin and is located strictly at intercellular junctions (zonula adherens) of essentially all types of vessels both in vitro and in vivo (14,15). VE-cadherin was first identified by Lampugnani et al. by adopting an indirect approach of developing mouse mAbs to human endothelial cells (20).
In terms of structure, VE-cadherin is composed of an extracellular domain, a transmembrane segment, and a cytoplasmic domain that forms complexes with catenins and mediates the association of VE-cadherin with the actin cytoskeleton. However, compared with classical cadherins, the VE-cadherin amino acid sequence shows considerable differences (only 23% identity when compared with classical cadherins such as E-, N-, and placental (P)-cadherins) (11,24). VE-cadherin is required for normal vasculogenesis and angiogenesis and for the maintenance of vascular integrity and permeability in adults (13). VE-cadherin is expressed in the embryo at very early stages of vascular development in mesodermal cells of the yolk sac mesenchyme. At later embryonic stages, VE-cadherin expression is restricted to the peripheral layer of blood islands, which gives rise to endothelial cells (10). The role of VE-cadherin in permeability control is consistent with the observation that the VE-cadherin-catenin complex is the target of the action of permeability-increasing agents. Rabiet et al. have shown that thrombin, which is known to induce profound alterations of endothelial cell monolayer permeability in vitro and in vivo, caused endothelial cell retraction accompanied by a redistribution of VE-cadherin and catenins from adherens junctions. This disassembly of adherens junctions was accompanied by an increase in vascular permeability (28). The proinflammatory cytokines tumour necrosis factor-α (TNF-α) and interferon-γ (IFN-γ) act synergistically in vitro and in vivo to activate the endothelium, resulting in cellular responses such as altered morphology, loss of barrier function, and adhesion molecule upregulation and/or redistribution (31,32). Wong et al. have described that TNF-α and IFN-γ, in combination, affect the barrier function of the vascular endothelial lining by direct stimulation of the endothelium, resulting in the disruption of VE-cadherin-mediated cell-to-cell adhesion, which is followed by an increase in the permeability of mesenteric venules (44). Andriopoulou et al. have studied the effect of histamine, another mediator of the inflammatory reaction, on adherens junction organization in cultured endothelial cells. They have reported that histamine induces tyrosine phosphorylation of VE-cadherin and catenins, which results in an increase in endothelial permeability. The effect of histamine was specific for VE-cadherin; there was no phosphorylation of N-cadherin, another major endothelial cadherin (2). In addition to the agonists described above, endothelial permeability is affected by a group of inflammatory cells, namely polymorphonuclear leukocytes (PMNs) (33). Several groups have shown that activated PMNs dramatically alter the molecular composition and organization of VE-cadherin-catenin complexes in endothelial cells. This disruption of VE-cadherin-catenin complexes leads to disassembly of adherens junctions, which is followed by an increase in endothelial permeability (42,1). These inflammatory agents and cells are not the only factors that might exert effects on VE-cadherin-catenin complexes. The structure and physiology of endothelial cells are also influenced by the shear stresses of blood flow.
The most obvious structural responses of endothelium to shear stress are changes in cell shape and orientation; in areas of low or inconsistent shear stress, endothelial cells in vivo or in vitro assume a cuboidal, cobblestone shape, whereas they elongate and align in the direction of flow when shear stress is moderate or high (21). Noria et al. have examined transient and steady-state effects of shear stress on the cadherin-catenin complex at endothelial adherens junctions. They have reported that the initiation of shear stress on endothelium causes partial disassembly of adherens junctions followed by a reassembly that reflects shear-induced reorganization of the actin distribution. After adaptation to shear stress, adherens junction proteins were localized in adhesion plaques (adherens plaques) that were distinct from the linear, belt-like distribution that predominates in static cultures. Thus, adherens junctions in endothelium exposed to physiological levels of shear stress are structurally distinct from such junctions in static endothelial cell cultures or in other epithelial monolayers (26). As mentioned above, the normal expression and function of VE-cadherin is necessary for the maintenance of normal endothelial permeability; it is also important in vasculogenesis in the embryo as well as in adults. Bobryshev et al. have examined the expression of VE-cadherin in atherosclerotic lesions. They have demonstrated that VE-cadherin is expressed in early sprouts of neocapillaries, which suggests that VE-cadherin is involved in the ingrowth of medial capillaries into the intima (8). This neovascularization is important for local immune-inflammatory reactions in atherosclerotic plaques (7).

Conclusion

In this review the authors have described the structure and function of cell adhesion molecules from the cadherin family. We have focused on two extensively studied members, E-cadherin and VE-cadherin. Both are crucial for proper embryogenesis and morphogenesis, but they are also involved in many pathological states. Changes in the expression and function of E-cadherin seem to be important for the development of carcinomas in various tissues. VE-cadherin is a specific cadherin expressed by endothelial cells that plays an important role in vascular homeostasis. It is required for vasculogenesis and the maintenance of vascular permeability, and its expression and function are disturbed during inflammation. In our prospective study we would like to describe the behaviour of these two cadherins during the development and progression of atherosclerosis (in a rabbit model), because changes in vascular permeability, neovasculogenesis and formation of the lipid core are crucial for the formation of atherosclerotic plaques.
Utilization of HIV-1 envelope V3 to identify X4- and R5-specific Tat and LTR sequence signatures

Background: HIV-1 entry is a receptor-mediated process directed by the interaction of the viral envelope with the host cell CD4 molecule and one of two co-receptors, CCR5 or CXCR4. The amino acid sequence of the third variable (V3) loop of the HIV-1 envelope is highly predictive of co-receptor utilization preference during entry, and machine learning predictive algorithms have been developed to characterize sequences as CCR5-utilizing (R5) or CXCR4-utilizing (X4). It was hypothesized that while the V3 loop is predominantly responsible for determining co-receptor binding, additional components of the HIV-1 genome may contribute to overall viral tropism and display sequence signatures associated with co-receptor utilization. Results: The accessory protein Tat and the HIV-1 long terminal repeat (LTR) were analyzed with respect to genetic diversity and compared by Jensen-Shannon divergence, which correlated with both the mean genetic diversity and the absolute difference in genetic diversity between R5- and X4-specific genomes. As expected, the V3 domain of the gp120 protein was enriched in statistically divergent positions. Statistically divergent positions were also identified in Tat amino acid sequences within the transactivation and TAR-binding domains, and in nucleotide positions throughout the LTR. We further analyzed LTR sequences for putative transcription factor binding sites using the JASPAR transcription factor binding profile database and found several putative differences in transcription factor binding sites between R5 and X4 HIV-1 genomes, specifically identifying C/EBP sites I and II and Sp site III as differing in sequence configuration between R5 and X4 LTRs. Conclusion: These observations support the hypothesis that co-receptor utilization coincides with specific genetic signatures in HIV-1 Tat and the LTR, likely due to differing transcriptional regulatory mechanisms and selective pressures applied within specific cellular targets during the course of productive HIV-1 infection.

The V3 loop of the envelope glycoprotein gp120 is a short domain (approximately 35 residues). The V3 domain has been extensively studied as it has been identified as the principal neutralizing domain (PND) on the viral envelope gp120 [2][3][4] and the viral partner in selective interactions with the different co-receptors [5]. The V3 region is a major determinant in predicting HIV-1 entry phenotype, distinguishing non-syncytium-inducing (NSI) from syncytium-inducing (SI) strains and macrophage-tropic from non-macrophage-tropic strains [5][6][7][8]. In this regard, it was discovered that a T cell-tropic (T-tropic) isolate, or SI virus, preferentially uses the co-receptor CXCR4, and has consequently been referred to as an X4 virus. In contrast, a primary macrophage-tropic (M-tropic) isolate, or NSI virus, was found to preferentially use the co-receptor CCR5 for entry and was therefore referred to as an R5 virus [9,10]. Previously, viral tropism based on co-receptor usage was used interchangeably with tropism defined by cellular target; however, several studies have shown that while co-receptor usage can at times be linked with cellular tropism, it is important to discuss viral infection in terms of utilization of a co-receptor in conjunction with the phenotype of the target cell [11,12].
As an example, recent investigations have shown that HIV-1 infectious molecular clones containing transmitted/founder (T/F) genome sequences preferentially utilized CCR5 as a co-receptor during entry and were able to replicate efficiently in primary CD4+ T cells [13,14]. These molecular clones also exhibited reduced replication efficiency in monocyte-derived macrophages (MDMs), in contrast to the prototypic M-tropic strains of HIV-1 [13]. For high-throughput applications, co-receptor utilization predictions can be performed on Env-V3 sequences computationally [15,16]. To this end, the internet-based bioinformatic method of position-specific scoring matrices (Web-PSSM) utilizes sequences of known entry phenotype to determine whether an Env-V3 sequence is CCR5- or CXCR4-utilizing (R5 or X4 Env-V3 sequences). This algorithm indicates the propensity of the virus to utilize CXCR4 at both high sensitivity (84 %) and specificity (96 %), with X4 virus sequences exhibiting high scores and R5 sequences exhibiting low scores, while intermediate Web-PSSM scores indicate both R5 and X4 virus sequences as well as dual-tropic X4/R5 virus sequences [17]. Typically, X4 viruses emerge gradually in a subset of patients due to the accumulation of amino acid changes within the V3 loop, particularly at positions 11 and 25. While HIV-1 co-receptor usage has been demonstrated to be directly associated with the genotype of the V3 loop, as described above, other amino acid residues within gp120 V1, V2, C4, and a number of regions of gp41 have also been associated with co-receptor usage [18][19][20][21][22][23][24][25][26][27]. As conformational changes within the V3 loop occur upon the binding of gp120 and CD4, it is possible that the co-receptor usage-associated amino acid residues within other regions of the envelope participate in the structural rearrangement of gp120 [28]. The overall goal of this study was to identify and characterize genetic differences between CCR5- and CXCR4-utilizing HIV-1 sequences beyond the V3 loop of envelope as defined by genotypic prediction. Specifically, HIV-1 gp120 and Tat amino acid sequences and the HIV-1 long terminal repeat (LTR) nucleotide sequences were selected for this analysis. The HIV-1 Tat protein derives its name from the fact that its primary function during viral replication is to serve as the trans-activator of transcription. While not directly involved in HIV-1 entry, Tat has a multitude of intracellular host binding partners and functions. The HIV-1 LTR contains an abundance of transcription factor binding sites upstream of the transcription start site that alter levels of HIV-1 transcription, often in a cell type-dependent manner. Importantly, these binding sites can work independently, in concert, or antagonistically, with a single nucleotide polymorphism capable of producing dramatic changes in binding, including the complete abrogation of transcription factor binding [29,30]. We hypothesized that co-evolved R5- or X4-associated genetic signatures emerge in viral genes and proteins that are not directly associated with entry, and suggest that these differences are reflective of evolutionary constraints applied by different cellular milieus that associate, co-evolve, or co-adapt with co-receptor usage and may collectively guide tissue- and/or cell type-specific replication patterns, as well as organ-specific disease pathogenesis.
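The classification logic just described, with the PSSM scoring cutoffs quoted later in the Methods, can be summarized in a short sketch (the function name and the upstream PSSM scorer are illustrative, not part of Web-PSSM's actual interface):

```python
# Hedged sketch of Web-PSSM-style co-receptor classification; the numeric
# cutoffs (>-2.88 for X4, <-6.96 for R5) are taken from the Methods section.
def classify_v3(pssm_score: float, v3_length: int) -> str:
    """Classify an Env-V3 sequence as X4, R5, indeterminate, or excluded."""
    if v3_length != 35:
        return "excluded"        # V3 must be exactly 35 residues long
    if pssm_score > -2.88:
        return "X4"              # confidently CXCR4-utilizing
    if pssm_score < -6.96:
        return "R5"              # confidently CCR5-utilizing
    return "indeterminate"       # mixed R5/X4 properties or dual-tropic
```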
The validity of this hypothesis is supported by the association of X4 virus with depleted CD4+ T-cell levels and enhanced disease progression, as well as the tendency of R5 virus to infect cells of the monocyte-macrophage lineage and to be found at the time of transmission [31]. However, the full extent of the underlying changes in the viral genome that could produce such a shift remains unknown. To address this, we have taken a genetic approach with functional underpinnings centered on dividing HIV-1 co-linear sequences (gp120, Tat, and LTR) into two co-receptor utilization groups using genotypic prediction methods. Subsequently, we used these two groups of sequences to explore the differences in the remainder of gp120 sequences outside of Env-V3 as well as in co-linear Tat and LTR sequences (Table 1). Sequence alignments of each co-receptor usage-defined population of co-linear gp120, Tat, or LTR sequences were quantitatively evaluated at each amino acid (gp120 and Tat) or nucleotide (LTR) position utilizing first-order diversity and Jensen-Shannon divergence. Together, diversity and divergence provide metrics to characterize the position-specific variation of amino acids or nucleotides, as well as a quantitative method to compare this information between R5- and X4-defined sequences. This strategy has provided a straightforward genetic approach to define specific sequences in Tat and the LTR, or potentially any other HIV-1 sequence, that are co-selected with CXCR4- and CCR5-dependent entry.

Results and discussion

Given that the goal of this research was to determine whether genetic signatures co-evolved between different regions of the HIV genome, it was first necessary to find patients that had sequences from the genomic areas of interest. From the Los Alamos National Laboratory (LANL) database, subtype B sequences derived from patient samples were downloaded and aligned to the HXB2 genome. In total, more than 2500 samples were isolated, each of which included a full V3 of 35 amino acids and at least one other co-linear sequence in the Tat or LTR regions. This also included 1730 full gp120 sequences. Table 1 shows the breakdown of sequences for each region, and the selection criteria are further described in the Methods. For all results discussed below, diversity and divergence were computed on amino acid sequences for the gp120 and Tat proteins and on nucleotide sequences for the LTR.

Genetic diversity is correlated in R5- and X4-classified HIV-1 gp120, Tat, and LTR sequence populations

Spearman's rank correlation was performed to assess the correlation between R5 and X4 diversity for gp120 (ρ = 0.8678, P = 2.00 × 10^-156), Tat (ρ = 0.8873, P = 4.67 × 10^-35), and LTR (ρ = 0.7021, P = 4.06 × 10^-78) (Fig. 1). In all cases, R5 and X4 diversity were well correlated, with the P values rejecting the null hypothesis that X4 and R5 diversity are unrelated. Because first-order diversity was utilized in this analysis rather than richness (order = 0), and as further supported by rarefaction analysis of the sample sizes, it is unlikely that differences in diversity are a reflection of the differences in sample size between the R5 and X4 sequence groups. This result indicates that, in general, corresponding amino acid (gp120 and Tat) or nucleotide (LTR) positions are similarly constrained in their usage with respect to R5 and X4 sequences.
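The correlation test itself is a one-liner; the following is a minimal sketch, assuming per-position diversity vectors have already been computed for the two alignments (the placeholder values below are illustrative, not the study's data):

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder per-position diversity values for the R5 and X4 alignments;
# in the study these come from Eq. 1 applied to each alignment column.
r5_diversity = np.array([1.2, 3.5, 1.0, 7.8, 2.1])
x4_diversity = np.array([1.1, 3.0, 1.0, 6.9, 1.8])

rho, p = spearmanr(r5_diversity, x4_diversity)
print(f"Spearman rho = {rho:.4f}, P = {p:.3g}")
```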
Jensen-Shannon divergence correlates with differences in diversity in HIV-1 gp120, Tat, and LTR sequences

[Fig. 1 caption: With this visualization, the vertical distance between the line and the corresponding dot represents the difference in diversity between the R5- and X4-classified sequences at each position. In general, the X4 values cluster around the R5 values, with a slight skew towards less diversity within the X4 population. Spearman's rank correlation was performed to assess the correlation between R5 and X4 diversity for gp120 (ρ = 0.8678, P = 2.00 × 10^-156), Tat (ρ = 0.8873, P = 4.67 × 10^-35), and LTR (ρ = 0.7021, P = 4.06 × 10^-78). In all cases, R5 and X4 diversity were well correlated.]

Notably, Jensen-Shannon divergence correlates well with both mean diversity (ρ = 0.9226, 0.8552, and 0.9295 for gp120, Tat, and LTR, respectively) and the absolute difference in diversity (ρ = 0.9005, 0.8852, and 0.9685 for gp120, Tat, and LTR, respectively) (Fig. 2).

[Fig. 2 caption: Jensen-Shannon divergence is correlated with both mean genetic diversity and the absolute difference in genetic diversity. The relationship between Jensen-Shannon divergence and genetic diversity (order = 1) in HIV-1 gp120, Tat, and LTR sequences was evaluated using Spearman's rank correlation. Both the mean diversity of R5- and X4-classified sequences and the absolute difference between R5 and X4 diversity correlated with Jensen-Shannon divergence. This result indicates that large divergence can reflect not only increased amounts of information (high mean diversity) but also the loss of information in one of the two groups (a large absolute difference in mean diversity).]

Together, these observations indicate that the gain or loss of diversity in one of the populations is closely associated with high Jensen-Shannon divergence.

Amino acid diversity and Jensen-Shannon divergence identified domains in gp120 responsible for CCR5 or CXCR4 co-receptor utilization

The HIV-1 envelope protein gp120 was evaluated to test the effectiveness of first-order sequence position diversity and Jensen-Shannon divergence in identifying genetic signatures of co-receptor utilization, with the expectation that the V3 domain exhibits higher Jensen-Shannon divergence between R5- and X4-associated gp120 sequences. Diversity analysis, which as stated above was highly correlated between the R5 and X4 sequence populations, confirmed that the variable loops of gp120 display the greatest amount of amino acid diversity (Fig. 3a). The V1 and V4 variable domains, in particular, contain several positions that display diversity greater than 10 at order = 1, with a large proportion of positions having a diversity >3. Calculation of Jensen-Shannon divergence between the R5 and X4 populations identified 92 statistically significant positions throughout gp120 (Fig. 3b). A hypergeometric statistical test was used to determine whether any domains of gp120 were enriched in statistically divergent positions, as opposed to the null hypothesis of equal distribution. Overall, the variable domains were enriched in divergent positions when compared to the null model, while the conserved domains were depleted, although the C3 and V5 domains were in slight opposition to this trend (Fig. 4). Specifically, the V3 loop was very highly enriched [log2(fold change) = 1.89, P = 1.74 × 10^-11] while the C1 domain was statistically depleted at P < 0.01 using a Benjamini-Hochberg multiple testing correction.
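The domain-enrichment test described above can be sketched as follows; only the 35-residue V3 length and the 92 divergent positions come from the text, while all other counts are illustrative placeholders:

```python
# Hedged sketch of the hypergeometric domain-enrichment test: given D
# divergent positions among N alignment columns, is a domain of size n
# with k divergent positions over-represented?
from scipy.stats import hypergeom
from statsmodels.stats.multitest import multipletests

def enrichment_p(k, n, D, N):
    """Upper-tail hypergeometric P-value: P(X >= k)."""
    return hypergeom.sf(k - 1, N, D, n)

N, D = 500, 92                                # total positions; divergent positions
domains = {"V3": (35, 20), "C1": (90, 4)}     # hypothetical (size, divergent count)
pvals = [enrichment_p(k, n, D, N) for n, k in domains.values()]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.01, method="fdr_bh")
print(dict(zip(domains, p_adj)))
```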
With the understanding that gp120 and V3 behaved as expected following the application of diversity and divergence in this study, Tat and the LTR were investigated for similar signatures that may co-evolve with alterations in the co-receptor utilization patterns exhibited by Env-V3.

X4 Tat sequences demonstrate purifying selection in amino acid usage

In general, we observed an overall trend for Tat to have higher amino acid diversity in the fifth and sixth domains in both groups (Fig. 5a). When we considered Jensen-Shannon divergence, positions 7, 23, 57, and 60 were found to be statistically divergent at P < 0.01 when accounting for multiple testing with the Benjamini-Hochberg procedure (Fig. 5b). For all four divergent positions, the set of amino acids used in each population was similar, both with regard to the amino acids observed and their physiochemical properties (Fig. 6; Table 2). In all cases, the amino acid diversity of the X4 population was less than that of the R5 population, demonstrating the qualitative trend that a subset of major variants becomes further enriched within the X4 population (Fig. 7).

[Fig. 3 caption (fragment): ... [17]. (a) The diversity index at a Hill number of 1 was calculated for each position for both R5 (red) and X4 (blue) gp120 amino acid sequence populations. Diversity values range from 1 to greater than 10, with the variable domains of gp120 displaying the greatest diversity. (b) The Jensen-Shannon divergence between R5 and X4 gp120 sequence populations was computed for each amino acid position and plotted with a diamond. Statistically divergent positions (P < 0.01) were plotted in red. A Monte Carlo permutation test was performed to iteratively group gp120 sequences into random groups and calculate a distribution of expected Jensen-Shannon divergence values. The full range of this distribution was plotted in light blue with the interquartile range plotted in dark blue. The full range of divergence for randomly generated groups is in close agreement with the combined diversity of the R5 and X4 populations.]

While there was no adequate statistical methodology to test the significance of the difference in diversity at a single position, a Fisher's exact test was used to demonstrate the statistical enrichment of the consensus variants R7 (P = 0.00053), T23 (P = 0.0012), and Q60 (P = 0.0158) within the X4 group that is not a reflection of differences in R5 and X4 group sizes. In contrast, Tat variant R57 was not statistically enriched in the X4 population (P = 0.17). Regardless, this observation lends support to a mechanism in which the HIV-1 Tat X4 genotype undergoes purifying selection in concert with the change in V3 co-receptor usage from R5 to X4, whereas R5 Tat may be able to persist within a less constrained sequence space that allows it to effectively drive HIV-1 transcription in both T cells and MDMs and perhaps other cell lineages. The four Tat amino acid positions of interest are located within the transactivation, cysteine-rich, TAR-binding, and nuclear localization domains, respectively (Fig. 5). In order to further evaluate the amino acid usage of these positions, the relative abundance of each variant was plotted (Fig. 6). In both R5 and X4 Tat, arginine is the most common variant at position 7, although R7 is much more conserved within the X4 population. Previous studies have shown that an R7G substitution has only 93.9 ± 6.5 % of the transactivation activity of R7 [32].
While this effect size is small, it may contribute to differences in the transactivation activity between R5 and X4 HIV-1. Position 23 of Tat is also likely to play an important role in robust transactivation. In our data set, threonine was the most common variant observed at position 23, although asparagine was a frequent variant in both the R5 (46 %) and X4 (16 %) sequences. The T23N substitution has been shown to increase transactivation of the HIV-1 LTR, as well as binding to P-TEFb [33]. Accordingly, Tat N23 has been suggested to confer an advantage to HIV-1 by compensating for deleterious Tat mutations and supporting the replication of less fit drug-resistant or immune-evasive quasispecies [33].

[Fig. 6 caption: The relative abundance of each amino acid variant at positions 7, 23, 57, and 60 was plotted for both R5 and X4 groups as a stacked bar chart representing the total genetic variation within each population at the respective positions. Amino acids were color coded according to physiochemical property using the following scheme: positively charged (red), negatively charged (blue), polar uncharged (purple), hydrophobic (green), and unclassified (glycine, proline, and cysteine; yellow). The amino acid positions 7, 23, 57, and 60 were selected due to their statistically significant Jensen-Shannon divergence.]

Furthermore, the enrichment of the T23N substitution in the R5 Tat population may relate to the ability of R5 HIV-1 to productively infect both T cells and monocytes and the selection of R5 HIV-1 at the time of transmission, as opposed to X4 HIV-1, which lacks a strong association with monocyte infection and is selected against at the time of transmission. The co-selection and possible synergistic effect of these Tat variants remains an area of future investigation.

Predicted transcription factor binding sites have statistically different binding affinity scores between X4 and R5 LTR sequences

Genetic diversity and Jensen-Shannon divergence analyses were performed on LTR nucleotide sequences. 518 nucleotide positions spanning the HIV-1 U3, R, and U5 regions were evaluated. High levels of nucleotide diversity were present throughout the entire LTR and did not display a general pattern beyond being low, i.e. more highly conserved, in the approximately 50 nucleotides immediately downstream of the transcription start site that correspond to the TAR region of the LTR. This observation translated into a large number of statistically significant Jensen-Shannon divergence scores (n = 48) between the two populations (Fig. 8). A number of these statistically divergent positions were identified at nucleotide positions within the core enhancer domain, the region of the LTR spanning approximately 200 nucleotides upstream of the transcription start site, while a high number of divergent positions were also identified in the less well characterized modulatory domain further upstream. Due to the high concentration of known transcription factor binding sites within the core enhancer domain, this region of the LTR was the focus of further analysis. The difference in nucleotide usage within known transcription factor binding sites was visualized using two-sequence logos (Fig. 9).
This visualization creates a sequence logo for each transcription factor binding site that indicates nucleotides enriched within either the R5 or X4 population, scaled according to the maximum difference in relative abundance, such that nucleotides more frequently found in the R5 or X4 population are displayed on the bottom or top partition of the two-sequence logo, respectively, while completely conserved nucleotides are displayed in the middle. This analysis demonstrated that the greatest differences in relative nucleotide abundance occurred within sites C/EBP-I (54.7 %), C/EBP-II (34.8 %), and Sp-III, while ATF-CREB (11.8 %) showed rather modest differences between X4 and R5 in terms of nucleotide usage. Statistically divergent positions were identified within several of these transcription factor binding sites (Fig. 8). Although R5- and X4-specific nucleotide positions were identified throughout the LTR, it was not clear whether those changes would result in meaningful differences between R5 and X4 in terms of transcription factor binding affinity.

[Fig. 9 caption: Experimentally validated transcription factor binding sites were evaluated to detect enrichment and depletion of nucleotide variants in R5 and X4 sets of aligned LTR sequences using two-sample logos. Enriched nucleotides were plotted proportional to the difference between the populations, with the most differential position setting the vertical-axis scale.]

In order to quantitatively evaluate the difference between R5 and X4 LTR sequence groups, analysis was performed using the JASPAR database, a collection of transcription factor DNA-binding preferences modeled as matrices [35,36]. Each LTR sequence was scanned for transcription factor binding sites by scoring against position weight matrices (PWMs), which were converted from JASPAR position frequency matrices (PFMs) downloaded from the JASPAR vertebrate database for C/EBP, SP1, NF-κB, and CREB. Each score was then compared to the maximum possible score of the corresponding PWM in order to determine a percentile score. Only binding sites with a mean percentile score >0.30 in either the R5- or X4-classified LTR sequences were considered for statistical analysis (Table 3). The distributions of R5 and X4 binding scores were statistically compared using a Kolmogorov-Smirnov (KS) test to calculate a P value. Using the JASPAR matrices, we were able to correctly identify the locations of each of the eight known transcription factor binding sites within the LTR when using a percentile scoring threshold of 0.3. At this threshold, the JASPAR matrices for C/EBP, Sp1, and CREB also identified several other potential novel binding locations (Table 3). Statistical analysis identified differential binding scores between the R5 and X4 populations at sites C/EBP-I, C/EBP-II, and Sp-III, but not at the known transcription factor binding sites Sp-I or Sp-II (Table 3). This result is in agreement with the positions identified using Jensen-Shannon divergence. Additionally, novel putative Sp1 and C/EBP binding sites with differential R5/X4 JASPAR scores were noted at positions 223 and 150, respectively. Interestingly, the relative magnitudes of the R5 and X4 mean binding scores of C/EBP-I and -II were opposite one another, with X4 LTRs having a greater mean binding score than R5 LTRs for C/EBP-I, whereas R5 LTRs have a greater mean binding score for C/EBP-II. The novel putative C/EBP site followed the trend of C/EBP site I, and had a greater mean score among X4 LTRs.
This relationship may be a compensatory effect by which the diminished binding affinity of C/EBP-II, as the virus mutates from R5 to X4, leads to greater binding to C/EBP-I and perhaps other putative C/EBP sites. This overall trend is also reflected among Sp binding sites. As the binding affinity of Sp-III diminishes in X4 virus when compared to R5, a putative novel Sp site at position 223 gains enhanced binding affinity as indicated by JASPAR scoring. Generally, Sp sites have been shown to be more important for LTR-driven transcription in T lymphocytes than in cells of the monocyte-macrophage lineage [37]. Furthermore, transcription factor binding at Sp-III varies with respect to the level of differentiation of monocytes [38]. Overall, we find that LTR-driven transcription is modulated by proteins of the Sp family in a manner that is specific to cell phenotype. As an important contrast, CREB and NF-κB binding sites did not demonstrate a difference in overall binding affinity between the R5 and X4 groups, although the analysis identified all known binding sites in both R5 and X4 populations. This is likely due to the fact that these sites have been shown to be essential for both T-lymphocyte and monocyte-macrophage replication [39][40][41][42].

Conclusion

The V3 domain of the HIV-1 env gene evolves throughout the course of infection, often resulting in a switch from an R5 to an X4 swarm. However, R5 and X4 HIV-1 have not been characterized beyond the envelope, specifically with respect to the transcriptional regulation of HIV-1. Our findings indicate that when comparing X4 HIV-1 to R5 HIV-1 sequences, Tat amino acid variants are more strictly selected at several key positions, and specific LTR nucleotide variants are preferentially present in X4 HIV-1 sequences when compared to R5 HIV-1. One particular caveat of our analysis, and of any that involves historical sequence review, is the choice of the functional annotation algorithm used in these studies. While the specific results may differ to some extent with the choice of computational tools used in a given study, the overall finding that there is co-evolution between gp120 and other regions of the HIV-1 genome remains consistent. These results are significant because they indicate that the transcriptional phenotype of HIV-1 may diverge with respect to co-receptor utilization. Importantly, the HIV-1 amino acid positions identified in Tat as different between X4 and R5 play roles in supporting robust transactivation, while the LTR nucleotide mutations associated with X4 and R5 strains are found within known and putative transcription factor binding sites and may affect their occupancy and contributions to the regulation of HIV-1 gene expression. We regard the genetic variation between X4 and R5 HIV-1 Tat and LTR sequences to be indicative of selection caused by the differential intracellular environments of cells preferentially infected by X4 versus R5 HIV-1 quasispecies. Consequently, the evolution of HIV-1 from an R5 to an X4 swarm likely requires adaptation at the level of transcriptional control in addition to co-receptor binding and entry.

HIV-1 sequence collection

HIV-1 sequences containing the Env-V3 region in addition to a co-linear Tat or LTR were collected and annotated from the Los Alamos National Laboratory (LANL) HIV Sequence Database as of October 2014, while sequences from the Drexel Medicine CARES Cohort were obtained as previously described [43][44][45][46]. The sequences from the Drexel Medicine CARES Cohort have been submitted to Genbank under BioProject ID: PRJNA319822.
To reduce the effect of regional and subtype differences, the LANL database query was limited to include only subtype B sequences isolated from North America. The query was further limited to a single sequence per patient using the LANL query tool, which specifically excludes laboratory strain sequences or those used for functional studies. Table 1 shows the breakdown of sequences for each region.

Co-receptor usage classification

The in silico co-receptor usage prediction tool Web-PSSM was used to classify all sequences as CCR5- or CXCR4-utilizing based on the score of the co-linear Env-V3 amino acid sequence [17]. Numerous exclusion methods were utilized to reduce noise introduced by Web-PSSM predictions, as discussed previously [47]. Sequences were excluded from the study if the V3 region was not 35 amino acid residues in length, if the V3 percentile determined by Web-PSSM was greater than 0.95 (indicating that a given sequence may not be a V3 sequence), or if the V3 PSSM score was in the 'indeterminate range' (using scoring cutoffs of >-2.88 and <-6.96 for X4 and R5 Env-V3 sequences, respectively), which was defined as a scoring range consisting of sequences with R5 and/or X4 properties, including sequences that are dual-tropic (X4/R5). Using these cutoffs, this predictor has an 84 % sensitivity and 96 % specificity, indicating its ability to detect X4 binding sequences and non-binding sequences, respectively [17]. This filtering method allowed the genetic analysis to focus on sequences with the highest-confidence classification in the PSSM-derived distribution, definitively signifying CCR5- or CXCR4-utilizing Env-V3 sequences. Following classification as R5 or X4, the co-linear gp120, Tat, and LTR sequences were aligned to the HXB2 reference sequence (K03455) using Multiple Sequence Comparison by Log-Expectation (MUSCLE, version 5.05) [48] with default parameters; insertions relative to the reference were removed to simplify the analysis. This pipeline resulted in R5- and X4-associated multiple sequence alignments for each of gp120, Tat, and LTR (Table 1).

Genetic diversity and rarefaction

The diversity of each amino acid or nucleotide position of the respective multiple sequence alignments was calculated using a window length, w, of 1 and an order of 1 [equivalent to exp(Shannon entropy with base e)] according to Eq. 1 [49]:

${}^{q}D_i = \Big( \sum_{a} p_{a,i}^{\,q} \Big)^{1/(1-q)}, \qquad \lim_{q \to 1} {}^{q}D_i = \exp\Big( -\sum_{a} p_{a,i} \ln p_{a,i} \Big) \qquad (1)$

Diversity, D, weighs the abundance of all variants, p, at a given position, i, in the protein. A window length, w, is applied, with w = 1 used in order to independently assess the diversity of each position within a multiple sequence alignment. At an order, or Hill number, of q = 1, D is not defined directly; however, the limit as q approaches 1 can be computed as shown above. Diversity at order = 1 calculates the effective number of species (amino acids or nucleotides) in a population while giving greater weight to neither rare nor abundant species. The maximum possible diversity is 20 for amino acid sequences (gp120 and Tat) and 4 for nucleotide sequences (LTR), with gaps regarded as missing data. In general, positions of high structural or functional importance are evolutionarily constrained in their use of amino acids or nucleotides and therefore demonstrate low diversity, while positions more permissive to variation in amino acid or nucleotide usage display higher genetic diversity [50]. Rarefaction curves for each position were generated in order to ensure that sufficient sample sizes existed for each comparison being made.
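Eq. 1 at order 1 reduces to the exponential of the Shannon entropy of one alignment column; a minimal sketch, with gaps treated as missing data as in the text:

```python
# Minimal sketch of Eq. 1 (first-order / Hill-number-1 diversity) for one
# multiple-sequence-alignment column.
import math
from collections import Counter

def first_order_diversity(column):
    """exp(Shannon entropy, base e) of the residues in one MSA column."""
    residues = [r for r in column if r != "-"]       # drop gaps
    counts = Counter(residues)
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    return math.exp(entropy)

print(first_order_diversity("AAAAGGGT"))  # ~2.65 effective residues
```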
Jensen-Shannon divergence

Jensen-Shannon divergence is a measure of the similarity between two probability distributions that can be applied to profile-to-profile comparisons, with the divergence score bound by 0 (similar) and 1 (dissimilar) [51,52].

[Table 3 caption: LTR sequences classified as either R5 or X4 based on their co-linear Env-V3 sequence were scanned for potential transcription factor binding sites. All binding sites with a percentile score >0.3 in either of the two groups were included in this analysis. The overall score distributions of R5 and X4 binding sites were compared using a KS test, and multiple testing was accounted for using the Benjamini-Hochberg procedure. Statistically significant sites (P < 0.01) are highlighted in italics.]

Multiple sequence alignments (MSAs) generated from R5- and X4-classified sequence populations were used to generate position frequency matrices (PFMs). Each PFM contains the relative abundance of each residue (amino acid or nucleotide) for each position (N) of the multiple sequence alignment, resulting in 20 × N or 4 × N matrices for amino acid or nucleotide sequences, respectively. Residues that are not present in any of the sequences at a particular position of the MSA were represented with a pseudo-count of 1 × 10^-7, equivalent to a relative abundance of 1 instance per ten million sequences, which is approximately 10^4-fold to 10^6-fold lower than being present in a single sequence. PFMs derived from R5- and X4-classified sequences were used to calculate the Jensen-Shannon divergence between populations according to Eq. 2:

$D_{JS} = \tfrac{1}{2} \sum_{a} Q_1^a \log_2 \tfrac{Q_1^a}{Q_0^a} + \tfrac{1}{2} \sum_{a} Q_2^a \log_2 \tfrac{Q_2^a}{Q_0^a}, \qquad Q_0^a = \tfrac{1}{2}\big(Q_1^a + Q_2^a\big) \qquad (2)$

where the Jensen-Shannon divergence, D_JS, is determined according to the abundance of each amino acid variant, Q_a, in populations 1 and 2, using an information theory-based calculation. The value Q_0 is calculated for each amino acid variant, and a pseudo-count is utilized for amino acid variants absent in both populations. Statistically significant positions were identified by applying a Monte Carlo permutation test, which randomly re-grouped the total pool of sequences into groups of size M and N iteratively (n = 1000), where M and N are equivalent to the number of sequences in the X4 and R5 groups, and generated a probability density function (PDF) of the Jensen-Shannon divergence values of the randomized model using a Gaussian kernel density estimator implemented in SciPy. Numerical integration was used to determine the probability of finding a random value greater than or equal to the true Jensen-Shannon divergence.

Statistical analysis

Statistical analysis was performed in custom IPython Notebooks using the SciPy Python library (version 0.14.0). Spearman's rank correlation coefficient was used to evaluate the relationship between R5 and X4 diversity (Fig. 1), as well as the relationship of Jensen-Shannon divergence to mean genetic diversity and the absolute difference in genetic diversity (Fig. 2). gp120 domain enrichment analysis (Fig. 4) of statistically divergent positions utilized a hypergeometric test, with a null hypothesis of equal distribution of divergent positions. Enrichment of consensus amino acids within statistically divergent Tat positions was tested using a Fisher's exact test (Fig. 6).

Two sequence logos

Experimentally validated transcription factor binding sites were evaluated using Two Sample Logo [53].
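The per-position divergence and the permutation test can be sketched as follows. Note one simplification relative to the Methods: the paper smooths the null distribution with a Gaussian KDE and integrates, whereas this sketch uses the empirical tail directly.

```python
# Hedged sketch of Eq. 2 and a simplified Monte Carlo permutation test.
import numpy as np
from scipy.stats import entropy  # entropy(p, q) gives the KL divergence

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")

def freqs(residues):
    """Relative-abundance vector for one alignment column (with pseudo-counts)."""
    counts = np.array([np.sum(residues == a) for a in AMINO_ACIDS], float)
    counts += 1e-7                      # pseudo-count for absent variants
    return counts / counts.sum()

def js_divergence(q1, q2):
    """Base-2 Jensen-Shannon divergence; bounded by 0 and 1."""
    q0 = 0.5 * (q1 + q2)
    return 0.5 * entropy(q1, q0, base=2) + 0.5 * entropy(q2, q0, base=2)

def permutation_p(col_r5, col_x4, n_iter=1000, seed=0):
    """P(randomly regrouped JSD >= observed JSD) for one column."""
    rng = np.random.default_rng(seed)
    observed = js_divergence(freqs(col_r5), freqs(col_x4))
    pooled = np.concatenate([col_r5, col_x4])
    m = len(col_x4)
    null = np.empty(n_iter)
    for i in range(n_iter):
        rng.shuffle(pooled)
        null[i] = js_divergence(freqs(pooled[:m]), freqs(pooled[m:]))
    return float(np.mean(null >= observed))
```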
Two Sample Logo is a web-based application that calculates and visualizes the differences between two sets of aligned sequences. Each nucleotide was represented with a different color, and the height of the one-letter nucleotide code was scaled according to the magnitude of the difference in abundance of the nucleotide at a given position, with the largest difference in each comparison represented by the maximum height in the logo.

Identification of putative transcription factor binding sites

Position frequency matrices (PFMs) were downloaded from the JASPAR redundant vertebrate database for C/EBP, Sp, NF-κB, and CREB. Each PFM was converted into a position weight matrix (PWM) as previously described [52]. Each LTR sequence was scanned along its entirety to score every potential binding site using each of the PWMs. Each score was then compared to the maximum possible score for the PWM being used in order to determine a percentile score. Only binding sites with a mean percentile score >0.30 in either the R5- or X4-classified LTR sequences were considered for statistical analysis (Table 3). Binding affinities as defined by PWM score show a non-Gaussian distribution (data not shown). As such, the Kolmogorov-Smirnov (KS) test was used to compare affinities between different groups. The PWM was applied to each LTR and then the R5 and X4 distributions were compared. The P values were adjusted using the Benjamini-Hochberg procedure.
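A minimal sketch of the PFM-to-PWM conversion and the scanning step follows. It uses a common log-odds construction; the paper's exact transformation follows its ref. [52], so details such as pseudo-counts and background frequencies may differ.

```python
# Hedged sketch of PFM -> PWM conversion and percentile scoring of an LTR.
import numpy as np

BASES = "ACGT"

def pfm_to_pwm(pfm, background=0.25, pseudo=0.8):
    """Log2-odds PWM from a 4 x L count matrix (rows in A, C, G, T order)."""
    counts = np.asarray(pfm, float) + pseudo
    probs = counts / counts.sum(axis=0, keepdims=True)
    return np.log2(probs / background)

def percentile_scores(sequence, pwm):
    """Score every window of a sequence against the PWM, relative to its maximum."""
    L = pwm.shape[1]
    max_score = pwm.max(axis=0).sum()     # best achievable score
    scores = []
    for i in range(len(sequence) - L + 1):
        s = sum(pwm[BASES.index(b), j] for j, b in enumerate(sequence[i:i + L]))
        scores.append(s / max_score)
    return scores
```

Sites whose mean percentile score exceeds 0.30 in either group would then have their R5 and X4 score distributions compared with scipy.stats.ks_2samp, with Benjamini-Hochberg adjustment via statsmodels.stats.multitest.multipletests.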
Flips, dualities and symmetry enhancements

We present various 4d $\mathcal{N}=1$ theories enjoying IR global symmetry enhancement. The models we consider have $USp(2n)$ gauge group, 8 fundamental chirals, one antisymmetric chiral and various numbers of gauge singlets. By suitably turning on superpotential deformations involving the singlets which break part of the UV symmetry, we flow to SCFTs with $E_6$, $SO(10)$, $SO(9)$, $SO(8)$ and $F_4$ IR global symmetry. We explain these patterns of symmetry enhancement following two arguments due to Razamat, Sela and Zafrir. The first one involves the study of the relations satisfied by marginal operators, while the second one relies on the existence of self-duality frames.

Introduction

The 4d N = 1 SU(2) gauge theory with 8 fundamental chiral fields admits 72 dual frames which are rotated into each other by the action of the coset group W(E7)/S8 [1]. In addition to the original description, there are 35 Seiberg dual frames [2], 35 Csaki-Schmaltz-Skiba-Terning (CSST) dual frames [3] and 1 Intriligator-Pouliot (IP) dual frame [4]. Since the theory has 8 fundamental chirals without any superpotential, it preserves the SU(8) global symmetry. In the Seiberg and CSST dual frames, however, this SU(8) symmetry is broken to SU(4) × SU(4) × U(1) in the UV, whereas it is restored at the IR fixed point. In [1] it was shown that those 72 dualities form an orbit of W(E7)/S8, and it was also found that this structure persists for higher rank USp(2N) theories provided extra matter in the traceless antisymmetric representation of the gauge group USp(2N) is added. In the higher rank case 35 frames correspond to the duality discussed in [5], while the other frames correspond to generalizations of the Seiberg and IP dualities. It is natural to wonder whether it is possible to construct theories which actually display E7, or other enhanced symmetries, rather than being rotated to a dual frame by the E7 Weyl action. The first theories with E7 symmetry were constructed in [6]. This E7 model, as well as many other models with enhanced global symmetries, can be realised geometrically by compactifying the 6d N = 1 SCFTs on Riemann surfaces with fluxes for the global symmetry of the six-dimensional theory [7][8][9][10][11][12][13][14][15][16]. It is indeed expected that the subgroup of the 6d symmetry preserved by the flux will also be the global symmetry of the resulting 4d N = 1 SCFT. It also often turns out that the expected global symmetry is not visible from the UV quiver description of the SCFT but emerges in the IR. The 6d perspective allows us to make interesting predictions for models with symmetry enhancements in 4d. These can then be tested with a more direct 4d analysis, like the computation of the superconformal index [17][18][19]. On the other hand, it is also possible to develop purely 4d QFT strategies to understand and predict symmetry enhancements. In this note we follow two main strategies to discuss several models with SU(2) gauge group, 8 chirals and various amounts of singlets displaying E6, SO(10), SO(9), SO(8) and F4 symmetry in the IR. We also construct higher rank versions of these models, showing that N = 1 USp(2N) theories with 8 fundamental and one antisymmetric chirals, with various selections of singlets and superpotentials, display E6, SO(10), SO(9), SO(8) and F4 symmetries in the IR.
The first strategy relies on the relation between the symmetry enhancement and the chiral ring relations of marginal operators, which was discussed in [15] and can be summarised as follows. As observed in [21], conserved currents and marginal operators contribute to the order pq of the superconformal index:

$\mathcal{I} = \cdots + (\chi_{\text{ind-mar}} - \chi_{\text{cur}})\, pq + \cdots$

Suppose the marginal operators take the schematic form O^(1) O^(2), where O^(1) and O^(2) are chiral ring generators transforming in representations R^(1) and R^(2) of the global symmetry, subject to a chiral ring relation transforming in a representation τ. One can remove those marginal operators by flipping either O^(1) or O^(2). Say we flip O^(2) with an extra flipping field F in the representation R^(2). Then we have additional contributions originating from F and its supersymmetric partner ψ_F (where ψ_F denotes the fermionic partner of the scalar F), which cancel the contribution of O^(1) O^(2) and leave -χ_τ pq. The remaining contribution -χ_τ pq, which used to encode the chiral ring relation of the marginal operators, now joins the current. Thus, the total current multiplet contribution is enlarged to -(χ_cur + χ_τ) pq and may form the adjoint representation of an enhanced global symmetry. In section 2 we will extend this method and consider rank one models where only part of the marginal operators are removed. This has the effect of breaking the manifest UV symmetry to a subgroup, but we will gain a variety of interesting IR enhancements.

The second strategy was proposed in [22] and relies on the interplay between self-dualities and enhanced symmetries. By "self-duality" we mean that the dual theory has exactly the same gauge group, the same matter content (including gauge singlets) and the same superpotential as the original theory, but the two theories are related by a non-trivial map on the operator spectrum and on the global symmetries. The existence of self-dual frames implies that the theory is invariant in the IR under a larger set of transformations than the manifest UV symmetry group. In favorable situations, these additional transformations can lead to a symmetry enhancement in the IR. More precisely, if G_IR is the IR symmetry group, then we should be able to identify as many equivalent frames of the theory as the order of the Weyl group of G_IR. In the case in which part of this symmetry is enhanced in the IR, that is G_UV ⊂ G_IR, we expect the transformations in W(G_IR)/W(G_UV) to come from non-trivial self-dualities of the theory, while the transformations of W(G_UV) are trivial invariances of the UV Lagrangian. Reversing the argument, whenever we have self-duality frames we might expect an enhanced symmetry. For example, for the USp(2N) theories with 8 fundamental and one antisymmetric chiral fields mentioned above, by adding extra singlets we might find a subset of the 72 dual frames which are actually self-dualities in the sense specified above and provide the missing frames to account for an enhanced IR symmetry. Following this strategy, [22] constructed an SU(2) theory with E6 global symmetry and [23] a USp(4N) model with E7 × U(1) global symmetry, which for n = 1 is related to the model of [6]. In section 3 we will apply this line of reasoning to the models of section 2, listing the extra self-duality frames accounting for various types of enhancement and checking the superconformal index. We will also discuss various deformations which break some of the manifest UV symmetries, leading to further interesting IR enhancement. In section 4 we will present the higher rank version of these models involving USp(2N) gauge groups. In appendix A we briefly review the action of the Seiberg, CSST and IP dualities on the global symmetries.
In appendix B we will discuss the IR behavior of the FE[USp(4)] theory, which is part of the family of FE[USp(2N)] theories discussed in [14], arguing that it enjoys SO(10) × U(1)^2 global symmetry. Finally, in appendix C we study the plethystic logarithm of the index to extract the relations satisfied by marginal operators.

Flips, relations and enhanced global symmetries

In this section we apply the first strategy to look for models with interesting symmetry enhancement patterns. We begin by revisiting the discussion of the E7 model of [23] from this perspective. Let us consider the USp(4) gauge theory with 8 fundamental chirals Q_i, i = 1, ..., 8, one traceless antisymmetric chiral X and a singlet x_2 with W = x_2 X^2. Its superconformal index is given by

I = 1 + 28 t^{-1/2} (pq)^{Δ_Q} + 28 t^{1/2} (pq)^{1-Δ_Q} + t^{-2} (pq)^{1-Δ_A} + (336 + 70) t^{-1} (pq)^{2Δ_Q} + 28 t^{-1/2} (p + q)(pq)^{Δ_Q} + (378 + 336 − 63 − 1) pq + · · · (2.1)

where Δ_Q is the R-charge of the 8 fundamental chirals and Δ_A is that of the antisymmetric chiral. Δ_Q and Δ_A satisfy the relation 4Δ_Q + Δ_A = 2, which comes from requiring the existence of a non-anomalous R-symmetry. The first three terms are the contributions of the chiral ring generators

m_{0,ij} ≡ Tr_g(Q_i Q_j),  m_{1,ij} ≡ Tr_g(Q_i X Q_j),  x_2 (2.2)

respectively, where the first two are in the antisymmetric representation of the SU(8) flavor symmetry while the last one is a singlet. In addition, one can see the current multiplet contribution −(63 + 1) pq, which reflects the SU(8) × U(1)_t global symmetry. In this theory the marginal operators are given by

m_{0,ij} m_{1,kl} (2.3)

satisfying the relation

m_{0,[ij} m_{1,kl]} = 0, (2.4)

which can be explained as follows [23]. Consider the object Q_i^a Q_j^b Q_k^c Q_l^d X^{ef}, where both the USp(4) gauge indices a, b, c, d, e, f and the SU(8) flavor indices i, j, k, l are not contracted. We want to show that if we antisymmetrize all the flavor indices, there is no way of contracting the gauge indices to make a gauge invariant object. This is because Q_{[i}^a Q_j^b Q_k^c Q_{l]}^d, where all the flavor indices are antisymmetrized, transforms in the fourth antisymmetric power of the fundamental representation of USp(4) (since the Q's are bosons), which is just a singlet. Hence, there is no way of multiplying this by the antisymmetric X^{ef} and contracting the gauge indices so as to make a non-vanishing gauge invariant object. Thus, those in (2.3) subject to the relation (2.4) give 28 × 28 − 70 = 378 + 336 independent marginal operators. One can remove the marginal operators by flipping either m_0 or m_1, where the two choices are merely related by a duality. Once the marginal operators are removed, the 70 relations among them join the current. Thus, the total number of conserved currents is 63 + 1 + 70 = 133 + 1, which form the adjoint representation of E7 × U(1). The model where m_0 is flipped by M_0 is exactly the model with E7 × U(1) global symmetry found in [23].

One may wonder whether we can analogously construct an SU(2) model with E7 symmetry. If we look at the index of the SU(2) theory with 8 chirals and W = 0 (2.5), we note two things: first, the global symmetry is SU(8) without an additional U(1), because there is no antisymmetric matter for the SU(2) theory; second, the only chiral ring generators are the mesons m_{0,ij} = Tr_g(Q_i Q_j), which transform in the antisymmetric representation of such SU(8) flavor symmetry.
The marginal operators in this theory are given by

m_{0,ij} m_{0,kl} (2.6)

subject to the relation

m_{0,[ij} m_{0,kl]} = 0. (2.7)

One way to see where these relations originate is along the lines of the argument we used for the USp(4) gauge theory. If we consider the combination Q_{[i}^a Q_j^b Q_k^c Q_{l]}^d, where all the SU(2) gauge indices are not contracted while the SU(8) flavor indices are antisymmetrized, it should transform in the fourth antisymmetric power of the fundamental representation of SU(2), which doesn't exist. Hence, we can't construct a gauge invariant object out of it. Thus, those in (2.6) subject to the relation (2.7) give 28·29/2 − 70 = 336 independent marginal operators. One may attempt to remove the marginal operators by flipping m_0 such that the 70 relations are translated into 70 conserved currents as above. However, even though we introduce new flipping fields, say M_{0,ij}, which flip m_{0,ij} such that the original marginal operators are removed, the M_{0,ij} provide new marginal operators M_{0,ij} M_{0,kl} subject to the same number of relations M_{0,[ij} M_{0,kl]} = 0. Thus, there is no change in the contributions of the relations and the current multiplet. This is because the assumption O^(1) ≠ O^(2) in the above argument fails to hold. Indeed, a similar situation happens for higher odd ranks: there are operators of R-charge 1 whose squares give marginal operators which cannot be removed by flipping the operators of R-charge 1. Therefore, one cannot obtain an E7 model for odd-rank theories, at least in this way.

Since in SU(2) models we cannot achieve the symmetry enhancement by removing the marginal operators completely, we can try to introduce flips of operators which break the SU(8) global symmetry into subgroups and only partially remove the marginal operators. We will see that while those partial flips reduce the manifest global symmetry in the UV, they eventually lead to intriguing patterns of symmetry enhancement in the IR. Since the partial flips break the UV global symmetry, they can be organised along the lines of the symmetry breaking patterns of the SU(8) global symmetry. The maximal subgroups of SU(8) we need are listed in (2.8).

The R-charges of the chiral fields in the original theory preserving SU(8) are determined by the anomaly condition and do not change along the RG flow, because there is no U(1) that can mix with U(1)_R. On the other hand, if the UV symmetry is broken as in (2.8) by introducing flipping fields, we do have a U(1) symmetry. If this U(1) does not mix with U(1)_R, the R-charges of the operators do not change and we cannot remove the marginal operators, for the same reason we explained before for the E7 case. Thus, nothing interesting happens in this case. On the other hand, if the U(1) mixes with U(1)_R, the contributions charged under the U(1) that sat at order pq before the flip won't appear at order pq anymore after the flip. Thus, only the U(1)-neutral contributions will remain at order pq and, among these, those with negative sign that used to correspond to relations before the flip are of interest for us, since they may now combine into the flavor current. Thus, when we decompose operators, relations and currents using the branching rules for the cases in (2.8), we will only look at the U(1)-neutral contributions. As we are going to discuss below, only the symmetry breaking to SU(6) × SU(2) × U(1) has neutral sectors suitable for our discussion, and it leads to E6 × U(1) IR global symmetry.
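Before turning to the individual breaking patterns, the dimension counting used in the flip arguments above can be checked numerically. This is our own illustration (standard-library arithmetic only, with the representation dimensions taken from the text):

```python
from math import comb

# 28 = dim of the antisymmetric of SU(8); 70 = dim of the fully
# antisymmetric 4-index tensor of SU(8), counting the relations.
mesons = comb(8, 2)      # m0 (and m1 in the USp(4) theory)
relations = comb(8, 4)   # m_{0,[ij} m_{1,kl]} = 0, resp. m_{0,[ij} m_{0,kl]} = 0

# USp(4): marginal operators m0*m1 modulo relations, eqs. (2.3)-(2.4)
assert mesons * mesons - relations == 378 + 336

# SU(2): marginal operators m0*m0 (symmetric product) modulo relations
assert mesons * (mesons + 1) // 2 - relations == 336

# After the flip the relations join the SU(8) x U(1) currents: dim E7 = 133
assert 63 + 1 + relations == 133 + 1
```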
We can then further break the SU(6) × SU(2) × U(1) symmetry, considering either the breaking of SU(6) or of SU(2). The breaking of the latter into U(1), however, doesn't produce U(1)-neutral relations; thus we only need to consider the breakings of SU(6). In this case only the SU(4) × SU(2) × U(1) breaking has U(1)-neutral sectors and, as we will see, it leads to SO(10) × U(1)^2 IR global symmetry. Lastly, we consider the breakings of SU(4), among which only SU(2) × SU(2) × U(1) has U(1)-neutral relations and, as we will see, it leads to SO(8) × U(1)^3 IR global symmetry.

We begin by decomposing SU(8) representations in terms of SU(6) × SU(2) × U(1)_v (we use the branching rules of [24]):

• Conserved currents;
• Relations.

For each contribution above, the U(1)-neutral sectors are as follows. Our analysis actually holds regardless of the gauge rank. The only change is that there is another U(1)_a in the UV global symmetry acting on the traceless antisymmetric matter. Hence, we expect higher rank models exhibiting E6 × U(1)_v × U(1)_a, obtained by partial flips of the operators constituting the marginal operators. We will discuss this in section 4. In section 3.1 we will check this enhancement using the superconformal index. The higher rank version of this model is discussed in section 4.

Flips, self-dualities and symmetry enhancements

In the previous section we observed that by considering three different partial flips breaking the SU(8) UV global symmetry of the SU(2) theory with 8 fundamental chirals, we expect to find models exhibiting enhanced E6 × U(1), SO(10) × U(1)^2 and SO(8) × U(1)^3 IR global symmetries respectively. In section 3.1 we will check that these models indeed enjoy the expected symmetry by looking at the superconformal index expansion and, in addition, following the second strategy discussed in the introduction, we list all the duality frames accounting for the enhanced global symmetry. In section 3.2 we will consider various deformations, which in particular lead to models with SO(9) and F4 symmetries.

We begin by introducing all the fields and their charges. In this section we will work with conventions in which only the SU(2)^4 × U(1)_t × U(1)_u × U(1)_v subgroup of the full UV symmetry is explicitly manifest. This is done by splitting the 8 fundamental chiral fields into four doublets Q_1, ..., Q_4, one for each SU(2) flavor symmetry. The singlets we introduced in the previous section to break the SU(8) symmetry will also split accordingly. All models will have bifundamental singlets D_1, D_2, D_3, coupled via the superpotential (3.3), where Tr_{1,2,3,4} denote the traces over the SU(2)_{1,2,3,4} flavor indices. For some models we will also consider singlets b_i with i = 1, ..., 4, contributing b_i Tr_g Tr_i(Q_i Q_i) to the superpotential. In general we will denote by T_i with i = 1, 2, 3 the theory where the interactions involving respectively b_1, or b_1, b_2, or b_1, b_2, b_3 are turned on. We will also denote by T̃_i the theory where the interaction involving b_4 is also turned on. We use a different notation in this case because this latter interaction is not involved in the symmetry enhancement process, but we might need to turn it on to avoid having decoupled fields. For convenience we will also work in a different basis for the U(1)^3 symmetry.
Specifically, we will use a parametrization of the abelian symmetries that we will denote by U(1)_t × U(1)_u × U(1)_v, related to the symmetries of the previous section by the redefinition of the charges in (3.4), or equivalently, at the level of the fugacities in the index, in (3.5). The charges of the fields with this new parametrization are as in table 1, where we also give a possible choice of UV trial R-symmetry U(1)_{R_0}.

Self-dualities and enhancements

We start by discussing two models enjoying the SO(8) × U(1)^3 enhancement. In the first model, T_0, we introduce only the singlets D_1, D_2, D_3 interacting with W_0. The label of the theory stands for the fact that we don't introduce any of the b_i fields in this case. The matter content of the theory is summarized in the quiver diagram of figure 1.

Figure 1. Quiver diagram for the T_0 model.

Combining the information on the U(1)^3 charges of table 1 with the redefinition (3.4), we can see that the operators flipped by D_1, D_2, D_3 in W_0 are precisely the operators (2,2,1,1)_{0,2,2}, (1,2,2,1)_{−1,−1,2}, (2,1,2,1)_{1,−1,2} of section 2.3. Hence, we expect in this case that the manifest SU(2)^4 × U(1)^3 UV global symmetry gets enhanced in the IR to SO(8) × U(1)^3. This can be checked by computing the superconformal index of the theory T_0. We first perform a-maximization [25] to find the values of the mixing coefficients of the R-symmetry with the U(1)'s corresponding to the superconformal R-symmetry, which we approximate with rational values. The superconformal index then reads as in (3.8). In the expression of the index, each number is the character of an SO(8) representation and, in particular, the term −(28 + 3)pq highlighted in blue reflects the current multiplet, which is in the adjoint representation of SO(8) × U(1)^3. In this case we also have 72 − 48 = 24 marginal operators. Notice that the fact that R_1 = R_2 = R_3 implies that, among the three U(1) symmetries, only U(1)_v mixes with the R-symmetry. Thus, although we have used approximate R-charges for the expansion of the index, the terms independent of the fugacity v are exact; the terms of order pq, corresponding to 24 marginal operators and 31 conserved currents, are such examples.

Now, according to the argument in [22], since the size of the Weyl group of SO(8) is |W(SO(8))| = 4! · 2^3, of which W(SU(2)^4) = 2! 2! 2! 2! is manifest, we expect 12 = 4! · 2^3/(2! 2! 2! 2!) self-dual frames, including the original theory itself. To find the self-dual frames we proceed as follows. We specialize the SU(8) fugacities vector u defined in (A.2) according to the breaking of SU(8), where y_i is the SU(2)_i fugacity. This corresponds to choosing one particular representative in the orbit of the UV SU(2)^4 Weyl group. Now we implement the Seiberg, CSST and IP dualities. As explained in appendix A, these dualities transform the fugacities vector respectively as in eqs. (A.8), (A.12), (A.5). Inspecting the transformed vectors we can identify the self-dual frames: those will correspond to frames where we have a collection of charged chirals with the same R-charge and U(1)_{t,u,v} charges as in the original frame. We also checked that the self-dual frames have the same collection of singlets. In the end we have found that the 12 self-dual frames are realized by the original theory together with the dualities listed in (3.11).

Another model exhibiting the SO(8) × U(1)^3 enhancement is T_1, obtained by including the singlet b_1 with the superpotential

W_1 = W_0 + b_1 Tr_g Tr_1(Q_1 Q_1), (3.12)

(given the symmetry of the quiver in figure 2, this is equivalent to introducing the singlet b_2),
where W_0 is given in (3.3). The matter content of the theory is now summarized in the quiver diagram of figure 2. Notice that the new b_1 field flips the meson Tr_g(Q_1 Q_1), corresponding to the operator (1,1,1,1)_{2,2,2} in the notation of section 2.3. Performing a-maximization we find the approximate values of the mixing coefficients given in (3.14). The index computed with this R-symmetry again organises into characters of SO(8), and at order pq we can see the contribution of the SO(8) × U(1)^3 current highlighted in blue. We can explain the enhancement of theory T_1 in terms of self-dualities exactly in the same way as for theory T_0. Indeed, the operator Tr_g(Q_1 Q_1) is trivially mapped to itself under all the self-dualities (3.11). Hence, these are also self-dualities of theory T_1, and the same counting we did for T_0 explains the SO(8) enhancement for T_1. The singlet b_4 is also a spectator from the point of view of the self-dualities (3.11). This means that the theories T̃_0 and T̃_1, where we also turn on b_4 Tr_g Tr_4(Q_4 Q_4) in the superpotential, will still exhibit the SO(8) × U(1)^3 enhancement.

SO(10) × U(1)^2 model

Now we consider a model with IR SO(10) × U(1)^2 symmetry. We denote this model by T_2 since, in addition to the usual D_1, D_2, D_3 singlets, we also introduce the singlets b_1, b_2. The superpotential is

W_2 = W_1 + b_2 Tr_g Tr_2(Q_2 Q_2), (3.16)

where W_1 is given in (3.12). The matter content of the theory is summarized in the quiver diagram of figure 3, but the full manifest UV global symmetry is actually SU(4) × SU(2)_3 × SU(2)_4 × U(1)^2. In particular, D_3 and b_1, b_2 form the antisymmetric representation of SU(4), which flips the mesonic operators corresponding to the operator (6,1,1)_{2,2} of section 2.2. Indeed, as can be seen by combining the data contained in table 1 and the map of the charges (3.4), their charges are compatible with those expected from the branching rule (2.27). The singlets D_1, D_2 also recombine to form the bifundamental representation of SU(4) × SU(2)_3, which flips the mesonic operators corresponding to the operator (4,2,1)_{−1,2}. Performing a-maximization we find the approximate values of the mixing coefficients. Notice that equal R_1 and R_2 imply that U(1)_t doesn't mix with the R-symmetry, which is consistent with the fact that U(1)_t is part of the nonabelian symmetry SU(4). In the index computed with this R-symmetry, each number is the character of an SO(10) representation and, in particular, the term −(45 + 2)pq highlighted in blue reflects the current multiplet, which is in the adjoint representation of SO(10) × U(1)^2.

Let's now discuss the self-duality frames responsible for the enhancement. The size of the Weyl group of SO(10) is |W(SO(10))| = 5! · 2^4, of which W(SU(4) × SU(2)^2) = 4! 2! 2! is manifest. Thus, 20 = 5! · 2^4/(4! 2! 2!) self-dual frames, including the original theory itself, are expected. Also in this case, to find the self-dual frames we specialize the SU(8) fugacities vector u defined in (A.2) according to the breaking of SU(8), where the w_i are the SU(4) fugacities subject to the constraint ∏_{i=1}^4 w_i = 1. In this case we collect the subset of the transformations (A.8), (A.12), (A.5), corresponding to the Seiberg, CSST and IP dualities respectively, for which we get a transformed vector associated to a collection of chirals with the same R-charge and U(1)_{u,v} charges, as well as the same set of singlets. In the end we have found that the 20 self-dual frames are realized by the dualities listed in (3.24).
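The frame counting used for T_0, T_1 and T_2 is again simple arithmetic on Weyl group orders; a minimal check (our own illustration) in Python:

```python
from math import factorial

# |W(SO(2r))| = r! * 2^(r-1); |W(SU(n))| = n!; |W(SU(2)^k)| = 2^k
w_so8 = factorial(4) * 2**3                    # 192
w_so10 = factorial(5) * 2**4                   # 1920

# SO(8) x U(1)^3 models: manifest SU(2)^4 Weyl group
assert w_so8 // 2**4 == 12                     # 12 self-dual frames

# SO(10) x U(1)^2 model: manifest SU(4) x SU(2)^2 Weyl group
assert w_so10 // (factorial(4) * 2 * 2) == 20  # 20 self-dual frames
```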
Also in this case the singlet b_4 is a spectator from the point of view of the self-dualities (3.24), so the theory T̃_2, where we also turn on the b_4 interaction, enjoys the same SO(10) × U(1)^2 enhancement.

E6 × U(1) model

The last model we consider is the one exhibiting the E6 × U(1) symmetry enhancement of [22]. Following the same nomenclature as in the previous cases, we call this theory T̃_3 since, on top of the singlets D_1, D_2, D_3, we also introduce all the singlets b_1, b_2, b_3, b_4, with superpotential

W̃_3 = W_2 + b_3 Tr_g Tr_3(Q_3 Q_3) + b_4 Tr_g Tr_4(Q_4 Q_4),

where W_2 is given in (3.16). In this case we have to include also the singlet b_4, since otherwise the operator Tr_g Tr_4(Q_4 Q_4) would be free in the IR. (Performing a-maximization in the theory without the singlet b_4 we would find values of the mixing coefficients for which the operator Tr_g(Q_4 Q_4) is below the unitarity bound, meaning that it becomes a decoupled free field in the IR.) The matter content of the theory is summarized in the quiver diagram of figure 4, but the full manifest UV global symmetry is actually SU(6) × SU(2)_4 × U(1), since SU(2)_1, SU(2)_2, SU(2)_3 and two U(1)'s out of U(1)^3, specifically U(1)_t and U(1)_u, recombine into SU(6). In particular, the singlets D_1, D_2, D_3 and b_1, b_2, b_3 form the antisymmetric representation of SU(6), which flips the corresponding mesonic operators. Performing a-maximization we find mixing coefficients indicating that U(1)_t and U(1)_u do not mix with the R-symmetry, because they are part of the nonabelian symmetry SU(6). In the index computed with this R-symmetry, each number is the character of an E6 representation and, in particular, the term −(78 + 1)pq highlighted in blue reflects the current multiplet, which is in the adjoint representation of E6 × U(1). (We choose to parametrize the U(1) symmetry that doesn't participate in the enhancement by U(1)_v.)

In [22] this enhancement was explained by studying the self-dualities of the model, similarly to what we did for the previous cases. The size of the Weyl group of E6 is |W(E6)| = 51840, of which |W(SU(6) × SU(2))| = 6! × 2! is manifest. The remaining 36 = 51840/(6! 2!) should be realized as self-dualities. In order to determine which of the 72 frames correspond to self-dualities, we specialize the SU(8) fugacities vector u defined in (A.2) according to the breaking of SU(8), where the w_i are the SU(6) fugacities subject to the constraint ∏_{i=1}^6 w_i = 1. In this case we collect the subset of the transformations (A.8), (A.12), (A.5), corresponding to the Seiberg, CSST and IP dualities respectively, for which we get a transformed vector associated to a collection of chirals with the same R-charge and U(1)_v charges, as well as the same set of singlets. In the end we have found that the 36 self-dual frames are realized by suitable combinations of these dualities.

Symmetry breaking deformations

So far we considered singlets b_i coupling to the SU(2) flavor singlet mesons Tr_g Tr_i(Q_i Q_i), which ensures that the manifest symmetry includes SU(N) groups. On the other hand, we can introduce extra deformations of the form b_i Tr_g Tr_j(Q_j Q_j) with i ≠ j. In the E6 × U(1) model T̃_3 all the SU(2) flavor singlet mesons are already flipped, and thus trivial in the chiral ring, so this model cannot be deformed in this way. We will thus focus on the deformations of the SO(10) × U(1)^2 model T_2 and of the SO(8) × U(1)^3 model T_1 (the theory T_0 can't be deformed in this way, as it doesn't contain any of the b_i singlets). We begin with the T_2 model. Looking at W_2 in eq. (3.16), we see that Tr_g Tr_1(Q_1 Q_1) and Tr_g Tr_2(Q_2 Q_2) are trivial in the chiral ring while Tr_g Tr_3(Q_3 Q_3) is not.
Thus, we can obtain a new theory T'_2 by introducing the stable deformation term

ΔW = b_i Tr_g Tr_3(Q_3 Q_3), (3.33)

where i can be either 1 or 2. For definiteness we will take i = 2, and we shall denote the deformed theory by T'_2. This deformation breaks the manifest UV symmetry SU(4) × SU(2)_3 × SU(2)_4 × U(1)^2 of T_2 to SU(2)_1 × USp(4) × SU(2)_4 × U(1)^2. This can be seen as follows. The deformation (3.33) breaks one combination of the U(1) symmetries. Specifically, at the level of fugacities it imposes a constraint which means that a combination of U(1)_{x_2,x_3}, or equivalently of U(1)_{t,u}, is broken. We decide to parametrize the surviving combination of these two U(1)'s, which we shall denote by U(1)_ũ, as in (3.37), or at the level of the charges as in (3.38). One can then check, for example, that the fields Q_2 and Q_3 have the same R-charge and U(1)_{ũ,v} charges and can thus be organized into the fundamental representation of USp(4). Similarly, the singlets D_1 and b_2 can be organized into the traceless antisymmetric representation of USp(4). Finally, the singlets D_2 and D_3 can be collected into the bifundamental representation of SU(2)_1 × USp(4).

Similarly to what we did in the previous subsection, one can check that out of the 20 self-dualities (3.24) of the original T_2 theory only 12 map ΔW to itself and remain self-dualities of the deformed theory. These correspond exactly to the self-dualities (3.11) of the SO(8) models. Therefore, the expected size of the Weyl group of the enhanced symmetry is now |W(SU(2)_1 × USp(4) × SU(2)_4)| × 12 = (2! · 2! 2^2 · 2!) × 12 = 384, which is precisely the order of the Weyl group of SO(9). Thus, we expect that the SO(10) × U(1)^2 model T_2 deformed by (3.33) has SO(9) × U(1)^2 IR symmetry, with the UV symmetry SU(2)_1 × USp(4) × SU(2)_4 recombining into SO(9) in the IR. This is confirmed by the superconformal index expansion. Performing a-maximization we find the approximate values of the mixing coefficients, and the index computed with this R-symmetry is given in (3.42). (We choose to parametrize the U(1) symmetries which don't participate in the enhancement by U(1)_{ũ,v}.) Each number is the character of an SO(9) representation and, in particular, the term −(36 + 2)pq highlighted in blue reflects the current multiplet, which is in the adjoint representation of SO(9) × U(1)^2. Thus, the deformation (3.33) added to the SO(10) × U(1)^2 model T_2 leads to a new theory exhibiting the IR symmetry enhancement SU(2)_1 × USp(4) × SU(2)_4 × U(1)^2 → SO(9) × U(1)^2.

We can further deform the T'_2 theory. From W'_2 in eq. (3.33) we see that one linear combination of Tr_g Tr_2(Q_2 Q_2) and Tr_g Tr_3(Q_3 Q_3) is flipped by b_2, but there is another linearly independent combination, say Tr_g Tr_3(Q_3 Q_3), which is non-trivial in the chiral ring. Thus, we can obtain a new theory T''_2 by introducing another stable deformation term of the same form, (3.44), where W'_2 is given in (3.33). This deformation breaks the manifest UV symmetry SU(2)_1 × USp(4) × SU(2)_4 × U(1)^2 further. This can be seen as follows. The deformation (3.44) breaks one combination of the two U(1) symmetries. Specifically, at the level of fugacities it imposes a constraint which means that the U(1)_ũ symmetry is broken. Hence, the surviving abelian symmetry is U(1)_v, which we recall was defined in (3.4)-(3.5). One can then check, for example, that the fields Q_{2,3}, which we already collected into the fundamental representation of USp(4), also have the same R-charge and U(1)_v charge as Q_1, and we can thus organize them all into the fundamental representation of USp(6). Similarly, the singlets D_1, D_2, D_3 and b_1, b_2 can be organized into the traceless antisymmetric representation of USp(6).
The manifest Weyl group is W(USp(6) × SU(2)), whose size is 3! × 2^3 × 2!. In addition, all the 12 self-dualities of the SO(9) × U(1)^2 model still remain self-dualities after the deformation (3.44). The expected size of the Weyl group of the enhanced symmetry is then (3! · 2^3 · 2!) × 12 = 1152, which is precisely the order of the Weyl group of F4. This is confirmed by the superconformal index expansion. Performing a-maximization we find the approximate value of the mixing coefficient, and the index computed with this R-symmetry is

I = · · · + (324 + 1) v^{−4} (pq)^{17/19} − (52 + 1) pq + · · · . (3.49)

Each number is the character of an F4 representation and, in particular, the term −(52 + 1)pq highlighted in blue reflects the current multiplet, which is in the adjoint representation of F4 × U(1). Thus, the deformation (3.44) added to the SO(9) × U(1)^2 model T'_2 leads to a new theory exhibiting the IR symmetry enhancement USp(6) × SU(2) × U(1) → F4 × U(1).

Note that T''_2 can also be obtained from T̃_3 by integrating out the trace part of the antisymmetric representation of the flavor SU(6) constituted by (D_1, D_2, D_3, b_1, b_2, b_3), which breaks SU(6) into USp(6). This can be done by introducing an additional singlet c with a superpotential ΔW = c (b_1 + b_2 + b_3), which makes both c and the trace part b_1 + b_2 + b_3 massive. Once those massive fields are integrated out, the resulting theory is exactly T''_2 with the superpotential (3.44). The enhanced IR symmetry E6 × U(1) is then partially broken to F4 × U(1).

SO(8) → SO(9) → SO(8) deformations

Now we consider the T_1 model with the SO(8) × U(1)^3 symmetry. While the SO(8) × U(1)^3 symmetry can be realized either with or without b_1, we stick to the model with b_1 because we need at least one b_i field to deform the theory. Looking at W_1 in eq. (3.12), we can see that both Tr_g Tr_2(Q_2 Q_2) and Tr_g Tr_3(Q_3 Q_3) are non-trivial in the chiral ring. Thus, we can obtain a new theory T'_1 by introducing the stable deformation term

ΔW = b_1 Tr_g Tr_i(Q_i Q_i), (3.51)

where i can be either 2 or 3. For definiteness we will take i = 2, and we shall denote the deformed theory by T'_1. This deformation modifies the manifest UV symmetry SU(2)^4 × U(1)^3 to USp(4) × SU(2)_3 × SU(2)_4 × U(1)^2. This can be seen as follows. The deformation (3.51) breaks one combination of the three U(1) symmetries. Specifically, at the level of fugacities it imposes a constraint which means that the symmetry U(1)_t is broken. Hence, the surviving abelian symmetries are U(1)_{u,v}, which we recall were defined in (3.4)-(3.5). One can then check, for example, that the fields Q_1 and Q_2 have the same R-charge and U(1)_{u,v} charges and can thus be organized into the fundamental representation of USp(4). Similarly, the singlets D_3 and b_1 can be organized into the traceless antisymmetric representation of USp(4). Finally, the singlets D_1 and D_2 can be collected into the bifundamental representation of USp(4) × SU(2)_3.

The Weyl group W(USp(4) × SU(2)^2) of the manifest symmetry is of size 2! × 2^2 × 2! × 2!. Similarly to what we did in the previous subsection, one can check that all of the 12 self-dualities (3.11) of the original T_1 theory are still self-dualities of the deformed theory. Therefore, the expected size of the Weyl group of the enhanced symmetry is now (2! · 2^2 · 2! · 2!) × 12 = 384. (3.55) Thus, we expect that the SO(8) × U(1)^3 model T_1 deformed by (3.51) has IR symmetry SO(9) × U(1)^2. This is confirmed by the superconformal index expansion.
Performing a-maximization we find the approximate values of the mixing coefficients (3.56), and the index computed with this R-symmetry is given in (3.57). Each number is the character of an SO(9) representation and, in particular, the term −(36 + 2)pq highlighted in blue reflects the current multiplet, which is in the adjoint representation of SO(9) × U(1)^2. Thus, the deformation (3.51) added to the SO(8) × U(1)^3 model T_1 leads to a new theory exhibiting the IR symmetry enhancement USp(4) × SU(2)_3 × SU(2)_4 × U(1)^2 → SO(9) × U(1)^2. This T'_1 model can also be obtained from T_2 by integrating out the trace part of the antisymmetric representation of the flavor SU(4) constituted by (D_1, D_2, b_1, b_2), which breaks SU(4) into USp(4). This also partially breaks the enhanced IR symmetry SO(10) × U(1)^2 into SO(9) × U(1)^2.

We can further deform the T'_1 theory. From W'_1 in eq. (3.51) we see that the operator Tr_g Tr_3(Q_3 Q_3) is still non-trivial in the chiral ring. Thus, we can obtain a new theory T''_1 by introducing another stable deformation term

ΔW = b_1 Tr_g Tr_3(Q_3 Q_3), (3.59)

where W'_1 is given in (3.51). (Using the superconformal R-charge of the undeformed theory we can check that this deformation has R-charge R[b_1 Tr_g Tr_3(Q_3 Q_3)] ≈ 1.95598 < 2, so it is a relevant deformation.) This deformation breaks the manifest UV symmetry USp(4) × SU(2)_3 × SU(2)_4 × U(1)^2 further. Indeed, the deformation (3.59) breaks one combination of the two U(1) symmetries. Specifically, at the level of fugacities it imposes a constraint which means that the U(1)_u symmetry is broken. Hence, the surviving abelian symmetry is U(1)_v, which we recall was defined in (3.4)-(3.5). The superpotential now doesn't preserve the USp(4) symmetry, since the singlet b_1, which in T'_1 formed a USp(4) representation together with D_3, now appears (without D_3) in the deformation (3.59). The manifest Weyl group is W(SU(2)^4), whose size is (2!)^4. In addition, all the 12 self-dualities of the SO(9) × U(1)^2 model T'_1 still remain self-dualities after the deformation (3.59). The expected size of the Weyl group of the enhanced symmetry is then (2!)^4 × 12 = 192, which is the order of the Weyl group of SO(8). Thus, we expect the SO(9) × U(1)^2 model T'_1 deformed by (3.59) to have the IR symmetry SO(8) × U(1).

Indeed, we have confirmed it by looking at the superconformal index. Performing a-maximization we find the approximate value of the mixing coefficient (3.63), and the index computed with this R-symmetry, given in (3.64), contains the term −(28 + 1)pq at order pq. Each number is the character of an SO(8) representation and, in particular, this term, highlighted in blue, reflects the current multiplet, which is in the adjoint representation of SO(8) × U(1). Thus, the deformation (3.59) added to the SO(9) × U(1)^2 model T'_1 leads to a new theory exhibiting the IR symmetry enhancement SU(2)^4 × U(1) → SO(8) × U(1).

Higher rank theories

In section 2 we argued that enhanced symmetries are expected for USp(2N) theories once we flip a set of operators breaking the SU(8) UV global symmetry to particular subgroups. Since the argument holds regardless of the gauge rank, we have an infinite family of theories for a given enhanced symmetry in the IR. For higher rank theories there can be multiple operators in a given representation of the global symmetry, and we need to flip all of them to realise the enhanced symmetry. In this section we show how this works explicitly in some examples. We consider 4d N=1 USp(2N) theories with one antisymmetric chiral A and 8 fundamental chirals, together with extra gauge singlets D^i_{1,2,3} and a subset of the b^i_{1,2,3,4}, for i = 0, ..., N−1, whose global charges are shown in table 2.
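Before moving on to the higher rank models, the Weyl-group arithmetic behind the deformation chains of section 3.2 can be collected in one place; since the counting holds regardless of the rank, the same checks apply below. This is our own numerical recap:

```python
from math import factorial

# Weyl group orders: |W(SO(2r+1))| = |W(USp(2r))| = r! * 2^r,
# |W(SO(2r))| = r! * 2^(r-1); exceptional: |W(E6)| = 51840, |W(F4)| = 1152.
w_usp4 = factorial(2) * 2**2   # 8
w_usp6 = factorial(3) * 2**3   # 48
w_so9 = factorial(4) * 2**4    # 384

# E6 model: manifest W(SU(6) x SU(2)) times 36 self-dual frames
assert 51840 == factorial(6) * factorial(2) * 36

# SO(9) models: manifest W(SU(2) x USp(4) x SU(2)) times 12 self-dualities
assert w_so9 == (2 * w_usp4 * 2) * 12

# F4 model: manifest W(USp(6) x SU(2)) times the same 12 self-dualities
assert 1152 == (w_usp6 * 2) * 12

# Final SO(8) x U(1) model: manifest W(SU(2)^4) times 12
assert factorial(4) * 2**3 == 2**4 * 12
```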
The D^i_{1,2,3} singlets couple to the fundamental chirals via the superpotential terms (4.1), while the b^i_{1,2,3,4} singlets, if present, couple through the terms (4.2). In such higher rank cases we can also construct gauge invariant operators of the form Tr_g A^i. These turn out to fall below the unitarity bound in all the examples we are going to consider, so we will also need additional singlets a_i that flip them through the superpotential terms a_i Tr_g A^i (4.3). Similarly to the rank-one case, we will denote by T^N_i with i = 1, 2, 3 the theory where the interactions involving respectively b_1, or b_1, b_2, or b_1, b_2, b_3 are turned on, and by T̃^N_i the theory where the interaction involving b_4 is also turned on; the upper index now denotes the rank of the gauge group.

E6 × U(1)^2 model

In section 2.1 we have shown that in order to realize the E6 symmetry we have to flip the operators either in the representation (6,2)_{−2} or in the representation (15,1)_2 of SU(6) × SU(2) × U(1)_v ⊂ SU(8). For definiteness we may take the latter, which are then given by the corresponding towers of mesonic operators dressed by powers of the antisymmetric A. Furthermore, we also flip the operators Tr_g A^i, because those with low powers of A violate the unitarity bound. Thus, we introduce in total the flipping fields listed in (4.6), with the superpotential given in (4.1)-(4.2)-(4.3). In the following we will denote this theory by T̃^N_3, where the lower index represents the number of towers of b-singlets included, while the upper index is the rank of the gauge group. Once those operators are flipped, the manifest UV symmetry is given by SU(6) × SU(2)_4 × U(1)^2, where, similarly to what we discussed in section 3.1.3 for the rank-one case, SU(6) is formed by SU(2)_1 × SU(2)_2 × SU(2)_3 and the two combinations of U(1)_{x_1,x_2,x_3} corresponding to the U(1)_{t,u} symmetries we defined in (3.4)-(3.5). According to the argument in section 2.1, this is supposed to be enhanced in the IR to E6 × U(1)_v × U(1)_a.

This enhancement can be checked by computing the superconformal index of the theory for low values of N, for example N = 2. In this case, performing a-maximization we find the approximate values of the mixing coefficients of the R-symmetry with the abelian symmetries. In the superconformal index of the theory computed with these R-charges, each number is the character of an E6 representation and, in particular, the term −(78 + 2)pq highlighted in blue reflects the current multiplet, which is in the adjoint representation of E6 × U(1)^2. It is worth mentioning that the existence of the N = 2 copies of the gauge singlets listed in (4.6) is crucial to realize the E6 symmetry. Let us look at the second term, 27 v^{−2} a^{−1/2} (pq)^{69/176}: the 27 is decomposed into representations of SU(6) × SU(2)_4 according to the relevant branching rules.

SO(10) × U(1)^3 model

In order to realize the SO(10) symmetry we instead flip the operators discussed in section 2.2, introducing the corresponding towers of flipping fields. Once those flipping fields are taken into account, the UV symmetry is broken to SU(4) × SU(2)_3 × SU(2)_4 × U(1)^3, where, similarly to what we discussed in section 3.1.2 for the rank-one case, SU(4) is formed by SU(2)_1 × SU(2)_2 and a combination of the abelian symmetries. According to the argument in section 2.2, this is supposed to be enhanced in the IR to SO(10) × U(1)^3. This enhancement of the global symmetry can be checked using the superconformal index for low values of the rank of the gauge group. For instance, for N = 2 we find the approximate values of the mixing coefficients from a-maximization. The superconformal index of the theory computed with these R-charges then reads
I = · · · + a^{−2}(pq)^{11/15} + · · · − (45 + 3)pq + · · · . (4.25)

Each number is the character of an SO(10) representation and, in particular, the term −(45 + 3)pq highlighted in blue reflects the current multiplet, which is in the adjoint representation of SO(10) × U(1)^3.

Analogously to the SU(2) case, one can deform the theory by introducing extra superpotential terms of the form b^i_m Tr_g Tr_n(A^i Q_n Q_n) with m ≠ n. The first term we introduce is Σ_i b^i_2 Tr_g Tr_3(A^i Q_3 Q_3), and we label the theory obtained from this deformation as T'^N_2. Similarly to what happened in the rank-one case of section 3.2.1, this deformation breaks the global symmetry of theory T^N_2 to SU(2)_1 × USp(4) × SU(2)_4 × U(1)^3, where again we parametrize the two surviving combinations of U(1)_{x_1,x_2,x_3} by the U(1)_{ũ,v} defined in (3.37)-(3.38). Note, for example, that D^i_1 and b^i_2 form the traceless antisymmetric representation of USp(4). Given the approximate mixing coefficients of the abelian symmetries with the R-symmetry for rank N = 2, each number in the resulting index is the character of an SO(9) representation and, in particular, the term −(36 + 3)pq highlighted in blue reflects the current multiplet, which is in the adjoint representation of SO(9) × U(1)^3.

We should comment that the index can also be written in terms of SO(8) characters, because any SO(9) representation can be decomposed into SO(8) representations. In that case, the term (9 − 36 − 3)pq would be written as (−28_{SO(8)} − 2)pq, where 28_{SO(8)} is the character of the adjoint representation of SO(8). Indeed, if no marginal operator in the 9 could be constructed, the conserved current would only need to fill the representation 28_{SO(8)} and the enhanced non-abelian symmetry would be SO(8) rather than SO(9). Thus, we have to show that marginal operators in the representation 9 exist in order to claim that the enhanced symmetry includes SO(9).

Let us list the operators contributing at order pq, except for some boson-fermion pairs that trivially cancel. We first define single trace operators organized into representations of the manifest symmetry SU(2)_1 × USp(4) × SU(2)_4, where we omit the charges under the abelian symmetries; among them are operators in the representation (1,4,2). Now let us look at the bosonic operators M_0 Π_1, M_1 Π_0, L_0 P_1 and L_1 P_0, which satisfy the relations (4.32) as well as the F-term conditions realized by parts of the fermionic operators σ_0 Π_1, σ_1 Π_0, τ_0 P_0 and τ_1 P_1. The only remaining independent combination is in the (2,1,2), which cannot be lifted because there is no fermionic operator for it to pair up with. Similarly, M_0 Σ_0 and M_1 Σ_1 can be paired up with σ_0 Σ_0 and σ_1 Σ_1 respectively. Note that those cancellations reflect the F-term conditions from the superpotential. The remaining operators in (4.34) are in the representation 2 × (1,5,1). One combination of them can become a long multiplet, being paired up with the traceless antisymmetric part of μ. On the other hand, the other combination remains short and combines with the remaining (2,1,2) of M_0 Π_1, M_1 Π_0, L_0 P_1, L_1 P_0 into the 9 of SO(9). Therefore, we have found 9 marginal operators, consisting of (2,1,2) from M_0 Π_1, M_1 Π_0, L_0 P_1, L_1 P_0 and (1,5,1) from (4.34). Moreover, the remaining fermionic operators constitute the supersymmetric partners of the conserved current in the adjoint representation of SO(9).

The second deformation we introduce is Σ_i b^i_1 Tr_g Tr_3(A^i Q_3 Q_3), and we label the theory obtained from this deformation as T''^N_2. Similarly to what happened in the rank-one case of section 3.2.1, due to this deformation U(1)_ũ is broken whereas SU(2)_1 × USp(4) gets enhanced to USp(6).
Thus, the entire UV global symmetry is now given by USp(6) × SU(2)_4 × U(1)_v × U(1)_a, where once again U(1)_v is defined as in (3.4)-(3.5). Note, for example, that D^i_1, D^i_2, D^i_3 and b^i_1, b^i_2 are organized into the traceless antisymmetric representation of USp(6). Given the approximate mixing coefficients of the abelian symmetries with the R-symmetry of the theory for rank N = 2, we compute the superconformal index for N = 2: each number is the character of an F4 representation and, in particular, the term −(52 + 2)pq highlighted in blue reflects the current multiplet, which is in the adjoint representation of F4 × U(1)^2.

Again we need to check whether the marginal operators in the representation 26 really exist. We first define single trace operators organized into representations of the manifest symmetry USp(6) × SU(2)_4, where we use the same names for the operators as in the previous SO(9) case. The operators contributing at order pq are then given in (4.40) and (4.41); in appendix C we check this by looking at the plethystic logarithm of the superconformal index. Comparing (4.40) and (4.41), we find that there are exactly 26 bosonic operators in the representation (6,2) + (14,1) that cannot be paired up with any of the fermionic operators in (4.41). Therefore, the theory has marginal operators in the representation (6,2) + (14,1), which is the 26 of F4. Moreover, the remaining fermionic operators constitute the supersymmetric partners of the conserved current in the adjoint representation of F4 × U(1)^2.

SO(8) × U(1)^4 models and SO(8) → SO(9) → SO(8) deformations

The last two examples are the models exhibiting the SO(8) × U(1)^4 global symmetry. As we discussed in section 2.3, in order to obtain this model we need to flip, for example, the operators in (2,2,1,1)_{0,2,2} + (1,2,2,1)_{−1,−1,2} + (2,1,2,1)_{1,−1,2} + (1,1,1,1)_{2,2,2}, which are representations of SU(2)^4 × U(1)_t × U(1)_u × U(1)_v ⊂ SU(8). The corresponding operators are the mesonic operators dressed by powers of A, for i = 0, ..., N−1 respectively. In addition, we need to flip the operators Tr_g A^i, because those with low i violate the unitarity bound. Thus, we introduce the flipping fields listed in (4.45), with the superpotential given in (4.1)-(4.2)-(4.3). We denote this theory by T^N_1. Once those flipping fields are taken into account, the UV symmetry is broken to SU(2)^4 × U(1)_t × U(1)_u × U(1)_v × U(1)_a, which is supposed to be enhanced in the IR to SO(8) × U(1)^4, where U(1)_{t,u,v} are again defined as in (3.4)-(3.5). This enhancement of the global symmetry can be checked using the superconformal index for low values of the rank of the gauge group. For instance, for N = 2 we find the approximate values of the mixing coefficients from a-maximization (4.48), and the superconformal index of the theory computed with these R-charges then reads

I = · · · + a^{−2}(pq)^{8/11} + · · · + (8_c u^{−3} t + 8_c u^3 t^{−1} − 28 − 4)pq + · · · . (4.49)

Each number is the character of an SO(8) representation and, in particular, the term −(28 + 4)pq highlighted in blue reflects the current multiplet, which is in the adjoint representation of SO(8) × U(1)^4.

Analogously to the SU(2) case, one can deform the theory by introducing extra superpotential terms of the form b^i_1 Tr_g Tr_l(A^i Q_l Q_l). The first term we introduce is Σ_i b^i_1 Tr_g Tr_2(A^i Q_2 Q_2), and we label the theory obtained from this deformation as T'^N_1. Similarly to what happened in the rank-one case of section 3.2.2, this deformation breaks U(1)_t and makes SU(2)_1 × SU(2)_2 recombine into USp(4). Therefore, the manifest symmetry is now given by USp(4) × SU(2)_3 × SU(2)_4 × U(1)_u × U(1)_v × U(1)_a, where as usual U(1)_{u,v} are defined in (3.4)-(3.5).
Note, for example, that D^i_1 and b^i_1 form the traceless antisymmetric representation of USp(4). Given the approximate mixing coefficients of U(1)_{x_1} × U(1)_{x_2} × U(1)_{x_3} × U(1)_a with the R-symmetry of the theory for rank N = 2 (4.52), the superconformal index for N = 2 can be computed. (We choose to parametrize the U(1) symmetries which don't participate in the enhancement by U(1)_{u,v,a}.) Each number is the character of an SO(9) representation and, in particular, the term −(36 + 3)pq highlighted in blue reflects the current multiplet, which is in the adjoint representation of SO(9) × U(1)^3. Note that there exist marginal operators in the representation 9, which can be explicitly constructed in a way similar to the SO(9) × U(1)^3 model of the previous subsection.

The second deformation we introduce is Σ_i b^i_1 Tr_g Tr_3(A^i Q_3 Q_3), and we label the theory obtained from this deformation as T''^N_1. (Using the superconformal R-charge of theory T'^{N=2}_1 we can check that this deformation has R-charge R[b^i_1 Tr_g Tr_3(A^i Q_3 Q_3)] ≈ 1.96449 < 2 for i = 0, 1, so it is a relevant deformation.) Similarly to what happened in the rank-one case of section 3.2.2, this deformation breaks U(1)_u as well as USp(4) into SU(2)_1 × SU(2)_2. Hence, the entire UV global symmetry is now given by SU(2)^4 × U(1)_v × U(1)_a, where as usual U(1)_v is defined in (3.4)-(3.5). Given the approximate mixing coefficients of U(1)_1 × U(1)_2 × U(1)_3 × U(1)_a with the R-symmetry of the theory of rank N = 2, we have the superconformal index for N = 2 as in (4.57). The remaining fermionic operators constitute the supersymmetric partners of the conserved current in the adjoint representation of SO(8) × U(1)^2.

We conclude by considering the other model exhibiting the SO(8) × U(1)^4 IR global symmetry. Indeed, similarly to what happens in the rank-one case of section 3.1.1, the operators Tr_g Tr_1(A^i Q_1 Q_1) in the representation (1,1,1,1)_{2,2,2} of SU(2)^4 × U(1)_t × U(1)_u × U(1)_v ⊂ SU(8) are spectators from the point of view of the SO(8) enhancement. Hence, we expect that the theory T^N_0, which includes the same singlets (4.45) that define theory T^N_1 except for the b^i_1 fields, will exhibit the same enhancement. More precisely, the manifest UV symmetry is supposed to be enhanced in the IR to SO(8) × U(1)^4, where U(1)_{t,u,v} are again defined as in (3.4)-(3.5). This enhancement of the global symmetry can be checked using the superconformal index for low values of the rank of the gauge group. For instance, for N = 2 we find the approximate values of the mixing coefficients from a-maximization, and the superconformal index of the theory computed with these R-charges then reads as in (4.65). Each number is the character of an SO(8) representation and, in particular, the term −(28 + 4)pq highlighted in blue reflects the current multiplet, which is in the adjoint representation of SO(8) × U(1)^4.

A The action of the dualities on the global symmetries

In this appendix we consider the USp(2N) gauge theory with 8 fundamental chirals and one traceless antisymmetric chiral. This theory enjoys three different types of dualities [1], which are generalizations of the N = 1 Intriligator-Pouliot [4], Seiberg [2] and Csaki-Schmaltz-Skiba-Terning [3] dualities. The generalization of the Intriligator-Pouliot duality first appeared in [5]. The dual theory is still a USp(2N) gauge theory with one antisymmetric chiral and 8 fundamental chirals q_a, but in addition we have 28N gauge singlet chiral fields M_{ab;i}, with a < b = 1, ..., 8 and i = 1, ..., N, interacting through a suitable superpotential. The action of the duality on the global symmetries can be easily expressed in terms of the fugacities we introduced in (A.2).
Accordingly, we have the corresponding operator map.

The generalization of the Seiberg duality breaks the manifest SU(8)_v symmetry to the subgroup SU(4)^2 × U(1) in the dual frame. Indeed, the dual theory is again a USp(2N) gauge theory with one antisymmetric and 8 fundamental chirals, but now the fundamentals are naturally divided into two groups of four that we denote by q_a and p_b, with a, b = 1, ..., 4. This is because we also have additional 16N gauge singlets M_{ab;i}, with a, b = 1, ..., 4 and i = 1, ..., N, interacting through a suitable superpotential. The action of the duality on the global symmetries can again be expressed in terms of the fugacities we introduced in (A.2). In order to do so, we have to make a choice of how to break SU(8)_v to the subgroup SU(4)^2 × U(1), which is equivalent to choosing how to split the 8 chirals Q_a of the original theory into two groups of four. The most intuitive option is to split Q_{1,2,3,4} from Q_{5,6,7,8}. With this choice we obtain the transformation of the fugacities, where we defined u^4_+ = ∏_{a=1}^4 u_a and u^4_− = ∏_{a=5}^8 u_a. Accordingly, we have the corresponding operator map. (Here and in the following we use the fact that the two-index antisymmetric representation of SU(4) is real, so we can freely lower its indices; see (A.9) for the explicit expression.) Clearly, this is not the unique choice for splitting 8 chirals into two groups of four: in total we have (1/2)(8 choose 4) = 35 different possibilities, which give rise to inequivalent dual frames.

Finally, the generalization of the Csaki-Schmaltz-Skiba-Terning duality also breaks the manifest SU(8)_v symmetry to the subgroup SU(4)^2 × U(1) in the dual frame. Indeed, the dual theory is once more a USp(2N) gauge theory with one antisymmetric and 8 fundamental chirals, where the fundamentals are naturally divided into two groups of four that we denote by q_a and p_b, with a, b = 1, ..., 4. This time this is due to the presence of additional 12N gauge singlets μ^{ab}_i, ν^{ab}_i, with a, b = 1, ..., 4 and i = 1, ..., N, interacting through a suitable superpotential. Also in this case, in order to express the action of the duality on the global symmetries in terms of the fugacities of (A.2), we have to decide how to split the 8 chirals Q_a of the original frame into two groups of four. Using the most natural decomposition into Q_{1,2,3,4} and Q_{5,6,7,8}, we have the transformation where, recall, we defined u^4_+ = ∏_{a=1}^4 u_a and u^4_− = ∏_{a=5}^8 u_a. Accordingly, we have the corresponding operator map.

B The IR behavior of the FE[USp(4)] theory

In this appendix we discuss the FE[USp(4)] theory, which belongs to the family introduced in [14]. The FE[USp(2N)] theories were shown to enjoy USp(2N) × USp(2N) × U(1)^2 global symmetry, with one of the USp(2N) factors emerging in the IR from SU(2)^N. As we will see, for N = 2 we have a further enhancement, with USp(4) × USp(4) recombining into SO(10). (See also [26, 29] for discussions of the more general FE[USp(2N)] theory and applications in models with global symmetry enhancements.) FE[USp(4)] is an N=1 SU(2) gauge theory with 8 fundamental chiral fields and 15 gauge singlets, so it belongs to the class of theories studied in the main text. Following conventions similar to [30], we split the fundamental chirals into three groups that we denote by D^(1)_α, V_α and q_a, and the gauge singlets into four other groups that we denote by b, D_{αa}, A and O_{H,[ab]}, with α = 1, 2 and a = 1, ..., 4, and with O_H such that Tr_4 O_H = 0.

Table 3. Transformation rules under the global symmetries and the trial R-symmetry U(1)_{R_0} of all the chiral fields of the theory.
The superpotential is

W = A Tr_g Tr_x(q q) + Tr_g Tr_x(O_H q q) + Tr_g Tr_x Tr_{y_2}(V q D^(2)) + b Tr_g Tr_{y_1}(D^(1) D^(1)), (B.1)

where Tr_x and Tr_{y_i} are the traces over the USp(4)_x and SU(2)_{y_i} flavor indices respectively. The matter content is summarized in the quiver diagram of Figure 5. The non-anomalous manifest global symmetry includes USp(4)_x, the SU(2)_{y_i} symmetries and U(1)_c × U(1)_t. In [14] it was argued that the symmetries SU(2)_{y_1} × SU(2)_{y_2} of the saw are enhanced to a second USp(4)_y symmetry in the IR. In order to detect the enhancement we can compute the superconformal index of FE[USp(4)]. This can be done using the assignment of charges for the chiral fields summarized in Table 3. With this parametrization, we get from a-maximization the values for the mixing coefficients of U(1)_c and U(1)_t with U(1)_{R_0}; in computing the index we approximate these mixing coefficients with rational values. We can then see at order pq the contribution of the SO(10) conserved current: indeed we find −45 − 2, which is the character of the adjoint representation of SO(10) × U(1)^2.

This enhancement of the global symmetry is quite peculiar, as USp(4)^2 is a maximal subgroup of SO(10) but its rank is lower than that of SO(10). This curious behaviour can be explained as follows. It is useful to work in a basis of fields that makes manifest the SU(2)^2 subgroup of USp(4), similarly to what we did in section 3. Specifically, we split the four fundamental chirals q into two groups of two that we denote by Q_1 and Q_2. In these new conventions, the first two terms in the superpotential (B.1) read as in (B.5), where we write explicitly all the traces Tr_x, Tr_{y_1}, Tr_{y_2} over the USp(4), SU(2)_{y_1}, SU(2)_{y_2} flavor indices and we omit Tr_g for the contraction of color indices. From this expression it is clear that this superpotential violates the chiral ring stability criterion [27]. Suppose that we deform the theory by removing the first term A Q_1^2. The equation of motion of the field O_{H,12} sets the operator Q_1^2 to zero, meaning that the operator A Q_1^2 that we removed vanishes in the chiral ring of the deformed theory. Hence, this term is unstable in the superpotential (B.5) and should be dropped. Notice that this operation doesn't really modify the theory, since it can be realized with a trivial linear field redefinition (the singlet involved is a spectator from the point of view of the SO(10) enhancement, as discussed in section 3.1.2). It is easy to check that with such a modification we recover nothing but the T_2 theory of section 3. Hence, after the stabilization, the manifest UV symmetry is actually the SU(4) × SU(2)^2 × U(1)^2 symmetry of T_2, which we know is enhanced in the IR to SO(10) × U(1)^2.

Furthermore, it has been discussed that the compactification of FE[USp(2N)] on a circle and its real mass deformations lead to many interesting 3d theories exhibiting properties similar to those of FE[USp(2N)] [14, 31]. This is certainly true for FE[USp(4)]. For example, the direct reduction of FE[USp(4)] gives rise to the 3d theory with the same matter content and a monopole superpotential, which exhibits the same enhancement of the global symmetry to SO(10) × U(1)^2. In addition, one can also take a subsequent real mass deformation, as explained in [14], such that each USp(2n) factor is broken to U(n). With a certain traceless condition imposed, the resulting theory is the FM[SU(2)] proposed in [31].
While the manifest UV symmetry of FM[SU(2)] is SU(2) × U(1)^3, we have checked that its superconformal index exhibits the characters of SO(6) × U(1)^2. Thus, we expect FM[SU(2)] to enjoy the symmetry enhancement SU(2) × U(1)^3 → SO(6) × U(1)^2. Moreover, this SO(6) enhancement is closely related to an example discussed in [32]. Indeed, as explained in [31], one can further deform FM[SU(2)] to obtain the theory called FT[SU(2)] [33], which has one less U(1). FT[SU(2)] is basically the same theory as the model in [32] exhibiting SO(6), but with an unstable superpotential. After the stabilization of the superpotential as above, we obtain a U(1) gauge theory with two flavors (Q_α, Q̃_β), four gauge singlets η_{αβ} for α, β = 1, 2, and a superpotential coupling the singlets to the mesons, which is exactly the model in [32] showing the enhancement of the global symmetry to SO(6).

C The plethystic logarithm and the representations of the relations

In section 4 we have seen that higher rank theories mostly have marginal operators satisfying some relations. The correct identification of such relations is important in order to argue for the existence of independent marginal operators. In this appendix we explain how to read off the relations of the marginal operators by examining the superconformal index. We introduce the plethystic logarithm [34],

PL[f(x)] = Σ_{k=1}^∞ (μ(k)/k) log f(x^k),

which is the inverse function of the plethystic exponential

PE[g(x)] = exp( Σ_{k=1}^∞ g(x^k)/k ).

The coefficient μ(k) is the Möbius function, defined by

μ(k) = 0 if k has repeated prime factors; 1 if k = 1; (−1)^n if k is a product of n distinct primes.

If we take the plethystic log of the superconformal index, it gives the generating function of the single trace operators, both bosonic and fermionic, as well as of the relations among them, which reflect the interactions of the theory. For example, N free chiral multiplets have the PL index

PL[I] = Σ_{i=1}^N ( a_i (pq)^{r_i/2} − a_i^{−1} (pq)^{1−r_i/2} ) / ((1 − p)(1 − q)),

where a_i is the fugacity for each U(1) rotating each chiral multiplet and r_i its R-charge. There will be extra terms if the theory is interacting; in addition, the powers of pq can be adjusted by the shift a_i → a_i (pq)^{ρ_i}, with ρ_i determined by the interaction. In fact, one should remember that there are not only relations of bosonic and fermionic operators but also relations among the relations themselves. Here, however, we focus on the relations of bosonic operators, which give negative contributions to the PL index. One should note that such relations of bosonic operators appear in the PL index in two different ways: one is realized by the negative contribution of a fermionic single trace operator, while the other is a negative contribution corresponding to the absence of a bosonic operator in the original index. For example, let us consider an F-term condition coming from a superpotential

W = S O,

where S is a gauge singlet and O is some gauge invariant bosonic operator. The F-term condition of S demands that

O = 0.

This relation for the bosonic operator is realized in the index as the contribution of the fermionic operator Ψ_S, which cancels the contribution of O so that it vanishes in the index. On the other hand, the other type of relation does not come with a fermionic operator.
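Before working through the field-theory example, the PL operation itself is easy to implement and test numerically. The following Python sketch is our own illustration (not from the paper): it computes PL of a truncated power series with exact rational arithmetic and checks it on a toy generating function with one relation, PE[3x − x^2] = (1 − x^2)/(1 − x)^3.

```python
from fractions import Fraction
from math import isqrt

def mobius(k):
    """Möbius function: 1 for k = 1, 0 if k has a repeated prime
    factor, (-1)^n if k is a product of n distinct primes."""
    n, count, p = k, 0, 2
    while p <= isqrt(n):
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            count += 1
        p += 1
    if n > 1:
        count += 1
    return (-1) ** count

def mul(f, g, order):
    """Product of two series, as coefficient lists, truncated at x^order."""
    h = [Fraction(0)] * (order + 1)
    for i, a in enumerate(f[: order + 1]):
        for j, b in enumerate(g[: order + 1 - i]):
            h[i + j] += a * b
    return h

def series_log(f, order):
    """log of a series with f[0] = 1, via log(1+u) = sum (-1)^(m+1) u^m/m."""
    u = [Fraction(c) for c in f[: order + 1]]
    u[0] = Fraction(0)
    result = [Fraction(0)] * (order + 1)
    power = [Fraction(1)] + [Fraction(0)] * order  # u^0
    for m in range(1, order + 1):
        power = mul(power, u, order)
        for i in range(order + 1):
            result[i] += Fraction((-1) ** (m + 1), m) * power[i]
    return result

def plethystic_log(f, order):
    """PL[f](x) = sum_k mu(k)/k * log f(x^k), truncated at x^order."""
    pl = [Fraction(0)] * (order + 1)
    for k in range(1, order + 1):
        if mobius(k) == 0:
            continue
        fk = [Fraction(0)] * (order + 1)      # f(x^k), truncated
        for i, c in enumerate(f[: order // k + 1]):
            fk[i * k] = Fraction(c)
        logfk = series_log(fk, order)
        for i in range(order + 1):
            pl[i] += Fraction(mobius(k), k) * logfk[i]
    return pl

# sanity checks
assert [mobius(k) for k in range(1, 11)] == [1, -1, -1, 0, -1, 1, -1, 0, 0, 1]

# (1 - x^2)/(1 - x)^3 = 1 + 3x + 5x^2 + ... has PL = 3x - x^2:
# three degree-1 generators and one degree-2 relation.
hilbert = [2 * n + 1 for n in range(7)]
assert plethystic_log(hilbert, 6) == [0, 3, -1, 0, 0, 0, 0]
```

Applied to an index expansion, plethystic_log reproduces exactly the generator/relation bookkeeping used in the SU(2) example below.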
In order to understand this, let us consider the example of the SU(2) gauge theory with 8 fundamental chirals and no superpotential that we reviewed in section 2. Recall that in this theory the chiral ring generators are the mesons m_{0,ij} = Tr_g(Q_i Q_j), which transform in the antisymmetric representation of the SU(8) flavor symmetry. The PL of the index contains the terms 28 (pq)^{1/2} − 133 pq + · · · . The term 28 (pq)^{1/2} represents the chiral ring generators m_{0,ij}, while the term −133 pq corresponds to the sum of the 70 relations m_{0,[ij} m_{0,kl]} = 0 and of the 63 fermionic superpartners of the conserved current in the adjoint representation of SU(8).

In order to distinguish the two types of relations, we can introduce a fictitious fugacity F_i in the numerator of the 1-loop determinant of each matter multiplet. Then the contributions involving any matter fermion will come with an extra factor ∏_i F_i^{n_i} for some powers n_i. Such contributions can be either independent fermionic operators or fermionic operators corresponding to relations. On the other hand, we said that there are also relations that are not realized by fermionic operators; the contributions of those relations do not include any factor of F_i. If we go back to our example of the SU(2) gauge theory with 8 fundamental chirals and turn on the same fictitious fugacity F for all the fermions contained in the chirals, we obtain the following PL of the index:

PL[I] = 28 (pq)^{1/2} + (1 − 64 F − 70) pq + · · · .

The positive term corresponds to the contribution of Tr_g λ^2, which recombines with one of the 64 fermionic operators into a long multiplet in the true index that we get in the limit F → 1. The remaining 63 fermionic operators correspond to the superpartners of the SU(8) flavor current, while the 70 relations do not carry any power of F, meaning that they do not come from fermionic operators, as we anticipated before.

Now let us apply this strategy to some of the models we considered in section 4. Consider for example the rank-2 model T'^{N=2}_2 of section 4.2. This has the manifest symmetry

SU(2)_1 × USp(4) × SU(2)_4 × U(1)^3, (C.12)

which is enhanced to

SO(9) × U(1)^3 (C.13)

in the IR. In order to prove the symmetry enhancement to SO(9) × U(1)^3, it was important to argue for the existence of the marginal operators in the representation 9 of SO(9). Those independent marginal operators can be found by constructing the candidate marginal operators and their relations, which are either realized by fermionic operators or not. In particular, we have claimed in (4.32) that the relations not realized by fermionic operators are in the representation

(2,1,2) + (2,5,2) (C.14)

of SU(2)_1 × USp(4) × SU(2)_4. Here we explain how to read this representation of the relations from the PL index. Turning on a fugacity F_i = F for each matter fermion, the plethystic log of the superconformal index is given by an expansion in which we have turned off the fugacities for SU(2)_1 × USp(4) × SU(2)_4 for simplicity. If they are turned on, all the numeric coefficients are written as characters of SU(2)_1 × USp(4) × SU(2)_4, which are enhanced to those of SO(9) in the F → 1 limit. From the first several terms one can read off the bosonic single trace operators as well as their relations, especially the F-term conditions, realized by fermionic operators. For example, the first term (1 − F) a^2 r^{18/11} indicates the operator

Tr_g A^2 (C.16)

and its F-term relation

Tr_g A^2 = 0 (C.17)

due to the superpotential term a_2 Tr_g A^2.
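As a final sanity check (our own, using only the coefficients quoted in this appendix), the order-pq bookkeeping of the F-refined PL can be verified directly:

```python
# Order-pq coefficients of the F-refined PL for SU(2) with 8 chirals,
# as quoted in the text: +1 from Tr_g(lambda^2), -64 F from fermionic
# operators, -70 from relations not realized by fermionic operators.
glueball, fermionic, relations = 1, 64, 70

# At F -> 1 this must reproduce the -133 pq of the unrefined PL ...
assert glueball - fermionic - relations == -133

# ... with 63 = 64 - 1 fermions left as SU(8) current superpartners
# after one pairs up with Tr_g(lambda^2) into a long multiplet.
assert fermionic - glueball == 63

# Dimension check for the SO(9) marginal operators of section 4.2:
# dim(2,1,2) + dim(1,5,1) = 9 of SO(9)
assert 2 * 1 * 2 + 1 * 5 * 1 == 9
```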
17,483.2
2020-10-20T00:00:00.000
[ "Physics" ]
ISSUES IN COMBINING THE USE OF VARIOUS TOOLS IN SOLVING MATHEMATICAL PROBLEMS – WHY IS COPY-PASTE OFTEN USELESS Mathematical formulae are information objects that can be entered in a computer, visualized, and evaluated. Thus, the majority of (mostly occasional) users also expect them to be transferable through the simple copy-paste procedure. This transfer is particularly interesting when users are involved in tasks that span different mathematical activities or domains: for example, when performing computations and writing a report about them, or when performing algebraic computations from geometric constructions. Essentially, copy-paste also allows users to use a particular piece of mathematical software for the tasks that it does best. To uncover the possible difficulties within this process, we approached students who are beginning to learn the use of mathematical tools. By analysing their homework, in which they report on the usage of various tools, we observed their use of the copy-paste transfer procedure, both from their reports and from the dialogue they had with the teachers. Their attempts show a multitude of issues which we try to explain. It appears that the copy-paste procedure is often useless, especially for inexperienced users. When something is displayed and selectable on a computer screen, users expect it to be copy-paste-able: the copy function can be used and the selection inserted (pasted) into other places, within the same program or transferred between two different programs. Copy-paste is an exchange mechanism between different "places" inside documents. It consists of the following phases:
• An origin "content" is selected (e.g., with the mouse).
• It is copied; this transfers several copies of the content at the origin into the system's clipboard, one for each flavour.
• A target place is selected in the document being edited.
• The contents of the clipboard are pasted. The application editing the document chooses which flavour is best and inserts its associated content in the given place.
Operating systems and applications have been offering this procedure for decades. It is already part of the daily routine, at least for texts and images. Mathematical formulae displayed within various computation or editing systems could, in principle, also be transferred. How successful is this? How often do users assume that it is possible to use the copy-paste procedure for the exchange between systems, and how often does this procedure fail to produce the desired result? Should there be more effort put towards making the exchange process between applications smoother? These questions represent the core of this paper.
This appears particularly interesting as mathematical software is diverse and rich, and each piece of software can perform some tasks very well and others only below par. For example, it is natural within a dynamic geometry system to study the geometry of various configurations using the visual aspect, but it is quite difficult to obtain symbolic expressions for the coordinates of an intersection. Similarly, preparing the layout of a document containing mathematical formulae with word-processing or page-layout software is feature-rich, flexible, and often considered easy. It can, however, also be done with computer algebra systems, sometimes with spreadsheet systems, or even with geometry systems, but doing so appears to be a largely unrealistic task where users have to agree to multiple compromises and may have to do considerably more work to achieve a result of similar quality. Our observations confirmed the initial postulation that students mostly do not expect any difficulties in using the copy-paste procedure, contrary to more experienced users, who are well aware of the limitations, as they understand the concepts behind the representations of each system. The students were surprised when things did not work, or, even worse, they performed the procedure without noticing the incomplete or wrong transfer of a mathematical object caused by the copy-paste procedure. An experienced user can find a roundabout solution for quite a few of the reported problems, as they understand the meaning of the underlying representations. However, the main emphasis in this paper is on the problems an inexperienced user, as students mostly are, has when using the copy-paste procedure. It is thus argued in this paper that more emphasis should be put on the possibilities of exchange between programs, and that it should be investigated to what extent such exchange is expected by current users.
Outline The paper is organized as follows: first, we describe the backgrounds of the students that we observed, including the competencies aimed at. We then present the techniques that we used to analyze the copy-and-paste processes from a technical point of view, and some of the most typical difficulties and issues that the students expressed concerning the usage of the copy-paste procedure. An attempt to analyse the origins of those difficulties follows, where we describe what we consider the expectations of (especially inexperienced) users to be and suggest possible future avenues of research.
Related work Very few academic papers describe user studies of the copy-paste function, and there are even fewer papers on the topic that discuss its use for mathematical purposes. A mention of copy-pasting TeX expressions as a positive feature is found in [21]. The survey [12] mentions it in connection with graphics only. The gesture survey [19] indicates the copy-paste feature (in abstraction) as an aspect of user-interface difficulty, since the user may obtain different services depending on the selection. Industrial specifications, however, commonly describe the operations that application programs should follow for copy-paste, including user-interface guidelines. Notably, this includes the concept of Uniform Type Identifier to describe the data-type (generally called flavour) stored in the clipboard on Apple-branded computers [2], and the predefined data-types available in Windows environments [16].
Industrial standards attempt to support widespread use of agreed data-types. Among others, one can find MathML [6], which attempts to define its clipboard flavours in the media-type declaration (in Annex B), a practice which has been followed by a few other standards. Mathematical input can be considered within a register, a set of conventions and signs which represent mathematical objects [9]. In the technology-based landscape, registers include the affordance to input mathematical objects and to apply operations to them. Examples of technology-based registers include formulae for computation systems, formulae for typography, geometric configurations, or tabular data. While these registers appear closed, many translations between them are feasible (e.g., displaying a formula as the result of a computation, converting a geometric configuration into a symbolic expression, plotting a data-set, or converting a formula from a text document into a computable one). The copy-paste procedure expands the spectrum of possible operations on mathematical objects. The book [13] gives a broad spectrum of educational and mathematical innovations that technology empowers. As an eloquent description of the potential operations on mathematical objects, this book contains [20], which depicts multiple facets of mathematical assessment tools, including the very diverse procedures that evaluate mathematical objects. We thus see that the literature we have found describes the copy and paste functionalities as a potentially important aspect but never as a central feature of mathematical activities. Hence, the present paper is an attempt to gather the current state of anticipations on how a beginner user would expect to use the functionalities of computer tools in their mathematical activities and how these expectations tend to succeed or fail.
Experiment Framework: A Course in Practical Mathematics In order to investigate the possible difficulties that users of mathematical software, especially beginners, could have in using the copy-paste procedure, we have chosen a population of users of mathematics software which, we believe, represents a fair share of the mathematics professionals of the future: students of mathematics. In the preparation of the course, we focused mainly on the highlighted competences. Therefore, the main aim of this course is to show the students the practical usage of various tools in solving mathematical problems. Students learn about and learn to use techniques for computer tools suitable for various mathematical procedures. An important aspect of it is learning how to report on their work. The syllabus is primarily focused on getting used to the programs for numerical and symbolic calculations, drawing graphs of functions, and solving mathematical tasks in analysis and algebra with a computer. Tools that are covered include computer algebra systems, spreadsheets, visualization tools, and computer-aided dynamic geometry systems. We agreed on the following intended learning outcomes:
• Knowledge and understanding: The students are familiar with computer tools and approaches to their usage in mathematical procedures. They are able to report on the process of solving a mathematical problem with computer tools.
• Application: The students are able to employ the knowledge gained to support various subjects.
• Reflection: The students are able to recognize the importance of computer tools in learning and doing mathematics and to report on solving mathematical problems.
• Transferable skills: The students are able to embrace other mathematical subjects employing learning tools, in order to deepen their knowledge of these subjects.
This course appears to be a well-chosen space to investigate what users naively do with the copy-paste procedure: in many cases, users are likely to be stronger at creating or studying mathematical topics within one environment instead of another. This naturally leads them to expect the copy-paste procedure to work.
Observing the Clipboard Technically As has already been explained, the clipboard (the "place" containing what is being copied) is made of a series of data-fragments labelled by a flavour: source applications export all flavours, while receiving applications choose the representations they consider best for importing data within the current insertion process. A single exception to this rule is the use of the copy-paste procedure within a single application, which then has the ability to copy by reference and thus fetch the data in the internal representation of the source program. To observe what is being exchanged over the clipboard, small programs called clipboard inspectors can be used. We present the two which were used: in Windows, the open-source project ClipSpy [8] provides a simple overview of the names and a hexadecimal dump of the content of each flavour; similarly, in macOS, Clipboard Viewer.app is a program available as sample software from the Apple developer tools. A screenshot of each is presented in Figure 1. The usage of these tools has allowed us to see which flavour has been used in the exchanges we have reproduced below.
The Observations After an introductory period during which students got basic knowledge of the functionalities of a few applications (computer algebra (Mathematica), numerical processing (MATLAB), dynamic geometry (GeoGebra), and layout tools (Microsoft Word, LibreOffice Writer)), students got an assignment to solve several mathematical tasks where the tools were used in the solution process and to report on the whole process. Some were allowed to use web-based editors such as the Wiris Input Editor. The task given was to solve a mathematical problem and to present its solution in electronic media. They also had to write a report on their approach to the solution, describing the way they used the software, and to report on possible difficulties and the expectations they had beforehand about their use of the tools. In all the given problems, the usage of different mathematical software tools was implicitly (or sometimes even explicitly) foreseen. Most often a combination of a computer algebra system, a dynamic geometry system, and numerically oriented matrix software was required. Therefore, even when certain tasks could be solved within one environment (one piece of software), the students' use of multiple tools was highly likely, due to their limited familiarity with them. For reporting, they mostly used the most common word-processing software. We were especially interested in observing how they would cope with the process of exchanging mathematical objects between programs and how they would report on that.
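Incidentally, the flavour enumeration that ClipSpy and Clipboard Viewer.app display can also be reproduced programmatically; the following Python sketch (our own illustration, assuming the pywin32 package on Windows) lists the flavours currently present on the clipboard.

```python
# A minimal sketch (our own, assuming pywin32 on Windows): list the
# flavours ("clipboard formats") that the source application exported.
import win32clipboard as wc

# a few of the predefined Windows formats, for readable output
PREDEFINED = {1: "CF_TEXT", 2: "CF_BITMAP", 8: "CF_DIB", 13: "CF_UNICODETEXT"}

wc.OpenClipboard()
try:
    fmt = 0
    while True:
        fmt = wc.EnumClipboardFormats(fmt)      # returns 0 at the end
        if fmt == 0:
            break
        try:
            # registered (application-defined) formats carry a name,
            # e.g. "MathML" or "HTML Format"
            name = wc.GetClipboardFormatName(fmt)
        except Exception:
            name = PREDEFINED.get(fmt, "<predefined>")
        print(fmt, name)
finally:
    wc.CloseClipboard()
```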
Here is an abbreviated example of a task (an important part of the instructions was also the requirement to consult the lecturer and the teaching assistant about the task, where the students were directed towards the appropriate use of tools and the copy-paste process): Beside the solution process, you should also report to what extent the clipboard (the copy-paste combination) is useful when transferring a mathematical object from one computer tool to another. For example, we perform a certain computation with MATLAB. Now we would like to transfer the matrix obtained to Mathematica (or to GeoGebra, or to …). Of course, the majority of those exchanges will occur during the writing of the report, where equations and such will be transferred from or to the word processor. Describe your expectations and the experiences you had!
Selected Cases In this section, the students' observations from their seminar papers are gathered and representative samples are presented. We also used information from the consultations students had with the teaching assistant and the lecturer. We group these observations according to the pair of software tools used. The majority of examples presented in pictures are taken from the students' reports; however, some of them were reproduced by us to analyse them further (via a clipboard viewer, for example), to group them accordingly, or to improve the quality of pictures which were sometimes unsuitable for publication. Perhaps this quote from one of the seminar papers summarizes best the opinion the students had: "It was an extremely tough task. I spent numerous hours doing it. I tried several techniques to force the programs to properly copy-paste the expression, with no success. Finally, I transferred the expressions manually. But here I had problems again, as even the usage of parentheses, not to mention the names of commands, was not the same. I do not understand why each software producer uses their own syntax." For quite a few of the reported problems, an experienced user can devise a workaround. However, the main emphasis in this paper is on the problems an inexperienced user, as students mostly are, has in using the copy-paste procedure.
From Word to Mathematica The most common pattern of use was the MS Word and Mathematica pair. Through observations (mostly from the consultations students attended) we identified transfers of mathematical objects in both directions: from Word to Mathematica as well as from Mathematica to Word. In Word, the student enters the expression using Insert > Equation, marks the expression, uses Ctrl-C (the copy operation), switches to Mathematica, and uses Ctrl-V (paste). How successful were the students in this first part? If using that method of copying, a warning appears when the computation is requested, depicted in Figure 2, which indicates that interpretation issues may appear. As reported in consultations, students usually decided not to press Always and were bothered by the warning at every pasting attempt. Many examples perform well. The intuitive exchange works and further computations can be done right away. Mathematica had no problems with recognizing simple expressions; it properly recognized the multiple elaborate expressions depicted in Figure 3.
Of course, there are issues when students employed notations which are beyond Mathematica's notations. Examples reported include sen(α) for the sine function of alpha in Spanish-speaking countries (reported as a widespread notation for the sine and cosine functions at notations.hoplahup.net/Census/CD-transc1/sin; among the students we observed there were two Erasmus students from Portugal), or the bullet for multiplication (see Figure 4). Other input modes: we assumed that students employ the menu command Insert > Equation. We, however, frequently observed that simple formulae are generally typed as part of the text and that special transformations (called auto format – AF) support this practice: thus 6-b becomes 6–b (the minus sign becomes a dash), and similarly 1/4 becomes ¼. These characters stay characters when pasted into Mathematica and are thus treated as variable names, as in Figure 5: Mathematica treats – (the dash form) as a variable when 2+3-a + b -a is entered as text. Screenshots in Word (left) with auto-format (above) or without it (below), pasted into Mathematica (right). Students usually do not expect this, as they blindly assume the correctness of the copy-paste procedure. Using Free-form input in Mathematica alleviates some of the issues described above (notably the dash instead of the minus). This mode, however, did not succeed with other parts which otherwise worked normally, notably with all variables that are made of several letters.
From Mathematica to Word Probably the most used transfer of mathematical objects via copy-paste was from Mathematica to Word. Having worked with Mathematica, students have some results in Mathematica's notebook and would like to copy them into Word. Many students made snapshots of Mathematica's window, cropped them appropriately, and then inserted them into Word as pictures. This is clearly a non-mathematical practice which we shall not discuss; we are interested in the process where expressions stay expressions. It, however, satisfies the students, as the consistency of the display is ensured. If the same copy-paste technique is used, we did not observe any problems. The expression is entered into Word as an equation. This equation can be modified or used. However, the situation is not an ideal one: a couple of students asked the teaching assistant how to ensure that the notations in Word and Mathematica are more similar. Some peculiarities of Mathematica's syntax, such as the use of square brackets for functions' arguments and function names starting with an uppercase letter, are not converted back. So, from Word, the equation in Figure 6, transferred to Mathematica and back, is converted to the equation in Figure 7, to which careful students object.
Between GeoGebra and Mathematica One of the possible tasks students could choose was to compare the capabilities of Mathematica and GeoGebra in dealing with basic algebraic expressions. As they have an apparently similar syntax, one of the students wrote in his report: "as I was short of time, I picked this task expecting everything would be mostly the same and my seminar paper thus quickly done. However, I had major problems". Several other students also reported on different issues that occurred during the copy-paste transfer between these two programs. This section builds on the issues reported by students and reports to what extent Mathematica and GeoGebra (in CAS mode) are capable of exchanging expressions.
The reader is reminded that formulae in GeoGebra CAS's output can be copied only using the contextual menu triggered by a right click on the expressions. Thus, only complete expressions can be copied. If only basic operations are used, as can be seen in Figure 8, there are no special issues, except for the minor fact that the copy operation in GeoGebra already equips the expression with additional parentheses and inserts the multiplication symbol. There are also no problems with the usage of special symbols for variables, such as α or β (Figure 9), regardless of whether we copy from the input or the output expression. Using Free-form input in Mathematica (starting the expression with the = symbol) is problematic whenever variables with long names are used. As GeoGebra and Mathematica use similar notations (essentially characterized by square brackets for commands), there are no problems whenever commands with the same name and the same number of parameters are used (Expression 1 in GeoGebra vs. Expression 10 in Mathematica in Figure 10). This interoperability rapidly meets its limits, however. For example, the command Factor in GeoGebra has a version with two parameters which is unknown to Mathematica (compare expressions 2, 3, and 4 in GeoGebra with expressions 11, 12, and 13 in Mathematica in Figure 10, and the attempt using Free-form input in Mathematica in Figure 11). Now take the opposite direction: Mathematica to GeoGebra. Ordinary expressions are not problematic, as long as special symbols are not used. However, copying expressions where we were previously successful, such as α + β, now fails. As can be seen in Figure 7, Mathematica inserts its own syntax (which has a LaTeX style): this creates errors in GeoGebra (Figure 12). Even the use of the alternative copy functionality (Copy as…) is unsuccessful. For example, Figure 13 displays the result of pasting after having used "Copy as MathML". With commands that are common to both Mathematica and GeoGebra, a transfer can happen, as Mathematica copies them to the clipboard as plain text. When GeoGebra does not recognize the command, it also changes the parentheses from square to ordinary ones.
Wiris Input Editor and Mathematica Some of the students decided to use the Wiris Input Editor as part of a reporting tool. As in the previous section, exchanges between two similar function-scopes are explored so that they complement each other in different situations (e.g., Wiris when on browsers only and Mathematica when at the lab, or Wiris to facilitate input, e.g., with pen input, and Mathematica to solve difficult differential equations…). The Wiris Input Editor is a web-based component and thus cannot export arbitrary flavours; instead, it puts MathML in the clipboard as plain text. Pasted MathML is content-sniffed by Mathematica; that is, Mathematica detects that the plain text is not to be interpreted as source (which is generally expected from plain text). Thus, it displays a dialog proposing either to paste the plain text (the MathML source) or to interpret the MathML (Figure 14). It is probable that eager users will answer Always, but users that expect to manipulate MathML sources might feel uncertain doing so and will thus be bothered by such a dialog every time.
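To make the two MathML dialects involved here concrete, the following small sketch of our own (not part of the study) uses sympy to serialize the same expression as content MathML and as presentation MathML — the two flavours a richer clipboard could carry side by side.

```python
# Our own illustration: one expression, two MathML "flavours".
from sympy import symbols
from sympy.printing.mathml import mathml

alpha, beta = symbols('alpha beta')
expr = alpha + beta

# content MathML encodes the operation tree (<apply><plus/>...), which a
# CAS can compute with; presentation MathML encodes the visual layout
# (<mrow><mi>...</mi>...), which a word processor can display.
print(mathml(expr))
print(mathml(expr, printer='presentation'))
```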
After MathML is input, a warning similar to that of Figure 3 about the usage of Mathematica's traditional form is displayed. It may also inform users that a possibly failing interpretation of MathML presentation into a computable form will be performed. After going beyond these warnings, and using proper notations, the exchanges work rather well, as displayed in Figure 15. However, these exchanges failed for some mathematical symbols which are not interpreted as known notations by Mathematica. Among others, while the binomial coefficient is exchanged well, the permutations count (without repetitions) is not.
Matrices from Octave to Mathematica One of the students, who started his report with "I am going to show how matrix objects can be copied from one program to another. The programs in question will be Mathematica, GeoGebra, Octave and Python (numpy).", quickly realized: "Since a matrix is represented differently in each of the programs, we cannot simply copy-paste it. Some conversion must be done." What follows are selected problems the students reported on. The first problem arises when we try to define a vector (Figure 16): vectors in Octave are entered using square brackets, while curly brackets should be used in Mathematica. The same confusion happens when matrices are used (Figure 17). When trying to copy the defined vector or matrix from Octave to Mathematica, we get the product of each line as a result (Figure 18). Elementary operators seem to work better between these two systems. It is obvious from Figure 19 that while addition and subtraction work, multiplication in Mathematica multiplies element by element, while Octave does matrix multiplication. Transposing does not work either, and element-by-element division in Octave cannot be copy-pasted either.
Analysis In this section, we first present a feedback extract which gives representative feelings of the students towards the copy-paste procedure. We then try to discuss the issues that most of the students encountered. In the last part, we sketch how one could approach the construction of a more useful copy-paste procedure.
Individual Feedbacks Some observations and remarks of the students, copied from their seminar papers and expressed in consultations with the teaching assistant and lecturer (translated from Slovenian by the authors):
• "... The copy-paste method does not work with drawing plots in different programs. For drawing plots, each of the programs used has its own 'language' which we have to use."
• "When we copied expressions from Word to GeoGebra, it worked in some cases even without changing expressions.If exponents were written with symbol '^' in Word, there would not be any problems.From there on, some changes and adaptations were necessary, especially when copying expressions from GeoGebra to Mathematica" • "...I was very rarely able to use the copy-paste method to transfer expressions between programs and even then some corrections were usually needed for the programs to work" • "After several attempts I gave up.The only way to transfer an expression between the programs is to manually retype it …" • "Entering a matrix in X is surprisingly identical to entering the matrix in Y.But unfortunately this is where the fun ends."• "Apart from matrix and scalar addition, and matrix scalar multiplication, the commands for matrices, even though really similar, are not compatible and are misinterpreted by those three (Octave, GeoGebra, Mathematica) applications."In the discussion afterwards, they all expressed their profound disappointment with the software.When the students started the task, most of them expected some minor problems.As one student said, "it is all mathematical software and there is but one mathematics, so I expected no problems in using the copypaste procedure". From their feedback, a global frustration can be inferred: the programs do not allow a transparent continuation of work from one environment to another. Issues to Copy-and-paste Based on the examples, we deduce that the following issues are common to beginner users of mathematical systems:  Encodings mismatches: While some programs transfer much of the information into the clipboard, allowing other programs to recover most information in principle, quite a few merely include information such as the identified meaning of an operator or the desired spacing around them in encodings which are difficult for other applications to receive in a consistent manner. Acta Didactica Napocensia, ISSN 2065-1430 We could observe this behaviour when copying, e.g., from Mathematica to GeoGebra (e.g. the Greek letters), from Wiris to Mathematica (e.g. the permutation count), or when copying vectors from Octave to Mathematica.We have not encountered this issue for visual mathematical formulae when carried from Mathematica or Wiris to Word for example (where MathML-presentation seems to be successfully complete).  Preference for pasting plain text when other formats may be more appropriate.It appears that most programs we met generally choose to export content in plain text and often prefer to paste it so.This is an issue because other encodings seem to be ignored even though they could offer a far richer experience and their encoders are already available in these systems (this is the case of Mathematica or Maple for example).Such a preference can be observed when copying Greek-letter variables from Mathematica (which then prefers to export the TeX-syntax of the Greek letters).This is also shown to be an issue in the Octave-Mathematica or GeoGebra-Mathematica exchanges described above: the syntax compatibility rapidly meets its limit.This is also necessary for the Wiris Input Editor, being a JavaScript widget which has no authorization to export MathML or other flavours; the experience remains somewhat positive thanks to contentsniffing, the practice of analysing the bytes of a piece of content in order to bypass the flavours declarations and decide on the better of the flavours. 
• Misinterpretation of mathematical notation: Employing layout software to input mathematical formulae leaves the user quite free to use mathematical notations in a way that does not make sense. What makes sense or not, however, is not shared across the planet. Examples include the usage of the cross sign for multiplication or the word sen to denote the sine function (which fails from Word to Mathematica). Similarly, different software tools use different function names which are not properly exchanged, e.g., the symbol P to denote the permutations count from Wiris to Mathematica. Dedicated extensions of the parsing engines are offered, but these then compromise other aspects (e.g., the Free-form input of Mathematica).
• Lack of usage of more semantic encodings: All of the computer algebra systems described in this paper include a rich set of mathematical operators and functions. However, any interoperability they offer to the outside world is either in linear syntax (known to be very program-specific) or in presentation-oriented syntax (such as MathML presentation or pictures). We see that this usage rapidly encounters issues in the Octave–Mathematica and GeoGebra–Mathematica exchanges above.
• Partial but unpredictable success: Probably the biggest user-perceived issue, and what we have seen as the most frustrating aspect of the exercise, is the non-uniformity of exchange capabilities. The examples described above show a very diverse set of interoperability issues: from elementary incompatibilities to subtle mismatches that simply need a text adjustment. There appears to be no way of explaining to a user, in simple terms, the rules for telling whether a formula that has just been processed will be transferred successfully.
Future Work Many of the issues are clearly related to the fact that most software makers anticipate only a few possible software targets where the copy-and-paste exchange could happen. While one sees dedicated attempts to make things work (e.g., Mathematica to Maple, according to the software makers), there is probably too big a diversity to attempt simple pair-wise checking, and abstraction is required. This abstraction is provided by standardized encodings: when exchanging pictures, for example, it is clear that raster pictures use one of the current formats (PNG, TIFF, JPEG, PDF …). Would plain-text syntax be sufficient for this? This is not usually the case (e.g., when inspecting the GeoGebra–Mathematica exchanges); a sketch of the kind of per-pair rewriting that plain text requires is given below. Attempts at making TeX such a standard have almost always failed, since configurable macros are almost always desired and are not exchangeable.
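As an illustration of why plain text alone forces per-pair adaptations, the following toy converter (our own sketch, not part of any of the tools discussed) rewrites an Octave matrix literal into Mathematica's nested-list syntax; it handles numeric literals only, and every additional pair of tools would need its own such rewriter.

```python
# Our own toy sketch: rewrite an Octave matrix literal such as
# "[1 2; 3 4]" into Mathematica's "{{1, 2}, {3, 4}}". Rows are separated
# by ';', elements by spaces or commas; no expressions, just literals.
def octave_to_mathematica(src: str) -> str:
    body = src.strip().lstrip('[').rstrip(']')
    rows = []
    for row in body.split(';'):
        elems = [e for e in row.replace(',', ' ').split() if e]
        rows.append('{' + ', '.join(elems) + '}')
    # a single row is a vector, which Mathematica writes without nesting
    return rows[0] if len(rows) == 1 else '{' + ', '.join(rows) + '}'

print(octave_to_mathematica('[1 2; 3 4]'))   # {{1, 2}, {3, 4}}
print(octave_to_mathematica('[1, 2, 3]'))    # {1, 2, 3}
```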
Standardized encodings are provided by such standards as MathML [6], OpenMath [18], or the emerging I2G [1], with a definition made outside the scope of a single piece of software and based on the consensus of software makers. Indeed, the use of MathML presentation by the Wiris editor was successful to a large extent. We claim that a computer algebra system that would perform its copy function as MathML content, OpenMath, MathML presentation, HTML with MathML, vector and raster pictures, and plain text would have much bigger success than what can currently be observed. Other computer algebra systems would understand all basic expressions of MathML content and OpenMath (avoiding the problems between Octave and Mathematica, and most of the problems between GeoGebra and Mathematica), paste them properly as formulae within word-processing applications, email programs, and web-based editors, as vector and raster pictures in the respective applications, and finally as plain text for all others. The use of proprietary syntax might be added to the mix (e.g., the input source syntax, and probably richer representations), but it should not be in the plain-text flavour but in dedicated flavours which indicate the syntax and version used. To evaluate this claim, and to support users with more precise reporting and inspection of the clipboard when involved in a mathematical task where reporting is not their main focus, we plan to construct a help tool that is able to detect the origin of a clipboard, perform a large part of the automatic conversions that are generally available (e.g., the numerous parsers for Maple or Mathematica, or online conversion services based on these software packages), and enrich the clipboard. This tool would also be the central place to report on the utility and success of a copy-paste procedure when performing a particular mathematical task. Such a tool would allow a more statistically exact evaluation of the users' needs, beyond the scope of single experiments within a lecture. Another area of research lies in the mathematics-educational dimensions implied by the use of the copy-paste procedure: it represents a truly computer-empowered conversion between registers in the sense of [9]. As indicated there, a conversion keeps one part of the content and loses another. The mathematical exercise of representation change lies nearby and is likely to be an effective exploration of mathematical objects, provided appropriate questions are posed to the students about the transferred objects: a simple operation such as a function evaluation, a derivation, or another well-known property check allows any user to verify that a copy-pasted expression has indeed been transferred as a function. Applying such strategies in teaching can support critical thinking in the use of computer tools, where mathematical operations support the user in questioning the validity of the objects being manipulated. A mix of media critique and mathematical competencies is thus developed, especially if some of the programs used come from sources of unknown reputation, such as web pages or freshly installed software.
Figure 6: With classical notation, input in Word.
Figure 7: Changed notation, back from Mathematica.
Figure 9: Using special symbols from GeoGebra to Mathematica.
Figure 13: Using Copy As MathML, then pasting in GeoGebra.
Figure 18: From Octave output to Mathematica input.
For the seminar paper, you have to choose one of the tasks given below. You have to study it thoroughly and consider all possible special cases. In the solving process, you will combine your mathematical knowledge as well as the capabilities of computers. To successfully solve the task (which, "purely" mathematically, should not be a problem for you), several of the steps have to be solved with the computer tools we have learned about. It is expected that you will use various tools. If none of the tasks given is interesting for you, you may suggest another problem (which will (or will not) be approved by the lecturer):
• The Fibonacci sequence: geometrical meaning and usage …
A very important part of your job is to write a report about the problem. Here you should use the capabilities of the word processor. Use the results obtained from the tools and insert them in the report, as well as screenshots of the "tool in action".
• "In this experiment we found that Octave and Mathematica are barely compatible, so copy-pasting from one to another would almost never work. They are only compatible in two basic operators, + and -. Trying to copy anything from one and pasting it into another would either generate an error or return a completely wrong answer."
• "Mathematica converts expressions, but it works poorly."
• "If we look back, we can see that matrices did not do well after the copy-paste procedure, be it in Mathematica or in Octave. But this is not the only issue. Equations are not capable of adapting from Mathematica to Octave and the other way around, either. Calculus has fewer problems with variables/unknowns, while basic math operations/expressions do not have any."
• "Please do not give us such tasks anymore. It is impossible to switch between different syntaxes all the time. At the end I did not even know if I had got everything right, even though the mathematics behind the problems was simple enough."
7,816.2
2017-05-30T00:00:00.000
[ "Computer Science" ]
Succinic acid inhibits the activity of cytochrome P450 (CYP450) enzymes Abstract Context Succinic acid, extracted from amber, is widely used in cardiovascular therapy. Objective The effect of succinic acid on the activity of cytochrome P450 (CYP450) enzymes was investigated in this study. Materials and methods The effect of succinic acid (100 μM) on the activity of eight isoforms of CYP450 (i.e., 1A2, 3A4, 2A6, 2E1, 2D6, 2C9, 2C19, and 2C8) was investigated, compared with specific-inhibitor and blank controls, in pooled human liver microsomes in vitro. The inhibition of the CYPs was fitted with competitive or non-competitive inhibition models, and the corresponding parameters were obtained. Results Succinic acid exerted an inhibitory effect on the activity of CYP3A4, 2D6, and 2C9, with IC50 values of 12.82, 14.53, and 19.60 μM, respectively. Succinic acid inhibited the activity of CYP3A4 in a non-competitive manner with a Ki value of 6.18 μM, and inhibited CYP2D6 and 2C9 competitively with Ki values of 7.40 and 9.48 μM, respectively. Furthermore, the inhibition of CYP3A4 was found to be time-dependent, with K_I = 6.52 μM and k_inact = 0.051 min−1. Discussion and conclusions Succinic acid showed in vitro inhibitory effects on the activity of CYP3A4, 2D6, and 2C9, which indicates potential drug-drug interactions. Succinic acid should be carefully co-administered with drugs metabolized by CYP3A4, 2D6, and 2C9.
Introduction Cardiovascular disease is a ubiquitous heart disease that threatens human health (Gać et al. 2017). With the development of various therapies, traditional Chinese medicine (TCM) has been widely used in the treatment of cardiovascular disease. In TCM, it is common to co-administer different kinds of drugs in one prescription. Succinic acid is a main component of amber, which is commonly used for the therapy of arrhythmia. Succinic acid has been reported to possess a variety of pharmacological effects, including cardioprotective, antithrombotic, anti-inflammatory, and antibacterial ones, which make it likely to be co-administered with other drugs (Tang et al. 2013; Zhang et al. 2014; Radkowski et al. 2018; Nissen et al. 2019). Cytochrome P450 (CYP450) enzymes are responsible for the metabolism of a wide array of endogenous compounds and xenobiotics and exist mostly in the liver (Uno et al. 2012). Approximately 70-80% of known Phase I and II metabolism is attributed to CYP450s (Foo et al. 2015; Li et al. 2015). The induction or inhibition of CYP450s might affect the concentration of co-administered drugs in the blood or at therapeutic targets, resulting in adverse effects, drug toxicity, or treatment failure (Kiser et al. 2013). Recently, the in vitro effect of various bioactive compounds on the activity of CYP450s has been studied in human liver microsomes, a system close to the in vivo one. However, the specific effects of succinic acid on the activity of CYP450s were not clear, and such knowledge is necessary for the clinical administration and co-administration of succinic acid.
Assay with human liver microsomes As shown in Table 1, to investigate the effects of succinic acid on the different CYP isoforms in HLM, the following probe reactions were used, according to the previously described method (Zhang et al. 2007; Qi et al.
2013): phenacetin O-deethylation for CYP1A2, testosterone 6β-hydroxylation for CYP3A4, coumarin 7-hydroxylation for CYP2A6, chlorzoxazone 6-hydroxylation for CYP2E1, dextromethorphan O-demethylation for CYP2D6, diclofenac 4′-hydroxylation for CYP2C9, S-mephenytoin 4-hydroxylation for CYP2C19, and paclitaxel 6α-hydroxylation for CYP2C8. All incubations were performed in triplicate, and the mean values were utilized. The typical incubation systems contained 100 mM potassium phosphate buffer (pH 7.4), an NADPH-generating system (1 mM NADP+, 10 mM glucose-6-phosphate, 1 U/mL of glucose-6-phosphate dehydrogenase, and 4 mM MgCl2), the appropriate concentration of HLMs, a corresponding probe substrate, and succinic acid (or a positive inhibitor for the different probe reactions) in a final volume of 200 μL. The concentration of succinic acid was 100 μM, and the positive inhibitor concentrations were as follows: 10 μM furafylline for CYP1A2, 1 μM ketoconazole for CYP3A4, 10 μM tranylcypromine for CYP2A6, 50 μM clomethiazole for CYP2E1, 10 μM quinidine for CYP2D6, 10 μM sulfaphenazole for CYP2C9, 50 μM tranylcypromine for CYP2C19, and 5 μM montelukast for CYP2C8. Probe substrates, positive inhibitors (except for dextromethorphan and quinidine, which were dissolved in water), and succinic acid were dissolved in methanol, with a final concentration of 1% (v/v), and 1% neat methanol was added to the incubations without inhibitor. The final microsomal protein concentration and the incubation times for the different probe reactions are shown in Table 1. There was a 3 min preincubation period (at 37 °C) before the reaction was initiated by adding the NADPH-generating system. The reaction was terminated by adding 100 μL of an acetonitrile internal standard mix (10% trichloroacetic acid for CYP2A6), and the solution was placed on ice. The mixture was centrifuged at 12,000 rpm for 10 min, and an aliquot (50 μL) of the supernatant was transferred for HPLC analysis. The instrument used in this study was an Agilent 1260 series instrument with DAD and FLD detectors, and the quantitative assay for the corresponding metabolites was performed as previously reported (Lang et al. 2017; Zhang et al. 2017).
Enzyme inhibition and kinetic studies of succinic acid Succinic acid (100 μM) was used to initially screen for direct inhibitory effects towards the different human CYP isoforms. For the CYP isoforms whose activities were strongly inhibited, secondary studies were performed to obtain the half-maximal inhibitory concentration (IC50). Ki values were obtained by incubating various concentrations of the different probe substrates (20-100 μM testosterone, 10-50 μM dextromethorphan, 5-20 μM diclofenac) in the presence of 0-50 μM succinic acid.
Time-dependent inhibition study of succinic acid To determine whether succinic acid could inhibit the activity of CYP3A4, 2D6, and 2C9 in a time-dependent manner, succinic acid (20 μM) was pre-incubated with HLMs (1 mg/mL) in the presence of an NADPH-generating system for 30 min at 37 °C. After incubation, an aliquot (20 μL) was transferred to another incubation tube (final volume 200 μL) containing an NADPH-generating system and probe substrates whose final concentrations were approximately equal to Km. Then, further incubations were performed to measure the residual activity. After being incubated for 0, 5, 10, 15, and 30 min, the reactions were terminated by adding 100 μL of an acetonitrile internal standard mix and then placed on ice; the corresponding metabolites were determined by HPLC.
To determine the K_I and k_inact values for the inactivation of CYP3A4, the incubations were conducted using higher probe substrate concentrations (approximately 4-fold the Km values) and various concentrations of succinic acid (0-50 μM) after different preincubation times (0-30 min), with a two-step incubation scheme, as described above.
Statistical analysis The enzyme kinetic parameters for the probe reactions were estimated from the best-fit line, using least-squares linear regression of the inverse substrate concentration versus the inverse velocity (Lineweaver-Burk plots), and the mean values were used to calculate Vmax and Km. Inhibition data from the experiments that were conducted using multiple compound concentrations were represented by Dixon plots, and inhibition constant (Ki) values were calculated using non-linear regression according to the following equation:
v = Vmax·S / (Km·(1 + I/Ki) + S)  (competitive inhibition; for non-competitive inhibition the factor (1 + I/Ki) multiplies the whole denominator),
where v is the reaction velocity, I is the concentration of the compound, Ki is the inhibition constant, S is the concentration of the substrate, and Km is the substrate concentration at half the maximum velocity (Vmax) of the reaction. The mechanism of the inhibition was inspected using the Lineweaver-Burk plots and the enzyme inhibition models. The data comparison was performed using Student's t-test in IBM SPSS Statistics 20 (SPSS Inc.).
Succinic acid inhibits the activity of CYP3A4, 2D6, and 2C9 The activity of CYP3A4, 2D6, and 2C9 was significantly inhibited after incubation with succinic acid (p < 0.05, Figure 2). The administration of 100 μM succinic acid reduced the activity of CYP3A4 to 14.12%, and the IC50 value was 12.82 μM (Figure 3(A)). Similarly, the inhibition of CYP2D6 and 2C9 was also concentration-dependent, with IC50 values of 14.53 and 19.60 μM, respectively (Figure 3(B, C)). The results from the Lineweaver-Burk plots showed that the inhibition of CYP3A4 was best fitted by the non-competitive model, with a Ki value of 6.18 μM (Figure 4). The inhibition of CYP2D6 and 2C9 was shown to occur in a competitive manner, with Ki values of 7.40 and 9.48 μM, respectively (Figures 5 and 6).
The inhibition of CYP3A4 was time-dependent The activity of CYP3A4 decreased to 26.65% after incubation with succinic acid for 30 min, which indicated that the inhibition of CYP3A4 by succinic acid occurred in a time-dependent manner, while the activity of CYP2D6 and 2C9 was not affected by the incubation time (Figure 7). For the time-dependent inhibition of CYP3A4, the corresponding parameters were K_I = 6.52 μM and k_inact = 0.051 min−1, which suggests that up to about 5.1% of CYP3A4 became inactivated per minute by succinic acid (Figure 8).
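As an illustration of this kind of non-linear regression (our own sketch with made-up numbers, not the paper's data or code), the competitive model above can be fitted with scipy:

```python
# Our own sketch: fit the competitive-inhibition model
# v = Vmax*S / (Km*(1 + I/Ki) + S) to (S, I, v) data with scipy.
# All numbers below are synthetic placeholders, not the paper's data.
import numpy as np
from scipy.optimize import curve_fit

def competitive(X, Vmax, Km, Ki):
    S, I = X
    return Vmax * S / (Km * (1.0 + I / Ki) + S)

rng = np.random.default_rng(0)
S = np.tile([20.0, 40.0, 60.0, 80.0, 100.0], 2)   # substrate, uM
I = np.repeat([0.0, 25.0], 5)                     # inhibitor, uM
v = competitive((S, I), 10.0, 50.0, 7.4) * rng.normal(1, 0.02, S.size)

(Vmax, Km, Ki), _ = curve_fit(competitive, (S, I), v, p0=(5.0, 30.0, 5.0))
print(f"Vmax = {Vmax:.2f}, Km = {Km:.1f} uM, Ki = {Ki:.2f} uM")
```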
Discussion In traditional Chinese medicine, it is common to combine two or more types of drugs in the clinic, which can make the treatment more efficient. The effects of drugs on the activity of CYP450 have drawn special attention in previous studies, as they provide a reference for the potential interactions between different drugs. Therefore, drug-drug interaction is one of the most important factors associated with the pharmacokinetics of drugs. Succinic acid is a major extract of amber, which is widely used in cardiovascular disease therapy. Succinic acid is likely to be co-administered with other drugs, which makes drug-drug interactions possible. The in vitro effect of succinic acid on the activity of CYPs was investigated in this study in pooled HLMs. Succinic acid inhibited the activity of CYP3A4, 2D6, and 2C9 in a concentration-dependent manner. Succinic acid acted as a non-competitive inhibitor of CYP3A4 and a competitive inhibitor of CYP2D6 and 2C9; the latter might be explained by a structural similarity between succinic acid and the substrates of CYP2D6 and 2C9. Moreover, the inhibition of CYP3A4 was affected by the incubation time. These results indicated a potential drug-drug interaction between succinic acid and drugs metabolized by CYP3A4, 2D6, and 2C9; the administered dose and the incubation time are two additional vital factors that affect such drug-drug interactions. CYP1, CYP2, and CYP3 are the three major CYP450 families responsible for the metabolism of most drugs (Xu et al. 2018). CYP3A4 is one of the most important enzymes in the CYP3A family and is involved in the metabolism of about 50% of drugs (Basheer and Kerem 2015; Srinivas 2016). The inhibition of CYP3A4 could induce drug toxicity or treatment failure. The inhibition of CYP3A4 by succinic acid indicates that succinic acid should be used carefully when combined with drugs metabolized by CYP3A4; the intrinsic clearance rate may be affected by this kind of inhibition, which can predict the clearance of drugs in the clinic (Chao et al. 2010). Although CYP2D6 and 2C9 account for only a small fraction of the CYP450s expressed in the liver, they still participate in the metabolism of various drugs, and CYP2D6 is known for its genetic polymorphism (de Groot et al. 2009). The inhibition of CYP2D6 and 2C9 in the poor metabolizer population may be harmful, since it may significantly increase drug concentrations in the body, leading to severe adverse effects (Lymperopoulos et al. 2015). A limitation of this study is that in vivo experiments are lacking, which would be beneficial for an extensive understanding of the clinical application of succinic acid and its co-administration with other drugs. In addition, the effect of succinic acid on other enzymes or proteins that play roles in the metabolism, transformation, or transport of drugs, such as P-gp and UDP-glucuronosyltransferases (UGTs), was not examined (Romand et al. 2017; Mano et al. 2018; Yang et al. 2018; Vrba et al. 2020). In preclinical studies to evaluate pharmacokinetics and toxicity, animal models have commonly been used in place of humans (Shi et al. 2016). Therefore, further studies should pay more attention to the in vivo interactions and to the effects on the activity of P-gp and UGT.
Conclusions An in vitro inhibitory effect of succinic acid on the activity of CYP3A4, 2D6, and 2C9 was found in this research. The concentration of the drug and the incubation time are two important factors that influence the activity of the CYPs. These results indicate a potential interaction between succinic acid and drugs metabolized by CYP3A4, 2D6, and 2C9, but further in vivo studies are needed to verify these potential drug-drug interactions.
2,879.8
2020-01-01T00:00:00.000
[ "Biology", "Chemistry", "Medicine" ]
Image Hiding in Stochastic Geometric Moiré Gratings: An image hiding scheme based on stochastic moiré gratings is proposed, discussed, and illustrated in this paper. The proposed scheme is based on a counter-intuitive optical feature of specially designed stochastic moiré gratings: images that are similar in the static mode become very different in the time-averaged mode. A soft computing particle swarm optimization (PSO) algorithm was used for the construction of the stochastic gratings. Complex computational algorithms were required to construct the cover image; however, the decryption process is completely visual. The cover image must oscillate in a predefined direction and at a predefined amplitude (the amplitude of the harmonic oscillation is one of the parameters of the proposed image hiding scheme). Computational experiments were used to demonstrate the efficacy of this optical image hiding scheme based on stochastic moiré gratings.
Introduction Geometric moiré is a classical optical experimental technique used for the experimental identification of displacements, strains, etc., from patterns of moiré fringes [1][2][3][4]. Geometric moiré is classified into double exposure geometric moiré and time-averaged geometric moiré [1,2]. Both optical techniques require the formation of moiré gratings on the surface of a deformable object. Double exposure geometric moiré uses the superposition of two images: the first image of the moiré grating is taken in the state of equilibrium; the second image of the grating is taken in a deformed state. Two static moiré gratings yield a two-dimensional pattern of fringes used to describe the spatial field of in-plane deformations. Time-averaged geometric moiré techniques use exposure times such that many oscillation periods fit into the averaging interval. Naturally, time-averaged geometric moiré techniques are commonly used for vibrating deformable bodies [2]. Two main goals exist in the analysis of moiré patterns (generated by double exposure or time-averaged techniques). The first goal deals with the interpretation of experimentally obtained patterns of moiré fringes; those patterns are used to reconstruct the fields of displacements, strains, etc., at the centerlines of the moiré fringes [2][3][4]. The other goal is associated with the synthesis of a predefined target pattern of moiré fringes [5,6]. Note that there is a tight link between both tasks, as each task gives insight into the other. Recently, moiré techniques have been successfully exploited in different image hiding applications. For example, an image encryption technique based on embedding the secret image into the geometric moiré pattern is discussed in [7]. Unfortunately, with this method, rough details of the secret image may still be visible in the deformed pattern of moiré grating lines, even to the naked eye. To address this issue, an improved algorithm for image encryption and decryption that uses stochastic moiré grating manipulations is presented in [8]. In this approach, the secret information is hidden by deforming a stochastic moiré grating according to the grayscale intensities of the secret image. Decryption is performed by correlating the pixel intensities in the encrypted image and the original stochastic moiré grating [8].
The relationship between the pitch of the moiré grating, the order of the time-averaged fringe, and the amplitude of the harmonic oscillations forms the basis of the dynamic visual cryptography (DVC) image hiding technique [9]. This term (DVC) originates from the concept of visual cryptography (VC), which was introduced in 1995 [10]. VC can be characterized by two basic features: complex computational algorithms are required to encode the secret image; however, a computer is not required to decode the image, as the secret is revealed and can be interpreted by the naked eye [10]. VC algorithms are used to split the secret image into two (or more) separate shares (usually printed on transparencies). The secret image emerges when those transparencies are aligned and stacked together [10]. DVC is not based on image sharing [9]. The secret dichotomous image is embedded into the moiré grating in such a way that the stationary cover image does not reveal any hidden information, but the secret can be decoded as a pattern of time-averaged moiré fringes when the cover image oscillates harmonically with a predefined amplitude a, and time-averaging techniques are used to register the time-averaged image [9]. The secret can be directly visually decoded by observing the oscillating cover image; the naked human eye is capable of integrating oscillating images over time and perceiving the embedded secret information only if the minimum frequency of the oscillations exceeds 25-28 Hz [11]. The main objective of this paper is to introduce stochastic moiré gratings into DVC schemes, which enables an increase in the security of the cover image. The main advantage of the proposed DVC scheme is based on the fact that the stochastic structure of the cover image ensures robustness against statistical algorithms. In other words, statistical analysis of the stationary cover image cannot reveal the encoded visual information. This paper is structured as follows. Section 2 discusses the one-dimensional Wada index, the governing equations of time-averaged geometric moiré, and the scheme for image hiding in harmonic gratings. The construction of the dynamic visual cryptography scheme based on near-optimal stochastic gratings is introduced in Section 3. Concluding remarks are presented in the last section.
The Wada Index for the Evaluation of the Grating Complexity It is common to evaluate the randomness of digital grayscale images using the Shannon entropy [12,13]. In [14], the Wada index, based on weighted and truncated Shannon entropy, was proposed for detecting Wada boundaries in phase plots of nonlinear dynamical systems. Moreover, the Wada index provides a numerical estimate of the number of different colors and their proportions in a two-dimensional phase space diagram, making it applicable not only in nonlinear dynamics but also in image processing in general [14]. In this study, the Wada index is used to estimate the number of different grayscale levels and their distribution in stationary and time-averaged moiré gratings.
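As a preview of the notation formalized next, the following Python sketch (our own illustration) computes, for each sliding 1D observation window of a grayscale row, the number of distinct levels, the two indicator functions, and the Shannon entropy on which the Wada index is built; the exact combination of these quantities into the index follows [14].

```python
# Our own sketch of the window-level quantities behind the Wada index:
# distinct grayscale levels, the indicator functions, and Shannon entropy.
import numpy as np

def window_stats(window):
    levels, counts = np.unique(window, return_counts=True)
    p = counts / counts.sum()
    entropy = -np.sum(p * np.log(p))       # Shannon entropy of the levels
    ind2 = int(levels.size >= 2)           # indicator function 1_2
    ind3 = int(levels.size >= 3)           # indicator function 1_3
    return levels.size, ind2, ind3, entropy

row = np.array([0, 0, 128, 128, 255, 128, 0, 0])   # a toy 1D grayscale image
s = 4                                              # observation window length
for k in range(row.size - s + 1):
    print(k, window_stats(row[k:k + s]))
```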
Let us list the notations required for the introduction of the one-dimensional (1D) Wada index, which is used to evaluate the complexity of a 1D digital image:
• The indicator function 1^(s)_2 is equal to 1 if the number of grayscale levels in the 1D observation window of length s is greater than or equal to 2 (and 0 otherwise).
• The indicator function 1^(s)_3 is equal to 1 if the number of grayscale levels in the 1D observation window is greater than or equal to 3 (and 0 otherwise).
• The Shannon entropy of the different grayscale levels in the 1D observation window: H^(s) = −Σ_j p_j ln p_j, where p_j denotes the proportion of pixels at the j-th grayscale level in the window.
The Wada index ω^(s)_1 in the 1D observation window of length s combines these quantities as defined in [14]. Let us denote the number of 1D observation windows covering the whole 1D digital image as N. Then, the Wada index W^(s)_1 for the whole 1D digital image is formulated, in accordance with [14], by aggregating the window-level quantities, where ω^(s)_{1,k} and 1^(s)_{2,k} (Equation (2)) denote the window Wada index and the indicator function in the k-th observation window.
Time-Averaged Geometric Moiré: Harmonic Grating The one-dimensional harmonic moiré grating M(x) in the state of equilibrium reads:
M(x) = 1/2 + (1/2) cos(2πx/λ),    (4)
where x is the one-dimensional spatial coordinate and λ is the pitch of the grating, indicating the distance between the centers of two adjacent white (or black) grating lines. Values of M(x) vary from 0 (which corresponds to the black color) to 1 (the white color); values from the interval [0, 1] represent different grayscale levels. Let us suppose that the moiré grating M(x) (Equation (4)) oscillates in time according to the harmonic function a sin(ωt + φ) in the direction perpendicular to the grating lines (a, ω, and φ stand for the amplitude, frequency, and phase of the harmonic oscillations, respectively). The brightness of the gray color at point x at time moment t₀ reads:
M(x, t₀) = M(x − a sin(ωt₀ + φ)).    (5)
If time-averaging techniques are used to register the image in Equation (5) for a sufficiently long exposure time T, the grayscale time-averaged image reads [2]:
M̄(x, a) = lim_{T→∞} (1/T) ∫₀^T M(x − a sin(ωt + φ)) dt = 1/2 + (1/2) cos(2πx/λ) J₀(2πa/λ),    (6)
where J₀ is the zero-order Bessel function of the first kind:
J₀(x) = Σ_{k=0}^∞ ((−1)^k/(k!)²) (x/2)^{2k}.    (7)
It is clear that the brightness of the grayscale color in time-averaged geometric moiré depends only on the amplitude of the harmonic oscillations; the oscillation frequency ω and the phase φ have no influence on the time-averaged M̄(x, a) (when T → ∞). The computational representation of M̄(x, a) (Equation (6)) can be implemented by computing the arithmetic average of M(x)|_{t=t_k} (k = 1, 2, 3, …) over one period of the harmonic oscillations (0 ≤ t_k < 2π) [9]:
M̄(x, a) ≈ (1/n) Σ_{k=1}^n M(x − a sin(t_k)),  t_k = 2πk/n.    (8)
The accuracy of a numerical reconstruction of the time-averaged image is determined by the number of discrete time steps n used in one period of oscillations (Equation (8)). The relationship between n and the number of time-averaged moiré fringes reconstructed with a predefined accuracy is given in [9]. Time-averaged moiré fringes become fully developed when
J₀(2πa/λ) = 0.    (9)
The relationship between the sequential number of the time-averaged fringe, the amplitude of the harmonic oscillations, and the pitch of the moiré grating reads:
a_i = (λ/(2π)) r_i,    (10)
where r_i denotes the i-th root of J₀ (Equation (7)); a_i is the amplitude of oscillations at the center of the i-th time-averaged fringe.
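Equations (6) and (8) can be checked numerically; the following sketch (our own, assuming numpy and scipy) compares the discrete time average over one period with the closed form containing the Bessel function J₀, at the amplitude of the first fully developed fringe.

```python
# Our own numerical check of Equations (6)-(8): the discrete time average
# converges to the closed form with the Bessel function J0.
import numpy as np
from scipy.special import j0, jn_zeros

lam = 0.5                                    # grating pitch
a = lam * jn_zeros(0, 1)[0] / (2 * np.pi)    # centre of the 1st fringe, ~0.1914
x = np.linspace(0.0, 2.0, 1000)
n = 256                                      # time steps per period, Eq. (8)

t = 2 * np.pi * np.arange(1, n + 1) / n
avg = np.mean([0.5 + 0.5 * np.cos(2 * np.pi * (x - a * np.sin(tk)) / lam)
               for tk in t], axis=0)
closed = 0.5 + 0.5 * np.cos(2 * np.pi * x / lam) * j0(2 * np.pi * a / lam)

print(np.max(np.abs(avg - closed)))          # ~machine epsilon: forms agree
```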
The graphical interpretation of Equation (10) is presented in Figure 1. One-dimensional time-averaged moiré gratings M(x, a) at increasing amplitudes of harmonic oscillations a are depicted in the left panel of Figure 1. Note that the top row of this image (at a = 0) coincides with the stationary moiré grating M(x). The stationary moiré grating M(x), its optical representation, and the Wada index W_1^(s) are depicted in Figure 1a. Note that 0 ≤ M(x) ≤ 1, but the brightness of pixels in the optical representation varies from 0 to 255. The panels representing Wada indexes in Figure 1 are constructed in such a way that the horizontal axis represents the length s of the 1D observation window. In other words, Figure 1 depicts the variation of the Wada index W_1^(s) at a = 0 (Figure 1a), at the middle of the first time-averaged moiré fringe (a = λr_1/(2π) ≈ 0.1914, Figure 1b), and in the zone between the first and the second time-averaged fringes (a = 0.3, Figure 1c).

Note that W_1^(s) is a monotonically increasing function of s. The wider the observation window, the larger the number of pixels with different brightness; the maximum value of W_1^(s) is therefore produced at the largest s. However, periodic moiré gratings (Figure 1a,c) already yield maximum values of W_1^(s) when the width of the observation window is equal to one period of the grating.

Image Hiding in Harmonic Gratings

Let us suppose that two moiré gratings M_1(x) and M_2(x) are given with two different pitches λ_1 = 0.5 and λ_2 = 0.57. Static moiré gratings M_1(x) and M_2(x) are depicted in Figure 2a,c. Optical representations of the static moiré gratings M_1(x) and M_2(x) are depicted in Figure 2b,d.

Both moiré gratings are oscillated harmonically with the same amplitude of oscillations a = 0.19 (Figure 2). The time-averaged moiré gratings M_1(x, 0.19) and M_2(x, 0.19) are shown in Figure 2a,c; their optical representations are shown in Figure 2b,d.

The amplitude of harmonic oscillations a = 0.19 is preselected in such a way that the pitch λ_1 = 0.5 satisfies Equation (10). The time-averaged image M_1(x, 0.19) becomes plain gray, and the time-averaged fringe becomes fully developed (Figure 2b).

However, the amplitude a = 0.19 and the pitch λ_2 = 0.57 do not satisfy the relationship given by Equation (10). The time-averaged moiré fringe is not fully developed, and the blurred original moiré grating is still visible in the resulting time-averaged image (Figure 2d).

The DVC encoding scheme is illustrated by a simple computational example in which a plain circle is embedded into the surrounding background (Figure 2e). The stochastic initial moiré phase algorithm and the algorithm for the regularization of moiré phases on the boundary lines between the secret and the background [9] are used to hide the "secret" circle in the background (Figure 2e). The naked eye cannot interpret the secret image in the stationary encoded image.

However, the secret image is leaked when the amplitude of harmonic oscillations is preselected in such a way that a fully developed moiré fringe is formed either in the background or in the area occupied by the secret image. The moiré grating M_1(x) is used to cover the background; the moiré grating M_2(x) covers the area occupied by the circle. Therefore, the background becomes plain gray, and the circle appears in the form of a not fully developed moiré fringe in the time-averaged image in Figure 2e.
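The exact weighting used in the Wada index of [14] is not reproduced here, so the following sketch implements a simplified window-based complexity measure in the same spirit: the Shannon entropy of each observation window, truncated by the indicator function, is converted into an effective number of grayscale levels and averaged over the windows. The function names and the aggregation rule are illustrative assumptions:

```python
import numpy as np

def window_complexity(window):
    """Effective number of grayscale levels in a window: exp of the Shannon
    entropy, truncated to zero when the window holds fewer than two levels
    (the indicator function 1_2)."""
    _, counts = np.unique(window, return_counts=True)
    if len(counts) < 2:                 # indicator 1_2^(s) = 0
        return 0.0
    p = counts / counts.sum()
    return float(np.exp(-np.sum(p * np.log(p))))

def wada_like_index(image, s):
    """Average window complexity over all non-overlapping windows of length s
    (a simplified stand-in for W_1^(s))."""
    windows = [image[k:k + s] for k in range(0, len(image) - s + 1, s)]
    return float(np.mean([window_complexity(w) for w in windows]))

rng = np.random.default_rng(1)
random_grating = rng.integers(0, 256, 100)        # stochastic grating, 100 pixels
periodic = ((np.arange(100) // 5) % 2) * 255      # two-level grating, pitch of 10 pixels
for s in (5, 20, 100):
    print(s, wada_like_index(random_grating, s), wada_like_index(periodic, s))
```

On the random grating the measure grows with the window length, while the two-level periodic grating saturates at its effective number of colors once the window spans a full period, mirroring the behavior of W_1^(s) described for Figure 1.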
The image hiding scheme in harmonic gratings (the DVC scheme based on harmonic moiré gratings) can be characterized by two essential features. First, the secret image should be unrecognizable to the naked eye in the stationary cover image. Second, the secret image should be interpretable to the naked eye when the cover image oscillates and the amplitude of harmonic oscillations is set to a given value (the amplitude of harmonic oscillations is one of the keys used for decoding the secret) [9].

Time-Averaged Random Moiré Grating

The two moiré gratings M_1(x) and M_2(x) used to construct the DVC scheme in Figure 2 are harmonic functions. Moreover, although the naked eye cannot interpret the embedded secret in the static cover image, advanced statistical algorithms could be exploited to check the variations in the period of the pixel brightness in each row of the digital cover image. The robustness of the DVC scheme to statistical algorithms can be enhanced by using non-harmonic oscillations [15], near-optimal periodic gratings [16], chaotic oscillations [17], and deformable periodic gratings [18].

Nevertheless, the periodicity of the moiré grating in DVC schemes remains a drawback from the point of view of robustness to steganalysis algorithms. Note that VC schemes are completely robust to steganalysis [10,19]. However, VC schemes are required to split the secret image into shares, and the fact that the secret image is split into separate shares raises serious security issues of its own. VC schemes suffer from the possibility of cheating: an eavesdropper who obtains information about one of the shares can generate other shares in order to produce fake secret images [19]. Special advancements in VC schemes help to lower the cheating probability, but it remains an issue related to the security of those schemes [20,21].

Therefore, the ability to exploit random non-periodic moiré gratings in DVC schemes remains an attractive objective, which could raise the security of those schemes to another level.

Time-averaged patterns produced by random moiré gratings are discussed in [22]. Time-averaging effects produced by random moiré gratings are illustrated in Figure 3. One-dimensional time-averaged random moiré gratings M(x, a) at increasing amplitudes of harmonic oscillations a are depicted in the left panel of Figure 3. Note that the top row of this image (at a = 0) coincides with the stationary random moiré grating M(x). The stationary random moiré grating M(x), its optical representation, and the Wada index W_1^(s) are depicted in Figure 3a. The time-averaged random moiré grating M(x, a) at a = 0.4, its optical representation, and the Wada index W_1^(s) are depicted in Figure 3b. The higher the amplitude of the harmonic oscillations, the larger the motion-induced optical blur, and the lower the maximum value of the Wada index W_1^(s) in the time-averaged image (Figure 3b,c).

Dynamic Visual Cryptography Based on Stochastic Gratings

3.1. One-Dimensional Stochastic Gratings for Image Hiding Applications

As mentioned previously, all available DVC schemes exploit the relationship between the pitch of the periodic moiré grating, the parameters of oscillations, and the order of the time-averaged fringe to hide the secret in a single cover image. Such DVC schemes are based on phase regularization and random initial phase shifting algorithms [9]. The secret information is leaked as a pattern of time-averaged moiré fringes when the cover image oscillates at a predefined amplitude, and time-averaging techniques (or the naked eye) are used to register the time-averaged image [11].
The main objective of this paper is to introduce stochastic moiré gratings into DVC schemes. The apparent simplicity of the problem is truly misleading. The stationary cover image must remain uninterpretable (the secret image should not be leaked directly from the stationary image), while the secret image should be leaked in the time-averaged image when the cover image oscillates at a pre-set amplitude of harmonic oscillations. Clearly, phase regularization and random initial phase shifting algorithms are not applicable to stochastic moiré gratings (the concept of the phase is lost in a set of pixels with random brightness). Moreover, the changes in the contrast of stochastic images affected by motion blur are completely different from the mathematical point of view. As discussed in the previous section, DVC schemes with harmonic gratings are based on Bessel functions and the interplay of their roots. A completely new nomenclature has to be introduced in order to describe (and control) motion-induced blur for stochastic gratings.

Recently, the Wada index was introduced to measure the randomness of digital images that are more complex than the fractal dichotomous basin boundaries of nonlinear dynamical systems [14]. Since the stochastic DVC scheme is based on the manipulation of pixels in separate rows (columns) of the cover image, the concept of the Wada index needs to be adapted to one-dimensional sets of pixels (Section 2.1).

The proposed DVC scheme based on stochastic moiré gratings is also based on the interplay between two gratings, where one grating is used to encode the background and the other grating is used to encode the secret. Unfortunately, as mentioned previously, the concepts of the phase and the pitch are lost in stochastic moiré gratings. A completely new strategy is required to design those two stochastic gratings.

The basic principle of the DVC scheme must hold true. The first requirement is that the naked eye should not be able to interpret a secret embedded in the cover image. The second requirement is that the secret should be optically leaked from the time-averaged image when the cover image oscillates at a predefined amplitude of harmonic oscillations.

We will use the Wada index W_1^(s) to measure the randomness of the stochastic moiré gratings. The relationship between W_1^(s) and the size s of the one-dimensional observation window helps reveal the complexity of the stochastic grating across multiple scales. This is absolutely necessary for the successful implementation of the two DVC requirements, since the pitch of the stochastic grating is not defined.

In fact, it would be easy to construct stochastic moiré gratings if both DVC requirements were formulated in the opposite way. It is easy to construct stochastic gratings in which the secret is leaked in the stationary cover image; the motion-induced optical blur in the time-averaged images would then equalize both gratings, and the secret would not be recognizable in the time-averaged mode.

However, the formulated DVC requirements pose a serious challenge from both theoretical and computational perspectives. The Wada indexes of the two stochastic gratings should be almost identical in the static mode, while being as different as possible for the same stochastic gratings in the time-averaged mode.
Clearly, manual manipulation of the brightness of pixels cannot be expected to produce stochastic gratings that satisfy both DVC requirements simultaneously. High-end machine learning algorithms should be employed for this purpose. Moreover, the application of machine learning algorithms, including the cost functions and the system of constraints, is highly nontrivial. Special additional requirements for the stochastic gratings should be formulated and efficiently implemented.

Requirements for Stochastic Gratings

Requirements for the two stochastic gratings M_1(x) and M_2(x) are formulated in this section. Grating M_1(x) will be used for the background; grating M_2(x) will be used in the areas occupied by the secret information.

Requirements for M_1(x)

1. The standard deviation of the stationary grating M_1(x) is as high as possible. This requirement is necessary to ensure that the brightness range of pixels used to construct the grating is as large as possible.
2. The mean and the standard deviation are approximately the same in any segment of the stationary grating M_1(x). This requirement is necessary to ensure the consistency of the grating.
3. The difference between the brightness of each pixel of the time-averaged grating M_1(x, a) and the value 127.5 is as small as possible. This requirement ensures that the time-averaged image of the stochastic grating closely resembles a plain gray image.

Requirements for M_2(x)

As mentioned previously, the static stochastic image represented by M_2(x) should be as close as possible to the image represented by M_1(x). On the other hand, the time-averaged image of M_2(x) must be as different as possible from the time-averaged image of M_1(x).

1. The stationary grating M_2(x) should be as similar as possible to M_1(x).
2. The mean of the stationary grating M_2(x) is approximately the same as the mean of M_1(x). Otherwise, the secret will be clearly visible in the static cover image.
3. The standard deviation of the stationary grating M_2(x) is approximately the same as the standard deviation of M_1(x). This requirement is also crucial for hiding the secret in the cover image.
4. The Wada index of the stationary grating M_2(x) is approximately the same as the Wada index of M_1(x). This requirement defines the similarity of the secret and the background across multiple scales of the observation window.
5. The standard deviation of the time-averaged grating M_2(x, a) is as high as possible. This requirement ensures that the area occupied by M_2(x) is not transformed into a plain gray image in the time-averaged mode.
6. The mean, the standard deviation, and the Wada index are approximately the same in any segment of the time-averaged grating M_2(x, a). This requirement ensures the consistency of the secret image in the time-averaged mode.

The Formulation of the Cost Functions

Let us consider that the spatial coordinate x is bounded by the limits of the one-dimensional observation interval 0 ≤ x ≤ L. Moreover, let us assume that this observation interval comprises n grayscale pixels, and that the coordinate x_i, 1 ≤ i ≤ n, is the center point of the i-th pixel. Let us divide the observation interval into q segments such that each segment comprises exactly ν pixels (qν = n).
Notations of Statistical Characteristics

The following notations of the statistical characteristics used in the construction of the cost functions are introduced:

• The average brightness of the moiré grating M_1(x) over the entire observation interval:
$m^{(1)} = \frac{1}{n}\sum_{i=1}^{n} M_1(x_i)$. (11)
• The average brightness of M_1(x) in the k-th segment:
$m^{(1,k)} = \frac{1}{\nu}\sum_{i=(k-1)\nu+1}^{k\nu} M_1(x_i)$. (12)
• The standard deviation of the brightness of M_1(x) over the entire observation interval:
$\sigma^{(1)} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(M_1(x_i) - m^{(1)}\right)^2}$. (13)
• The standard deviation of the brightness of M_1(x) in the k-th segment:
$\sigma^{(1,k)} = \sqrt{\frac{1}{\nu}\sum_{i=(k-1)\nu+1}^{k\nu}\left(M_1(x_i) - m^{(1,k)}\right)^2}$. (14)
• The average brightness of M_1(x, a) over the entire observation interval:
$\bar{m}^{(1)} = \frac{1}{n}\sum_{i=1}^{n} M_1(x_i, a)$. (15)
• The average brightness of M_1(x, a) in the k-th segment:
$\bar{m}^{(1,k)} = \frac{1}{\nu}\sum_{i=(k-1)\nu+1}^{k\nu} M_1(x_i, a)$. (16)
• The standard deviation of the brightness of M_1(x, a) over the entire observation interval:
$\bar{\sigma}^{(1)} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(M_1(x_i, a) - \bar{m}^{(1)}\right)^2}$. (17)
• The standard deviation of the brightness of M_1(x, a) in the k-th segment:
$\bar{\sigma}^{(1,k)} = \sqrt{\frac{1}{\nu}\sum_{i=(k-1)\nu+1}^{k\nu}\left(M_1(x_i, a) - \bar{m}^{(1,k)}\right)^2}$. (18)
• The fourth moment of the brightness of M_1(x, a) about the mid-gray level over the entire observation interval:
$\mu_4^{(1)} = \frac{1}{n}\sum_{i=1}^{n}\left(M_1(x_i, a) - 127.5\right)^4$. (19)

The Formulation of the Cost Function F_1

The formulation of the cost function F_1(M_1) (to be minimized) for the stochastic moiré grating M_1(x) follows the set of requirements defined in Section 3.2.1:

$F_1(M_1) = k_1\left(\frac{127.5}{\sigma^{(1)}} - 1\right) + k_2\sum_{k=1}^{q}\left[\left(m^{(1)} - m^{(1,k)}\right)^2 + \left(\sigma^{(1)} - \sigma^{(1,k)}\right)^2\right] + k_3\,\mu_4^{(1)}$, (20)

where k_1, k_2, k_3 are real positive weighting coefficients.

The larger the standard deviation of the 1D image M_1(x) in the state of equilibrium, the smaller the term 127.5/σ^(1) − 1. Note that the highest possible standard deviation of a digital image is equal to 127.5; such a situation occurs when half of all pixels in the observation interval are black and the other half are white. The terms (m^(1) − m^(1,k))^2 and (σ^(1) − σ^(1,k))^2 tend to zero when the mean brightness and the standard deviation in all segments become approximately the same. The term μ_4^(1) is close to zero when the time-averaged grating M_1(x, a) is continuously gray (the brightness of all pixels is close to 127.5). The power of 4 in Equation (19) prevents large fluctuations around 127.5.

The Formulation of the Cost Function F_2

According to the requirements listed in Section 3.2.2, the stationary stochastic grating M_2(x) should not differ substantially from M_1(x) (otherwise the secret would leak in the cover image). Therefore, M_2(x) is constructed as a perturbation of M_1(x). The perturbation is implemented in the form of additive noise:

$M_2(x_i) = M_1(x_i) + \varepsilon_i$, (21)

where ε is the array of integer corrections, ε_i ∈ [−C, C], and C defines the magnitude of the perturbation. The vector of corrections ε is constructed in such a way that the second cost function (to be minimized) reads:

$F_2(M_1, \varepsilon) = k_1\,\frac{1}{n}\sum_{i=1}^{n}|\varepsilon_i| + k_2\left[\left(m^{(1)} - m^{(2)}\right)^2 + \left(\sigma^{(1)} - \sigma^{(2)}\right)^2 + \left(W^{(1)}_{\max} - W^{(2)}_{\max}\right)^2\right] + k_3\,\frac{1}{\bar{\sigma}^{(2)}} + k_4\sum_{j=1}^{q}\left[\left(\bar{m}^{(2)} - \bar{m}^{(2,j)}\right)^2 + \left(\bar{\sigma}^{(2)} - \bar{\sigma}^{(2,j)}\right)^2 + \left(W^{(2)}_{\max} - W^{(2,j)}_{\max}\right)^2\right]$, (22)

where k_1, k_2, k_3, and k_4 are real positive weighting coefficients, and W^(i)_max denotes the maximal Wada index of the corresponding grating (W^(2,j)_max is that of the j-th segment of the time-averaged grating). The proportional weights of the terms at k_2, k_3, and k_4 are similar. Therefore, the constraint k_2 = k_3 = k_4 is set in further computations (which also helps to reduce the computational complexity of the optimization process). The corrections ε are minimized because the stationary stochastic grating M_2(x) should not be very different from M_1(x).

It is clear that lower values of the correction vector ε result in a lower value of the term (1/n)∑|ε_i| in the cost function F_2(M_1, ε). The terms (m^(1) − m^(2))^2, (σ^(1) − σ^(2))^2, and (W^(1)_max − W^(2)_max)^2 tend to zero when the means, standard deviations, and maximal Wada indexes of the gratings M_1(x) and M_2(x) become similar.

Analogously, the higher the standard deviation σ̄^(2), the smaller the value of the term 1/σ̄^(2). Note that a higher value of σ̄^(2) is required to ensure that the area occupied by M_2(x) is not transformed into a plain gray image in the time-averaged mode.
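A compact sketch of how the two cost functions could be evaluated in practice is given below. It follows the structure of Equations (20)-(22) but is an illustrative reconstruction rather than the authors' implementation: the segment count, the default weighting coefficients, and the omission of the Wada-index terms in F_2 are simplifying assumptions.

```python
import numpy as np

def time_average(grating, a, n_steps=64):
    """Time-average a 1D grating (brightness 0..255) oscillating harmonically
    with amplitude a (in pixels). Edge values are clamped by np.interp."""
    copies = []
    for k in range(n_steps):
        shift = a * np.sin(2.0 * np.pi * k / n_steps)
        x = np.arange(len(grating)) - shift
        copies.append(np.interp(x, np.arange(len(grating)), grating))
    return np.mean(copies, axis=0)

def f1(m1, a, q=10, k1=1.0, k2=1.0, k3=1.0):
    """Illustrative F1: high contrast, segment consistency, gray time average.
    len(m1) must be divisible by q."""
    seg = m1.reshape(q, -1)
    ta = time_average(m1, a)
    contrast = 127.5 / m1.std() - 1.0                    # large std -> small term
    consistency = np.sum((m1.mean() - seg.mean(axis=1)) ** 2
                         + (m1.std() - seg.std(axis=1)) ** 2)
    grayness = np.mean((ta - 127.5) ** 4)                # time average near mid-gray
    return k1 * contrast + k2 * consistency + k3 * grayness

def f2(m1, eps, a, k1=1.0, k2=1.0):
    """Illustrative F2: small perturbation, similar static statistics,
    high standard deviation of the time-averaged perturbed grating."""
    m2 = np.clip(m1 + eps, 0, 255)
    ta2 = time_average(m2, a)
    static_similarity = (m1.mean() - m2.mean()) ** 2 + (m1.std() - m2.std()) ** 2
    return (k1 * np.mean(np.abs(eps)) + k2 * static_similarity
            + 1.0 / (ta2.std() + 1e-9))                  # keep time-averaged std high
```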
The terms (m̄^(2) − m̄^(2,j))^2, (σ̄^(2) − σ̄^(2,j))^2, and (W^(2)_max − W^(2,j)_max)^2 tend to zero when the mean brightness, the standard deviation, and the maximal Wada index in all segments of the time-averaged image M_2(x, a) become similar.

Evolutionary Algorithms for the Optimization of Stochastic Gratings for Image Hiding Applications

It is clear that the complexity of the cost functions (Equations (20) and (22)) is so high that deterministic optimization algorithms should be replaced by evolutionary optimization algorithms. Since the particle swarm optimization (PSO) algorithm is widely considered to be a robust optimization algorithm with good convergence properties [23], the further construction of stochastic gratings is performed by means of PSO (though other soft computing algorithms are also used in image steganography [24]). PSO is an algorithm inspired by the social behavior of animal groups [23,25]. The initial version of the algorithm was proposed by J. Kennedy and R. Eberhart in 1995 [26]. Since then, different modifications and new versions of the PSO algorithm have been introduced [27].

A new parameter, called the inertia weight, incorporated into the original particle swarm optimizer, improved the performance of the original technique [28]. The constriction coefficient introduced into the PSO algorithm enhances the convergence speed and improves the balance between global exploration and local exploitation [29]. Sine-cosine acceleration coefficients are introduced into the particle swarm optimizer in [30]. A proportional factor based on the Nash equilibrium is incorporated into the PSO in [31].

Numerous new learning and searching strategies for different applications have been incorporated into the PSO optimizer as well. The authors of [32] propose selectively informed PSO, in which the particles choose different learning strategies based on their connections. A new version of PSO based on several initializations in different zones of the search space (using charged particles) is presented in [33]. A level-based multi-strategy learning swarm optimizer for large-scale multi-objective optimization is introduced in [34]. A multi-swarm particle swarm optimization algorithm using an adaptive factor selection strategy is described in [35].

The constricted PSO algorithm was employed in the further investigations. Note that the constricted PSO enables the avoidance of early convergence and swarm explosion [23,25]. Information about each particle i is stored in n-dimensional vectors P_i, Q_i, and V_i, representing the current location, the previous best position, and the current velocity of the particle, respectively. Vector Gb denotes the position of the best particle. In each j-th iteration, particle i updates its position and velocity according to the following iterative equations [25,27,29]:

$V_i^{(j+1)} = \chi\left[V_i^{(j)} + c_1\varphi_{1j}\left(Q_i^{(j)} - P_i^{(j)}\right) + c_2\varphi_{2j}\left(Gb^{(j)} - P_i^{(j)}\right)\right],\qquad P_i^{(j+1)} = P_i^{(j)} + V_i^{(j+1)}$, (23)

where φ_1j and φ_2j are random numbers uniformly distributed in (0, 1); coefficient c_1 pulls the particle towards the position where it has the best fitness, while c_2 propels the particle towards the current best particle; χ is the constriction coefficient:

$\chi = \frac{2\kappa}{\left|2 - c - \sqrt{c^2 - 4c}\right|},\qquad c = c_1 + c_2 > 4$. (24)

Parameter κ controls the exploitation and exploration abilities of the swarm. If κ is close to 0, the convergence is faster, with local exploitation; if κ approaches 1, the convergence is slow, with a higher degree of exploration [27].

We set c = 4.1, as convergence is quick and guaranteed at c > 4 [23,27]. The parameters c_1 and c_2 are set to be equal (c_1 = c_2 = 2.05) [27,36]. We use κ = 0.7 in our computations. Thus, the value of the constriction coefficient is set to χ ≈ 0.5109.
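A minimal, self-contained sketch of the constricted PSO described above is given below, using the quoted parameter values (c_1 = c_2 = 2.05, κ = 0.7, hence χ ≈ 0.5109). The bounds, swarm size, and clipping rule are assumptions of this sketch; any scalar cost function can be plugged in as `cost`:

```python
import numpy as np

def constricted_pso(cost, dim, n_particles=50, n_iter=200, lo=0.0, hi=255.0, seed=0):
    rng = np.random.default_rng(seed)
    c1 = c2 = 2.05
    c = c1 + c2                                           # 4.1 > 4 guarantees convergence
    kappa = 0.7
    chi = 2.0 * kappa / abs(2.0 - c - np.sqrt(c * c - 4.0 * c))   # ~0.5109

    p = rng.uniform(lo, hi, (n_particles, dim))           # current positions P_i
    v = np.zeros((n_particles, dim))                      # current velocities V_i
    q = p.copy()                                          # personal best positions Q_i
    q_cost = np.array([cost(x) for x in p])
    gb = q[np.argmin(q_cost)].copy()                      # global best position Gb

    for _ in range(n_iter):
        phi1 = rng.random((n_particles, dim))
        phi2 = rng.random((n_particles, dim))
        v = chi * (v + c1 * phi1 * (q - p) + c2 * phi2 * (gb - p))   # Equation (23)
        p = np.clip(p + v, lo, hi)
        costs = np.array([cost(x) for x in p])
        improved = costs < q_cost
        q[improved], q_cost[improved] = p[improved], costs[improved]
        gb = q[np.argmin(q_cost)].copy()
    return gb, q_cost.min()
```

For instance, `constricted_pso(lambda v: f1(np.round(v), a=5), dim=100)` would search for a near-optimal 100-pixel grating under the illustrative F_1 sketched above.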
Let us consider a stochastic grating consisting of 100 pixels. The computational optimization of the cost function F_1(M_1) in Equation (20) yields the near-optimal stochastic moiré grating M_1(x) (Figure 4a). Unsurprisingly, the time-averaged image M_1(x, 5) is almost a plain image (Figure 4a). Note that the scale of x and the amplitude of oscillations in M_1(x, 5) are defined in terms of discrete pixels (the amplitude is set to cover the distance equal to five adjacent pixels).

The variation of W_1^(s) for the static stochastic grating M_1(x) is almost linear with respect to s and reaches 77.11 at s = 100 (Figure 4c). However, the Wada index W_1^(s) for the time-averaged grating M_1(x, 5) only reaches 4.94 (Figure 4c). Such an effect is caused by the blur induced by the harmonic oscillations of the time-averaged image.

The optimization of the cost function F_2(M_1, ε) in Equation (22) yields the near-optimal perturbed stochastic moiré grating M_2(x) (Figure 4d). However, M_2(x) is not transformed into a plain image in the time-averaged mode (Figure 4d). This is predetermined by the structure of the cost function F_2(M_1, ε): M_2(x) is similar to M_1(x), but the time-averaged effects produced by M_2(x) and M_1(x) are substantially different.

Note that W_1^(s) for M_2(x) is almost identical to that of M_1(x) in the static mode (Figure 4c,f). However, the indexes W_1^(s) for M_1(x, 5) and M_2(x, 5) are very different (panels (c) and (f) in Figure 4). Such optical effects build the foundation for the construction of the dynamic visual cryptography scheme based on stochastic moiré gratings. The optical representations of M_1(x) and M_1(x, 5) are depicted in Figure 4b. The optimal grating M_2(x) is shown in Figure 4d (µ = 127.9, σ = 66.2; the statistical characteristics of the time-averaged grating M_2(x, 5) read µ = 127.9, σ = 8.6); the optical representations of M_2(x) and M_2(x, 5) are depicted in Figure 4e.

In general, the optical effects illustrated in Figure 4 are strongly counterintuitive. Motion-induced blur normally makes time-averaged images more similar, not vice versa. This fact is illustrated in Figure 5, where two very different one-dimensional images (one sampled from the Baboon test image) become similar in the time-averaged mode; for the Baboon sample, the maximal Wada index drops from 57.1 in the static mode to 37.1 in the time-averaged mode. The DVC scheme developed in this paper is based on completely different optical effects than those demonstrated in Figure 5.

The DVC Scheme Based on Stochastic Moiré Gratings

Let us consider a two-dimensional dichotomous secret image of size n_y × n_x (with n_y rows, each containing n_x pixels). The main steps of the proposed scheme are shown in Figure 6. In the preprocessing step (STEP 1), two near-optimal stochastic gratings (M_1 and M_2) must be generated. Both gratings M_1 and M_2 are computed using the PSO algorithm (the detailed pseudo-code and flowchart of the PSO algorithm are described in [23]). Note that both near-optimal stochastic gratings M_1 and M_2 are constructed only once and can be reused to encode any secret dichotomous image.

The encoding of the two-dimensional secret dichotomous image (STEP 2) is implemented by a straightforward combination of the two-dimensional stochastic gratings M_1 and M_2. The pixels containing the secret information are covered by M_2; the background is covered by M_1.
The decoding of the secret information (STEP 3) can be performed visually [9,11] or by using computational simulation [9]. Visual decoding lies at the essence of DVC: complex algorithms are required to encode the secret, but the decoding of the secret does not require a computer. In our case, the cover image should oscillate in a given direction with a given amplitude; the naked eye is sufficient to interpret the secret image [11]. However, computational simulations can also be used to decode the secret. The additive superposition of a sufficient number of identical copies of the cover image, deflected from the state of equilibrium according to the harmonic motion law with a given amplitude, results in an interpretable secret image [9]. Note that all illustrations of the decoded secret images in this paper are constructed using computational decoding algorithms.

The preprocessing step appears to be the most computationally intensive stage of the encoding process. Let us suppose that the swarm size used in the PSO algorithm is preset to N particles. A comprehensive discussion about the swarm size can be found in [39] (we used N = 1000 in our computations). Five multiplications and four additions per particle per dimension are required for one iteration of the PSO algorithm to update the particle's velocity [40]. One addition operation per particle per dimension is also required to update the particle's position (Equation (23)) [40]. Overall, 10Nn_x arithmetic operations are required for the whole swarm of N particles with dimension n_x for one iteration of the PSO algorithm (for the one-dimensional grating). This number of operations results in an algorithmic complexity of O(n_x) per iteration for the one-dimensional stochastic grating. The computational complexity of one evaluation of the cost functions F_1(M_1) and F_2(M_1, ε) is likewise linear in n_x per iteration of the PSO algorithm (for the one-dimensional grating). Although the computations described in STEP 1 are time-consuming, the two-dimensional stochastic gratings are constructed only once and can then be reused for encoding any secret dichotomous image of size n_y × n_x. Computational decoding requires the arithmetic superposition of the encoded image over one period of harmonic oscillations, resulting in a complexity estimate of O(n_x).

The DVC scheme based on the stochastic moiré gratings is illustrated in Figure 7. The resolution of the cover image is set to 300 × 180 pixels. A total of 180 copies of non-identical near-optimal one-dimensional moiré gratings M_1(x) constitute the static image in panel (a) of Figure 7. The time-averaged image M_1(x, 5) yields an almost plain gray image (Figure 7a). Analogously, 180 copies of non-identical near-optimal one-dimensional moiré gratings M_2(x) constitute the static image in Figure 7b. The time-averaged image M_2(x, 5) in panel (b) is very different from M_1(x, 5) in panel (a).

The dichotomous secret image is depicted in panel (c). The static digital image in Figure 7a is used for the construction of the background of the cover image; the static digital image in Figure 7b is used to fill the regions occupied by the secret. The resulting static cover image is depicted in Figure 7d. Note that the secret is uninterpretable to the naked human eye (due to the optimization of M_1(x) and M_2(x)). However, the time-averaged cover image yields the secret (Figure 7d).
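STEP 2 and the computational variant of STEP 3 reduce to a few lines of array manipulation. The sketch below uses illustrative names and assumes NumPy arrays: `m1` and `m2` are precomputed 2D gratings of the cover size, `secret` is a boolean mask of the same size (True marks secret pixels), and the row-wise shifting uses linear interpolation as a simplifying assumption:

```python
import numpy as np

def encode(m1, m2, secret):
    """STEP 2: cover pixels take values from m2 inside the secret, m1 elsewhere."""
    return np.where(secret, m2, m1)

def decode(cover, a, n_steps=64):
    """STEP 3 (computational): additive superposition of harmonically deflected
    copies of the cover image over one period of oscillations."""
    rows, cols = cover.shape
    acc = np.zeros_like(cover, dtype=float)
    for k in range(n_steps):
        shift = a * np.sin(2.0 * np.pi * k / n_steps)
        x = np.arange(cols) - shift
        # shift every row by the same harmonic deflection and accumulate
        acc += np.vstack([np.interp(x, np.arange(cols), row) for row in cover])
    return acc / n_steps
```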
The Comparison between the Proposed Technique and Classical DVC Schemes

The proposed DVC technique based on stochastic gratings is compared with two classical DVC schemes. The static cover image produced by the DVC scheme based on harmonic moiré gratings [9] is depicted in Figure 8a. The static cover image produced by the DVC scheme based on dichotomous black-and-white gratings is shown in Figure 8b. Note that the embedded secret information is exactly the same as in Figure 7. Harmonic oscillations with amplitude a = 5 are used to decode the secret in both schemes (Figure 8a,b). The same pitches λ_1 = 2πa/r_1 = 13.06 (Equation (10)) and λ_2 = 0.9·λ_1 = 11.75 are used for the harmonic and the dichotomous gratings. Pitch λ_1 is used for the background image; pitch λ_2 is used for the areas occupied by the secret image. One of the main features used to characterize the quality of the decoded image in visual cryptography is the difference between the standard deviation of the pixels in the background of the decoded image and the standard deviation of the pixels in the areas of the secret image [41]. This difference in standard deviations defines the contrast and the visual interpretability of the decoded secret. The corresponding calculations are presented in Table 1. The contrast of the decoded image produced by the proposed scheme is lower compared to the classical DVC scheme based on the harmonic moiré grating. On the other hand, the major advantage of the proposed scheme is its robustness to statistical analysis algorithms (Table 1).

Concluding Remarks

A novel DVC scheme based on stochastic moiré gratings is proposed, discussed, and illustrated in this paper. The proposed scheme is based on a counterintuitive optical feature of time-averaged moiré gratings: images that are similar in the static mode become very different in the time-averaged mode.

Special cost functions and a complex system of constraints were developed for the computation of the moiré gratings used for the background and for the representation of the secret. The defined optimization problem is very complicated, and near-optimal soft computing algorithms are used for the construction of the two planar images.

The proposed DVC scheme retains the defining features of VC schemes. Complex computational algorithms are required for the construction of the cover image; however, a computer is not required to decrypt the embedded image. The decryption process is completely visual: the cover image must oscillate in a predefined direction and at a predefined amplitude (the amplitude of harmonic oscillations is one of the parameters of the proposed image hiding scheme). Computational experiments were used to demonstrate the efficacy of the proposed optical DVC scheme based on the stochastic moiré gratings.
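The interpretability measure used in Table 1 can be stated in one line. A minimal sketch, assuming `decoded` is the time-averaged cover and `secret` the boolean mask used during encoding (both NumPy arrays):

```python
def decoded_contrast(decoded, secret):
    """Difference between the pixel standard deviations of the secret area and
    the background of the decoded (time-averaged) image."""
    return abs(decoded[secret].std() - decoded[~secret].std())
```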
As mentioned in the introduction, classical DVC schemes are based on harmonic moiré gratings with slightly different pitches. The proposed DVC scheme based on stochastic moiré gratings provides a serious security advantage compared to DVC schemes based on harmonic gratings. It is interesting to observe that this advantage in the security of the scheme is achieved without compromising the quality of the decrypted secret in the time-averaged cover image. Moreover, the developed DVC scheme based on stochastic moiré gratings provides a foundation for different extensions and a large variety of experimental applications. DVC schemes based on stochastic moiré gratings and chaotic oscillations, as well as the building of experimental prototypes for testing the human visual system, remain definite objectives of future research.

Figure 1. Optical time-averaging effects induced by harmonic oscillations on a periodic moiré grating (λ = 0.5). Time-averaged moiré gratings are depicted in the left panel at increasing values of amplitude a (the numbering on the right boundary of the panel corresponds to the consecutive order of the time-averaged fringe). Time-averaged moiré gratings, their optical representations, and their Wada indexes W_1^(s) are shown at a = 0 (panel (a)), a = λr_1/(2π) (panel (b)), and a = 0.3 (panel (c)).

Figure 2. A schematic illustration of the DVC scheme based on two harmonic moiré gratings. Two moiré gratings M_1(x) and M_2(x) with slightly different pitches (depicted in panels (a,c)) are used to represent the background and the secret (the circle); their optical representations are shown in panels (b,d). The static and time-averaged cover image is shown in panel (e). The amplitude of harmonic oscillations a is chosen in such a way that the time-averaged moiré fringe is not fully developed in the area occupied by the secret image.

Figure 3. Harmonic oscillations of a random moiré grating produce optical motion-induced blur in the time-averaged image. Time-averaged random moiré gratings are depicted in the left panel at increasing values of amplitude a. Time-averaged random moiré gratings, their optical representations, and their Wada indexes W_1^(s) are shown at a = 0 (panel (a)), a = 0.4 (panel (b)), and a = 0.8 (panel (c)).

Figure 6. The main steps of the proposed scheme.
Figure 7. The DVC scheme based on the stochastic moiré gratings. The resolution of the cover image is set to 300 × 180 pixels; 180 copies of non-identical near-optimal one-dimensional moiré gratings M_1(x) constitute the static image in panel (a). The time-averaged image M_1(x, 5) yields an almost plain gray image (panel (a)); 180 copies of non-identical near-optimal one-dimensional moiré gratings M_2(x) constitute the static image in panel (b). The time-averaged image M_2(x, 5) in panel (b) is very different from M_1(x, 5) in panel (a). The dichotomous secret image is depicted in panel (c). The static cover image containing the embedded secret is shown in panel (d). The time-averaged image of the cover image yields the secret.

Figure 8. The stationary cover image and the decoded secret image produced by the different DVC schemes. The DVC scheme based on the harmonic moiré gratings is used to encode and decode the secret information in panel (a). The DVC scheme based on the dichotomous moiré gratings is depicted in panel (b).

Table 1. The comparison of the proposed scheme with the classical DVC schemes (the DVC scheme based on harmonic moiré gratings (Scheme A) and the DVC scheme based on dichotomous moiré gratings (Scheme B)).
9,923.2
2023-04-07T00:00:00.000
[ "Physics" ]
Wear of vibrating disc working bodies of smoothing machines The issues of wear of disc vibrating working bodies of smoothing machines are considered. It is shown that friction of the working body against the concrete mixture produces mainly abrasive and, to a lesser extent, oxidative wear. The amount of wear is proportional to the smoothing ability of the working body, the operating time, the specific pressure of the working body on the surface of the concrete mixture, and a coefficient taking the working conditions into account. It was found that the superposition of natural and forced vibrations of the working body on the smoothing process leads to significant changes in the magnitude and nature of disc wear. It was determined that the general one-sided wear pattern of the disc depends on the parameters of the forced vibrations generated by the working body. Localized zones with pronounced local wear of the disc were revealed; these zones reduce the quality of the surface treatment of the concrete mixture most significantly. A relationship between the localized wear zones and the natural vibrations of the metal structure of the portal of the smoothing machine is shown. It is suggested that the issues of vibration protection (vibration isolation), in the broad sense of controlling the dynamic state of the smoothing machine as a technological complex, are of decisive importance for preventing or significantly reducing local wear of the working body.

Introduction

In the works [1,2,15], the economic feasibility of using rigid concrete mixtures is justified: their use allows reducing the consumption of cement by 10-20% while maintaining the specified strength of concrete, shortening the hardening time, and increasing the durability of concrete products. At the same time, the surface treatment of products formed from rigid concrete mixtures involves applying significant dynamic loads to the treated medium. Studies [11,12,13] showed that the most promising way to improve the working processes of machines for the surface treatment of concrete products is an additional vibration effect of the working body of the machine on the treated surface. The effectiveness of vertical oscillations of the disc of the working body has been experimentally confirmed: in the concrete mix, the components are mechanically redistributed into the densest packing, and the sand-cement paste and laitance necessary to smooth out irregularities are displaced to the friction surface, whereby the surface quality is significantly improved. Over the last three decades, a number of authors, including those at the Department of Construction and Road Machinery of the Bratsk State University [6,10,14], have developed and studied a fairly large number of various vibration working bodies of smoothing machines, including disc ones. However, the proposed designs have not yet found wide practical application.

Materials and methods

A comparative review (not intended to be exhaustive) of concrete finishing machines offered on the Russian market shows that they are mainly represented by foreign manufacturers: Barikell (Italy), Masalta Engineering CO., LTD (China), Tremix LTD Tremix (Sweden), Schwamborn GmbH (Germany), etc. This equipment comprises self-propelled and manual, one- and two-disc (two-rotor) concrete finishing machines with smoothing disc diameters from 600 to 1600 mm.
The drive is electric or from an internal combustion engine, which provides rotation of the disc working body at 40-140 rpm. A hydraulic or mechanical transmission provides progressive movement of the working body along the treated surface; the working bodies themselves do not contain vibration exciters and, accordingly, do not exert an additional vibration effect on the treated surface. Among the main reasons why disc vibrating working bodies containing a kinematic or inertial vibration exciter in their design have not found sufficient application in the mass production of concrete finishing machines, the following can be noted: the large metal consumption of such working bodies, and the presence of a larger number of mating rubbing pairs and, accordingly, a shorter service life and additional maintenance costs. Another major reason that significantly reduces the operational properties of vibrating working bodies is their greater and often uneven wear, which negatively affects the quality of surface treatment of the concrete mixture.

The problems of wear of vibration-free working bodies of concrete finishing machines, including disc ones, are considered in the works of Professor A.V. Bolotny [4,5]. It was established that when the working body rubs against the concrete mixture, mainly abrasive and, to a lesser extent, oxidative wear occurs. Taking into account that the physical state of the mixture under the influence of the velocity gradient remains constant in the first approximation, and that the magnitude of deformations is relatively small, it was suggested that processes similar to those described in [16] take place. Under this assumption, the amount of abrasive wear И is proportional to the friction path and the specific pressure:

И = К_р · Р · S, (1)

where К_р = К_1·К_2·…·К_n are coefficients that take into account the influence of the material and surface temperature, the size of the gap between the working body and the surface, etc.; Р is the specific pressure of the working body on the surface; S is the friction path.

In its physical sense, the smoothing ability S_р0 of the working body is the length of the line along which the working body acts on an elementary area of the processed surface; for disc working bodies it can be determined by the dependence:

S_р0 = 2R·V_d / V_t, (2)

where V_d is the peripheral velocity of the disc, V_t is the translational velocity of the disc along the surface, and R is the disc radius.

At a known operating time T of the working body, the friction path is determined as:

S = V_d · T. (3)

In relation to the process of smoothing concrete mixtures, on the basis of measurements of the wear И of working bodies and the time T of their operation before reaching this wear, and knowing the specific pressure Р of the working body on the surface, dependence (1) allows setting the value of the coefficient К_р:

К_р = И / (Р · S). (4)

This allows predicting the amount of wear of working bodies of different sizes, made of different materials, when processing the surfaces of concrete mixtures of different stiffness. This approach, based on the measurements made and their subsequent analysis, allowed A.V. Bolotny to come to the following conclusions: it is advisable to use rigid disc working bodies with an easily removable steel rubbing surface; the linear wear of a steel disc working body is 1 mm per 30-40 hours of operation; the disc working body wears relatively evenly and provides stable surface treatment quality throughout its entire service life.
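A worked sketch of relations (1)-(4) is given below. It estimates К_р from the quoted benchmark (about 1 mm of linear wear per 30-40 h of operation of a steel disc without vibration) and extrapolates wear linearly. All operating values (velocities, radius, specific pressure) are assumed placeholders, and the forms of Equations (2)-(3) follow the reconstruction given above:

```python
def smoothing_ability(v_d, v_t, r):
    """S_p0 (Equation (2)): rubbing-line length per elementary surface area."""
    return 2.0 * r * v_d / v_t

def friction_path(v_d, t_s):
    """S (Equation (3)): friction path of the disc over the operating time."""
    return v_d * t_s

def wear(k_p, p, s):
    """Abrasive wear (Equation (1)): proportional to specific pressure and path."""
    return k_p * p * s

v_d, v_t, r = 2.0, 0.05, 0.4           # peripheral/translational speed (m/s), radius (m)
p = 500.0                               # specific pressure (Pa), assumed
t_ref = 35.0 * 3600.0                   # reference: 1 mm of wear after ~35 h

k_p = 0.001 / (p * friction_path(v_d, t_ref))    # Equation (4)
print(f"S_p0 = {smoothing_ability(v_d, v_t, r):.1f} m")
print(f"K_p  = {k_p:.3e} 1/Pa")
print(f"wear after 85 h: {1000 * wear(k_p, p, friction_path(v_d, 85 * 3600)):.2f} mm")
```

Under these placeholder values the linear model predicts about 2.4 mm of wear after 85 h, far below the 7.5 mm measured for the vibrating disc in the experiments reported below, which illustrates how strongly vibration accelerates wear.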
Research results

Long-term experience in the operation of disc working bodies has established that the superposition of natural or forced vibrations of the working body on the process of smoothing the surface of the concrete mixture leads to significant changes in the magnitude and nature of wear of the working body. Here, natural vibrations are understood as vibrations caused by malfunctions of the machine components (disc deformation, bending of the disc drive shaft) or by their limiting state (the presence of excessive gaps in the mating pairs), while forced vibrations are vibrations that occur when the disc interacts with the surface being processed.

Figure 1 shows the cross-sections of a disc that, before the start of operation, had a thickness of 8 mm; it was installed on the base smoothing machine of the bridge type SMR-13 and treated the surface of a concrete mixture with a stiffness of 60 s. Table 1 shows the results of wear measurements in sections 1-10 of the working body. The operating time to the measured wear was 85 hours. The maximum wear value of 7.5 mm in section 1, with its gradual decrease to a minimum value of 0.6 mm in section 10, indicates the one-sided nature of disc wear. At the same time, a linear wear of 1 mm in the presence of vibration effects is reached in 20-25 hours of operation, which almost halves the service life of the disc.

The appearance of forced vibrations is caused by a change in the friction force or friction coefficient under the disc and is not associated with the action of any external periodic disturbing force. Forced vibrations, in the absence of a constant source of energy for these vibrations, appear when smoothing products formed from rigid inhomogeneous concrete mixtures, the surface of which can have a significant number of bumps and depressions. These vibrations contribute to accelerated and uneven wear of the disc and of individual mechanisms of the machine, which in turn generates natural vibrations of the system and leads to even faster wear of the disc surface.

To date, various disc vibration working bodies have been developed, differing mainly in the parameters of the vibration effect on the treated surface and in the type of vibration exciter. However, there are practically no studies of the wear of such working bodies. The relevance of research in this area is due not only to the fact that the vibration effects imposed on the rotational and translational movements of the disc can almost halve its service life, but also to the fact that accelerated and uneven wear of the disc significantly reduces the quality of surface treatment.

It is known [4,5,6,10,12,14] that defect-free smoothing of the surface of the mixture is ensured under the condition that the continuity of the material flow in the boundary layer of the mixture is maintained during its flow under the working body. Evidently, the nature and degree of wear of the vibrating working body largely depend on the characteristics of the boundary layer flow in the zone of the vibration effects of the working body. Under the conditions of applied vibration effects, the shear flow begins only when the shear stress exceeds the limiting shear stress. After the limiting shear stress is overcome, the mobility of the boundary layer of the concrete mixture rapidly increases, and the layer turns into a heavy liquid with gradually decreasing viscosity.
Two components should be distinguished in this process: actual liquefaction, which occurs due to the displacement of free, physically unbound water to the surface of the mixture, and pseudo-liquefaction, which occurs due to the linearization of dry friction under the influence of vibration and is expressed in the replacement of dry friction with viscous friction.

To obtain information about the actual magnitude and nature of the wear of a disc vibration working body, measurements of the removable disc of such a working body were made according to the method of Professor A.V. Bolotny. The design of the disc working body was developed on the basis of the patent for invention No. 21825136 "Working body of the smoothing machine". Before the start of operation, the disc had the same thickness of 8 mm as previously described and was installed on the same base machine SMR-13. Concrete mixes with a stiffness of 60-65 s were treated. The smoothing ability of this working body was S_d = 40 m, and the vibration intensity J = A²ω³ was 25-30 m²/s³. The operating time T before reaching the measured wear was about 75 hours. Figure 2 shows the position of the disc sections, and Table 2 shows the results of measurements of disc wear in these sections of the working body.

Analysis of the obtained results, with a decrease from the maximum wear value of 7.1 mm in section 1 to the minimum of 0.9 mm in sections 9 and II, indicates the general one-sided nature of wear of the working body. It can be assumed that this type of wear depends on the parameters of the forced vibrations generated by the working body. A comparison of the results presented in Tables 1 and 2 indicates a slight decrease in the overall wear of the vibrating disc working body in sections 1-8, which in this case can be explained by the replacement of dry friction with viscous friction when the mixture flows under the working body. At the same time, localized zones with clearly pronounced local wear of up to 0.5 mm appear between sections 1-I, 2-I and 10-II, II-9. It can be assumed that the occurrence of the localized wear zones is due to the natural vibrations of the portal of the smoothing machine caused by insufficient vibration isolation of the working body from the main metal structure of the machine.

Manual smoothing machines occupy a significant share of the total number of concrete finishing machines in operation. The experience of their operation indicates two features: their use is most appropriate for the surface treatment of moderately rigid concrete mixtures with a stiffness not exceeding 40 s, and the specific pressure of the working body on the treated surface is constant, being determined by the weight of the machine distributed over the surface area of the working body. In accordance with the above procedure, the magnitude and nature of the wear of the disc working body (800 mm in diameter and 8 mm thick) of a manual vibration smoothing machine were estimated. The design of the machine follows the patent for invention No. 2147513 "Working body of a manual smoothing machine of an oscillating type". Concrete mixtures with a stiffness of 35 s were treated. The disc working body had a smoothing ability of … m, and the vibration intensity was 23-24 m²/s³. The operating time T before reaching the measured wear was 80 hours. Figure 3 shows the position of the disc sections; Table 3 shows the results of wear measurements in these sections of the working body.
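As a quick sanity check of the intensity figures quoted above, the snippet below evaluates J = A²ω³ for assumed amplitude-frequency pairs (the paper reports J but not the underlying A and ω, so these values are illustrative only):

```python
import numpy as np

def vibration_intensity(amplitude_m, omega_rad_s):
    """Vibration intensity J = A^2 * w^3, in m^2/s^3."""
    return amplitude_m ** 2 * omega_rad_s ** 3

A = 1.0e-3                      # 1 mm amplitude (assumed)
for f_hz in (40.0, 50.0):       # trial frequencies (assumed)
    w = 2.0 * np.pi * f_hz
    print(f"f = {f_hz:4.1f} Hz -> J = {vibration_intensity(A, w):.1f} m^2/s^3")
```

An amplitude of about 1 mm at 50 Hz yields J ≈ 31 m²/s³, of the same order as the quoted 25-30 m²/s³.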
The obtained results of the measurements of disc wear in sections 1-10 confirm the general one-sided nature of the wear of vibrating disc working bodies, caused by the contribution of the forced vibrations generated by the working body. However, a comparison of these results with the results shown in Table 2 indicates significantly lower values of the absolute wear of the working body in sections 1-10 for manual smoothing machines than for stationary ones. The latter can be explained by the fact that the specific pressure of the working body on the surface of the mixture in manual machines is significantly less than in stationary ones, and the process of linearization of dry friction under the influence of vibration and the replacement of dry friction with viscous friction proceeds faster due to the displacement of larger volumes of the liquid phase to the friction surface during the processing of moderately rigid mixtures. The one-sided nature of wear becomes more pronounced due to the additional localization of local wear in section V. In this case, the occurrence of the localized wear zones is also due to the natural vibrations of the manual machine body, which were clearly pronounced during its operation because of insufficient vibration isolation of the working body, structurally represented by elastic elements in the form of rubber seals. The latter emphasizes the need to take the vibration protection of the vibration working bodies of manual machines into account as a factor that significantly affects the magnitude and nature of their localized wear.

Summary

The results obtained and the assumptions made are preliminary and indicate the need for more extensive experimental studies of the wear of vibration working bodies with different geometric dimensions, over a sufficient range of their smoothing ability, when processing mixtures of different stiffness, and when varying the parameters of the vibration effect on the treated surface of the mixture. However, since localized areas of local wear most adversely affect the quality of surface treatment, it can be concluded that the issues of vibration protection and, more broadly, of controlling the dynamic state of smoothing machines as a technological complex are of fundamental importance for preventing or significantly reducing the magnitude of localized wear of the working body. In the general case, lower levels of natural vibrations of the bodies of manual smoothing machines and of the base structures of stationary machines (allowing for the differences in their mass-inertial characteristics, size, and operating conditions) can be achieved in different ways: for manual smoothing machines, through structural refinement of the vibration protection of the working body, and for stationary machines, through the installation of dynamic vibration absorbers on the main metal structure of the machine. The issues of vibration protection and control of the dynamic state of technological machines based on the structural theory of vibration protection systems are widely considered in the works [3,7,8,9]. However, the development of technical solutions for a particular type of vibrating working body operating as part of a smoothing machine, which has its own mass-inertia characteristics and its own spatial metric, requires additional theoretical and experimental research.
3,865.4
2021-01-01T00:00:00.000
[ "Engineering", "Materials Science" ]